Innovations In Clinical Neuroscience

2015 Abstracts of Poster Presentations

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience


[VOLUME 12, NUMBER 11–12, NOVEMBER–DECEMBER 2015, SUPPLEMENT C] Innovations in CLINICAL NEUROSCIENCE 11

Acute (short-term, acutely exacerbated subjects), Negative (focusing on negative symptoms), Maintenance (relapse prevention), and Non-acute (stable, non-acute, non-predominantly negative subjects). For each study type we calculated the proportion of IS (all 30 PANSS items identical across consecutive visits) and tested for significant differences between study types using the chi-square test.

Results: The dataset consisted of 53,941 visits (Acute: 11,804; Maintenance: 5,458; Negative: 17,179; and Non-acute: 19,500). The mean (SE) percentages of IS by study type were Acute: 2.21 (0.14); Maintenance: 2.89 (0.23); Negative: 6.19 (0.18); and Non-acute: 4.10 (0.14). The chi-square test (χ²(3)=306, p<0.001) indicated a significant association between study type and the presence of IS.

Conclusion: The highest proportions of IS occurred in the negative and non-acute studies, where raters might have anticipated less change. The high percentages of IS indicate that raters in a large number of cases failed to detect the expected changes in subjects' symptom severity. This may represent a serious problem for potential drug/placebo separation. Reasons for this inability to detect change may be multiple, ranging from the PANSS not being an appropriate scale for these types of studies, to inadequate understanding of PANSS items by raters, expectation bias, and even rating sloppiness or misconduct. In any instance where IS is present, the individual case should be carefully examined and a tailored remedial plan developed.

Disclosures: Dr. Kott and Dr. Daniel are full-time employees of Bracket Global, LLC.
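The chi-square comparison reported above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the per-type IS counts are reconstructed approximately from the reported visit totals and mean IS percentages, and scipy is assumed to be available.

```python
from scipy.stats import chi2_contingency

# Visits per study type (from the abstract) and approximate IS counts
# reconstructed from the reported mean IS percentages (illustrative only).
visits = {"Acute": 11804, "Maintenance": 5458,
          "Negative": 17179, "Non-acute": 19500}
is_pct = {"Acute": 2.21, "Maintenance": 2.89,
          "Negative": 6.19, "Non-acute": 4.10}

# Build a 4x2 contingency table: [IS visits, non-IS visits] per study type.
table = []
for study, n in visits.items():
    n_is = round(n * is_pct[study] / 100)
    table.append([n_is, n - n_is])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.0f}, p = {p:.3g}")
```

With these reconstructed counts the statistic comes out close to the reported χ²(3)=306, p<0.001.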
A methodology for evaluating clinical trial sites and raters based on performance data

Presenters: Miller D, Feaster HT, Allen S, Gratkowski H, Butler A
Affiliations: Bracket Global, LLC

Objective: A methodology was developed to evaluate historical experience and clinical outcome administration performance data as a mechanism to enhance site and rater selection.

Methods: A proprietary database of sites and raters who had participated in recent clinical trials (trailing 3 years) was compiled, with sites and raters required to have a minimum number of trials. The database included historical experience with rating scales, performance data on certification programs, and performance data from quality assurance programs implemented to ensure quality ratings during clinical trials. Quality assurance measures included surveillance methodologies. Each rater's experience, certification, and quality assurance measures were assigned weighted numerical values, and each rater at a site contributed to the site's overall score. Once sites were categorized, a clinical review was conducted of the data and rankings based on experience with sites and raters as well as overall performance.

Results: A total of 13,600 unique raters covering 2,195 research sites in 49 countries and across 21 different clinical trials were evaluated. Performance data included 27,277 scale administrations, resulting in 10,188 rater contacts for potential quality assurance issues; 4,352 of these contacts resulted in remedial action. A total of 2,198 sites were given a final classification based on those criteria: 1,223 (56%) were classified as "Recommended," 873 (40%) as "Moderately Recommended," and 102 (5%) as "Not Recommended." From the pool of evaluated sites, one site classified as "Not Recommended" was included in the trial.
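The weighted scoring and classification scheme described in the Methods could be sketched as below. The component weights, 0-100 score scale, and classification cutoffs are hypothetical illustrations; the abstract does not disclose the proprietary values.

```python
# Hypothetical sketch of the weighted rater/site scoring described above.
# Weights and thresholds are illustrative assumptions, not the actual
# proprietary values used by the authors.
WEIGHTS = {"experience": 0.4, "certification": 0.3, "quality": 0.3}

def rater_score(experience, certification, quality):
    """Combine a rater's component scores (each assumed on a 0-100
    scale) into a single weighted score."""
    return (WEIGHTS["experience"] * experience
            + WEIGHTS["certification"] * certification
            + WEIGHTS["quality"] * quality)

def classify_site(rater_scores):
    """Average the raters' scores at a site and map the result to a
    recommendation category using assumed cutoffs."""
    site_score = sum(rater_scores) / len(rater_scores)
    if site_score >= 75:
        return "Recommended"
    if site_score >= 50:
        return "Moderately Recommended"
    return "Not Recommended"

# Example: a site with two raters.
scores = [rater_score(80, 90, 70), rater_score(60, 75, 85)]
print(classify_site(scores))  # → Recommended
```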
A total of 77 sites classified as "Recommended" were selected, along with 20 "Moderately Recommended" sites and 25 sites that had not been previously evaluated.

Conclusion: Systematic tracking and evaluation of experience and performance data are routinely utilized to assist in clinical trial site feasibility processes. These processes frequently rely heavily on past patient recruitment and site data monitoring outcomes, and rarely proactively reference past clinical ratings performance data. Utilizing these data may be useful in identifying the highest quality clinical trial sites and raters to conduct future research programs.

STUDY PROTOCOL AND TRIAL METHODOLOGY

Individual PANSS items and study population drive the PANSS/CGI relationship

Presenters: Daniel D, Kott A
Affiliations: Bracket Global, LLC

Background: We have previously reported a mean aberration rate of 8.0 percent in the expected CGI-I/PANSS change correspondence in a review of 45,566 visits in 14 schizophrenia protocols conducted in 39 countries.

Objective: In the current analysis, we evaluated which PANSS items drive the CGI and how the CGI/PANSS relationship varies among acute, stable, and negative symptom clinical trial populations.

Methods: The data were drawn from thirteen multicenter schizophrenia clinical trials, comprising 41,576 data points (9,252 acute, 15,285 negative, 4,560 maintenance, and 12,479 non-acute). We analyzed the relationships between changes in CGI-S and the total PANSS, individual PANSS items, subscales, and factor scores using Spearman's rank correlations by type of study population. Fisher's Z test was utilized to evaluate differences between the obtained correlations.

Results: Statistically significant correlations were observed between change in individual PANSS items and change in CGI-S in all study types. In acute, maintenance, and non-acute studies, the strongest correlation with the change in CGI-S
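The Fisher's Z comparison of correlations named in the Methods is a standard procedure based on the Fisher r-to-z transformation. A minimal sketch follows; the correlation values and sample sizes in the example are hypothetical, not results from the study, and applying the test to Spearman correlations is a common approximation.

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Compare two independent correlation coefficients via the Fisher
    r-to-z transformation; returns (z statistic, two-sided p-value)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    return z, p

# Hypothetical example: a correlation of 0.70 in one population (n=9,252)
# versus 0.50 in another (n=15,285).
z, p = fisher_z_test(0.70, 9252, 0.50, 15285)
print(f"z = {z:.2f}, p = {p:.2g}")
```

At sample sizes this large, even modest differences between correlations reach significance, which is why the test is well suited to pooled multi-trial datasets like the one described.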
