Innovations In Clinical Neuroscience

2015 Abstracts of Poster Presentations

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience


Results: Data were derived from 27 raters across 27 centers, covering 137 patients over 217 visits. A statistically significant effect of feedback on rater accuracy was observed (ANOVA; p<0.0001). Based on a mixed model for repeated measures (with the number of errors logarithmically transformed), the number of errors per rater was 4.0 [95% CI, 2.7, 5.8] before feedback and 1.2 [1.0, 1.5] after feedback, a statistically significant reduction of 2.8 [1.7, 4.3] errors per visit per rater.

Conclusion: Although a causal relationship cannot be inferred without a concurrent control group, the results suggest a significant relationship between ongoing assessment feedback and rater performance.

Disclosures: All authors are full-time employees of Worldwide Clinical Trials and have no conflicts of interest.
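As an illustration of the kind of analysis described above, the following is a minimal sketch only. It assumes a long-format table with hypothetical columns errors, feedback (coded 0 before feedback, 1 after), and rater, and it simplifies the repeated-measures structure to a random intercept per rater; it is not the authors' actual analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_feedback_model(df: pd.DataFrame):
    """Mixed model on log-transformed error counts with a random intercept per rater."""
    df = df.copy()
    # Log-transform the error counts; the +1 offset (an assumption here) guards against zeros.
    df["log_errors"] = np.log(df["errors"] + 1)
    model = smf.mixedlm("log_errors ~ feedback", data=df, groups=df["rater"])
    result = model.fit()
    # Back-transforming the fixed effects gives approximate per-rater error counts
    # before (feedback == 0) and after (feedback == 1) feedback, comparable in spirit
    # to the 4.0 versus 1.2 figures reported above.
    before = np.exp(result.params["Intercept"]) - 1
    after = np.exp(result.params["Intercept"] + result.params["feedback"]) - 1
    return result, before, after

Approximate confidence intervals on the original scale can be obtained by exponentiating the confidence limits of the fixed effects.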
Improving data quality through an integrated approach to eClinical data collection, rater training, and rater performance monitoring

Presenter: Dallabrida S

Affiliation: ERT

Background: Rater training has become an essential step in protecting the quality of clinician-reported (ClinRO) and observer-reported outcome (ObsRO) data.

Objective: This poster evaluates current methods of rater training and describes a new generation of rater training initiatives specific to electronic data collection.

Methods: To evaluate current modes of rater training for paper instruments, we conducted literature searches, performed secondary market research on global rater training providers, and interviewed professional rater trainers to identify process inefficiencies and instructional design shortcomings.

Results: Review of the current rater training paradigm revealed repetitive instruction, inconsistent proof of rater training, an inability to gate raters from clinical subjects, and an extraordinary length of time required for raters to execute the paper instrument while simultaneously following instrument instructions. It was determined that these processes could not be replicated efficiently for the electronic implementation of instruments.

Conclusion: A new model for electronic rater training has been developed and tested to consolidate rater instruction and clinical outcome data within the same database. It unifies rater training with customized instruction on instrument design and scoring, the nuances of electronic implementation, and interviewing techniques. The new model includes rater performance monitoring to detect treatment effects and rater inconsistencies, initiate remediation, and gate raters to study participants.

Disclosures: The presenter is an employee of ERT.

Is a computer-simulated rater good enough to administer the Hamilton depression rating scale in clinical trials?

Presenters: Sachs G, DeBonis D, Wang X, Epstein B

Affiliations: Bracket Global, LLC

Background: Are Hamilton depression rating scale (HDRS) scores obtained by a computer-simulated rater within the range expected from site-based raters?

Objectives: Concern about the high rate of failed clinical trials, together with the costs associated with trials of experimental treatments, fuels the desire for better measurement techniques.

Methods: The presenters developed a computer-simulated rater (CSR) based on the scripting and rules taught to site-based raters trained to administer and score the HDRS-24. Blinded data were harvested from a double-blind, placebo-controlled, industry-sponsored study. At each study visit that required the site-based rater to administer the HDRS-24, the CSR administered the HDRS-24 as a separate, independent rating. The CSR conducts an interactive interview directly with the study subject: an interview algorithm selects probe questions based on the subject's last response, and a scoring algorithm maps the subject's responses to a unique anchor point. Site-based raters (SBRs) administering the HAM-D had to meet sponsor requirements for experience and education.

Results: Data were obtained from the Bracket HAM-D-24 blinded study dataset, which included 737 subjects, 112 raters, and 3,180 administrations of the paired rater and computer interviews made over the course of a 16-week, double-blind, placebo-controlled clinical trial. SBRs and the CSR produced similar mean scores across all time points examined in an actual global RCT. Intraclass correlation coefficients (ICCs) ranged from 0.60 at the baseline visit to 0.85 at late study visits.

Conclusion: It is unclear whether the improvement in the ICC observed after randomization reflects subject practice effects, changes in variance over the course of the study, or alteration in rater or respondent behavior after determination of eligibility. Computer-administered scales may offer important advantages not because a CSR is better than the average site-based rater, but because the computer is consistent, fast, and frugal. By simulating the judgment of a human rater, the CSR offers an alternative to reliance on self-report measures.

Disclosures: All authors are full-time employees of Bracket.
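The abstract above does not state which ICC form was used for the SBR versus CSR agreement. As one illustration only, the sketch below computes a Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rating) from an array pairing site-based and computer-simulated total scores at a given visit; the pairing and the choice of ICC form are assumptions made here for illustration.

import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1) for an (n_subjects x k_raters) array of scores."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_error = ((scores - grand_mean) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical usage: paired HDRS-24 totals at one visit, one row per subject,
# columns ordered as (site-based rater, computer-simulated rater).
# visit_scores = np.column_stack([sbr_totals, csr_totals])
# print(icc_2_1(visit_scores))

Applied visit by visit, a calculation of this kind would yield the baseline-to-late-visit trajectory (0.60 to 0.85) described above.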
Identical scorings of the PANSS vary significantly between different study types

Presenters: Kott A, Daniel DG

Affiliations: Bracket Global, LLC

Objective: To examine whether there are differences in the presence of identical scorings (IS) between different study types.

Methods: We have analysed data pooled from 13 double-blind schizophrenia clinical trials. We defined these four study types: