Innovations In Clinical Neuroscience

2015 Abstracts of Poster Presentations

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience

Volume 12, Number 11–12, November–December 2015, Supplement C

CGI, PDQ, WLQ, MADRS, and UPSA. Construct validity was examined via baseline correlation analyses of the UPSA Composite Score ("UPSA," comprising the UPSA and UPSA-B in English- and non-English-speaking patients, respectively) and clinical outcomes. Anchor-based (CGI-I ≤2) and distribution-based (one-half SD) analysis methods were used to establish a responder definition.

Results: The mean UPSA score at baseline was 77.8 (SD=12.89). Significant baseline correlations (p<0.05) were observed between the UPSA and duration of current MDE (r=0.10), age (r=-0.13), education (r=0.28), DSST (r=0.36), and WLQ (r=-0.17), but not MADRS or PDQ. MADRS correlated (p<0.05) only with duration of current MDE (r=0.13) and PDQ (r=0.32). The anchor-based approach resulted in an estimate of +6.7 for a responder definition on the UPSA, which was supported by the distribution-based approach (mean +6.1).

Conclusion: The UPSA correlated with cognitive performance (DSST) and workplace productivity (WLQ) but not with mood (MADRS) or subjective cognitive functioning (PDQ), supporting the construct validity of the UPSA as a measure of functional capacity in MDD independent of mood symptoms. Anchor-based and distribution-based analyses suggest a +7-point improvement in UPSA as a responder definition for treatment response.

Disclosures: Harvey PD has served as a consultant to AbbVie, Boehringer Ingelheim, Forum Pharma, Genentech, Lundbeck Pharma, Otsuka America, Roche Pharma, Sanofi, Sunovion, and Takeda Pharma in the past three years. Jacobson W owns stock in Takeda, Pfizer, and United Health and is a full-time employee of Takeda Development Center Americas. Merikle E, Zhong W, and Nomikos G are full-time employees of Takeda Development Center Americas. Olsen CK is a full-time employee and stock owner of H. Lundbeck A/S. Christensen MC is a full-time employee of H. Lundbeck A/S. Funding provided by H. Lundbeck A/S and Takeda Pharmaceutical Company, Ltd.
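
The responder definition above combines an anchor-based estimate (mean UPSA change among patients rated "much improved" or better, i.e., CGI-I ≤2) with a distribution-based estimate (one-half of the baseline SD). The following is a minimal sketch of how such thresholds can be computed, assuming a hypothetical patient-level table with columns upsa_baseline, upsa_endpoint, and cgi_i; the column names and data layout are illustrative assumptions, not the study's actual analysis code.

    # Illustrative sketch only: hypothetical column names, not the study's analysis code.
    import pandas as pd

    def responder_thresholds(df: pd.DataFrame) -> dict:
        """Estimate anchor-based and distribution-based UPSA responder thresholds.

        df is assumed to hold one row per patient with:
          upsa_baseline - UPSA composite score at baseline
          upsa_endpoint - UPSA composite score at endpoint
          cgi_i         - CGI-Improvement rating at endpoint
        """
        change = df["upsa_endpoint"] - df["upsa_baseline"]

        # Anchor-based estimate: mean UPSA change in patients judged
        # "much improved" or better on the CGI-I (rating of 1 or 2).
        anchor = change[df["cgi_i"] <= 2].mean()

        # Distribution-based estimate: one-half of the baseline standard deviation.
        distribution = 0.5 * df["upsa_baseline"].std()

        return {"anchor_based": anchor, "distribution_based": distribution}

Applied to the reported baseline SD of 12.89, the half-SD rule gives roughly 6.4 points, in line with the +6 to +7 point range reported above.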

The Symptoms of Trauma Scale (SOTS): a new tool for measuring symptom severity in PTSD

Presenters: Opler M (1,2), Ford J (3), Mendelsohn M (4), Opler L (5), Kallivayalil D (4), Levitan J (4), Pratts M (5), Muenzenmaier K (6), Shelley AM (7), Grennan M (1), Lewis Herman J (4)

Affiliations: (1) ProPhase LLC; (2) New York University School of Medicine; (3) University of Connecticut Health Center; (4) Victims of Violence Program, Cambridge Health Alliance/Harvard Medical School; (5) St. Joseph's Hospital Health Center; (6) Albert Einstein College of Medicine; (7) Accretive, LLC

Background: The Symptoms of Trauma Scale (SOTS) is a 12-item, interview-based clinician rating measure assessing the severity of a range of trauma-related symptoms.

Objective: This pilot study evaluated its use and psychometric properties in an outpatient setting providing treatment to survivors of chronic interpersonal trauma.

Methods: Thirty participants completed self-report measures of posttraumatic stress symptoms, depression, dissociation, self-esteem, and affect dysregulation, and separately participated in a semi-structured interview based on the SOTS, conducted by two trained interviewers.

Results: SOTS composite severity scores for DSM-5 PTSD and the PTSD dissociative subtype, ICD-11 complex PTSD (cPTSD), and total traumatic stress symptoms generally had acceptable internal consistency (alpha=0.70–0.73) and interrater reliability (ICC=0.88–0.95). Evidence of convergent and discriminant validity was found for the SOTS composite PTSD scores, although potential limitations to validity requiring further research and measure refinement were identified for the SOTS composite cPTSD score and the hyperarousal, affect dysregulation, and dissociation items.

Conclusion: Both interviewers and interviewees described the interview as efficient, informative, and well tolerated. Implications for clinical practice and research refinement of the SOTS are discussed.
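
The internal consistency and interrater reliability figures reported for the SOTS correspond to Cronbach's alpha and an intraclass correlation coefficient. The sketch below shows one standard way such statistics are computed (Cronbach's alpha from item scores and a two-way random-effects, single-rater ICC across raters); it is a generic illustration under assumed data shapes, not the analysis code used in this study.

    # Generic illustration of Cronbach's alpha and ICC(2,1); assumed data shapes,
    # not this study's analysis code.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: array of shape (n_subjects, n_items)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    def icc_2_1(ratings: np.ndarray) -> float:
        """ratings: array of shape (n_subjects, n_raters).

        Two-way random-effects, absolute-agreement, single-rater ICC
        (Shrout and Fleiss ICC(2,1)).
        """
        n, k = ratings.shape
        grand_mean = ratings.mean()
        ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
        ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
        ss_total = ((ratings - grand_mean) ** 2).sum()
        ss_error = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)                # between-subjects mean square
        ms_cols = ss_cols / (k - 1)                # between-raters mean square
        ms_error = ss_error / ((n - 1) * (k - 1))  # residual mean square

        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )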

RATER TRAINING AND RATER ASSESSMENT

Examining the impact of ongoing assessment feedback on site rater performance: does our work matter?

Presenters: Baldwin K, Avrumson R, Cohen E, Friedmann B, Carbo M, Glaug N, Komorowsky A, Rapsomaniki E, Rock C, Murphy M

Affiliations: Worldwide Clinical Trials

Background: Rater training companies provide ongoing data surveillance to ensure that scale administration, scoring, and protocol parameters are maintained. However, there is a paucity of data exploring whether site raters improve with ongoing data surveillance.

Objective: While Targum (2006) and Busner et al. (2012) demonstrated the effectiveness of ongoing training in reducing overall rater errors within industry-sponsored clinical trials, the current study adds to the literature by examining whether external rater feedback impacts individual rater accuracy as well as protocol adherence.

Methods: Data from a global, 26-week clinical trial evaluating negative symptoms and cognitive function in outpatients with schizophrenia were evaluated retrospectively. Previously qualified and credentialed site raters submitted all screening and baseline diagnostic and symptom severity scales to external expert clinicians, who reviewed the scales to detect rater errors arising from failure to follow the scales' administration and scoring conventions and the protocol's instructions.