Innovations In Clinical Neuroscience

NOV-DEC 2017

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience

Issue link: http://innovationscns.epubxp.com/i/924986

nuanced curricula or to steer the training toward specific areas for improvement. Avatars can be programmed with decision-tree logic to serve as subjects for interview skills training. Virtual reality may be used to create a realistic assessment environment. All of these technologies, and others yet to come, might transform traditional training and make it more useful, practical, and effective in the years ahead.

Use of electronic clinical outcome assessment (eCOA). Another means by which newer technologies can bolster PANSS training and data quality is the use of eCOA. Platforms using this methodology can check ratings for logical inconsistencies among PANSS items and between the PANSS and other scales and alert the investigator before data submission. The investigator then has the option to reevaluate the rating or to maintain the original scores. eCOA also permits additional alerts and reminders to be delivered to the rater; for example, the PANSS rater may be prompted to include informant information when appropriate or to periodically remind the subject of the reference period. Notes to support the choice of anchor point might also be required. This technology was positively received by both patients and caregivers, with minimal modification requests.10

The capacity for audio/video recording of SCI-PANSS interviews can be embedded in the eCOA platform to facilitate deeper independent review of visits, either through an a priori plan (e.g., evaluation of every rater's first assessment) or via a risk-based approach that uses inconsistencies detected within PANSS data to "flag" an evaluation for review. Early detection and remediation of these data flaws are critical for study success and for preventing "rater drift."11 Continual evaluation of the quality of a site's interviews and ratings, with retraining as necessary, should continue throughout all phases of the trial, just as any assay would be repeatedly monitored and recalibrated.

EVALUATION OF NEWER TRAINING AND DATA MONITORING PROCEDURES

A number of solutions for managing rater drift during clinical trials have been proposed. Remote, independent rating,12 smaller trials with more experienced rater cohorts,13 and a number of in-study techniques that use the internal logic of instruments like the PANSS have gained attention in the last decade.3,14,15 The latter approach uses algorithms to generate flags for what is often referred to as risk-based monitoring of in-study data. Algorithms can consist of logical binary or factorial relationships between one or more scale items, or of more sophisticated statistical techniques that leverage large clinical trial datasets with known outcome parameters. For the purposes of this article, we limit our discussion to the binary and factorial relationships that exist within the PANSS and how these can be used to generate flags. For example, if a rater scores Item P5 (grandiosity) at the level of 7 and then scores Item P1 (delusions) at the level of 1, this would generate a flag: at the level of 7 on P5, we expect significant and pervasive grandiose delusions, and if that is the case, then P1 should receive a similarly severe rating.
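To make this cross-item logic concrete, the short sketch below encodes the P5/P1 relationship as a binary rule. It is a minimal illustration only: the function name, the score dictionary, and the specific thresholds (P5 rated 6 or above with P1 rated 2 or below) are assumptions chosen for this example, not the logic of any particular eCOA platform or published flagging algorithm.

```python
# Minimal illustrative sketch of a binary cross-item consistency flag.
# The rule reflects the P5/P1 example in the text: severe grandiosity (P5)
# with absent delusions (P1) is logically inconsistent. The thresholds and
# data structure are illustrative assumptions, not a published algorithm.

def check_p5_p1(scores: dict[str, int]) -> list[str]:
    """Return flag messages for one PANSS administration.

    `scores` maps item codes (e.g., "P1", "P5") to ratings on the 1-7 scale.
    """
    flags = []
    p5, p1 = scores.get("P5"), scores.get("P1")
    if p5 is not None and p1 is not None:
        # Extreme grandiosity implies grandiose delusions, so P1 should not
        # be rated as absent or minimal.
        if p5 >= 6 and p1 <= 2:
            flags.append(
                f"P5 (grandiosity) = {p5} but P1 (delusions) = {p1}: "
                "P1 should be rated at a similar severity."
            )
    return flags

# Example: the extreme case described in the text generates a flag.
print(check_p5_p1({"P1": 1, "P5": 7}))
```

In practice, a platform of this kind would maintain a library of such rules and, as described above, alert the investigator for reevaluation rather than reject the rating outright.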
While this is an extreme example (and one usually related to the rater's reluctance to "double rate" the same symptom), it illustrates the essential idea that the relationships within the instrument itself can show us where the risk of error is high. Another illustration comes from the Marder16 five-factor model for the PANSS (though some dispute this factor solution8); in such frameworks, the expected correlations between the items that make up a factor can be used to detect aberrant presentations and potential risk.5,11 For example, if we consider the negative factor, which includes N1 to N4, N6, and G7, and we expect these items to be predictably correlated (within certain severity ranges), we can identify risk when one or more correlations fail to agree with the expected matrix (a minimal sketch of such a correlation screen appears at the end of this section).

How are these risks dealt with? Is rater error actually present, or is the presentation simply a somewhat unusual one? Intervention methods differ and depend on who is leading the data-monitoring effort, but if actual rater error is responsible, this is the point at which a targeted training event takes place. It must be emphasized that an expert clinician with a very clear understanding of the scale and the patient population must conduct this training. In-study targeted training is essential for arresting rater drift and for reducing the impact of non-informative data (i.e., data that contribute little to the goal of the study but increase variance and thus reduce the ability to detect a signal where it exists). This method has proved cost-effective, and the targeted nature of the intervention requires fewer resources than interval retraining (e.g., training conducted every 3–6 months) for the full cohort of raters. More importantly, the reduction in non-informative data can make the difference between a failed or negative trial and a positive one.

Prospective, adequately controlled comparisons of methodologies for rater training or for in-study data quality monitoring coupled with remediation are rare, because sponsors are reluctant to vary methodologies within a clinical trial, and comparison of methodologies across trials is complicated by multiple uncontrolled differences in trial characteristics. That said, used in parallel, the methodologies are complementary and can reinforce the four principles critical to obtaining reliable and valid data for the duration of a trial. Although the results must be evaluated carefully, comparisons of inter-rater reliability, nonspecific variance, placebo response, and drug-placebo differences across trials using different methodologies can be informative, if not definitive.15 Newer interview training and scale rule training techniques can be evaluated against in-study metrics based on error rates detected by data analytics, as well as via external expert review of recorded patient interviews. Independent review of patient interviews is highly recommended for all clinical trials; it has been demonstrated that interviews that are recorded and reviewed yield PANSS scores that align better with the scale requirements.17
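As a companion to the binary rule sketched earlier, the factor-based screen described above can also be illustrated in a few lines. This, too, is only a sketch under stated assumptions: the reference correlation matrix and the deviation threshold are hypothetical placeholders and do not come from the Marder model or from any validated monitoring algorithm.

```python
# Illustrative sketch of a factor-based correlation screen for the negative
# factor items (N1-N4, N6, G7). The reference correlations and the deviation
# threshold are hypothetical placeholders, not published values.
import numpy as np

NEG_ITEMS = ["N1", "N2", "N3", "N4", "N6", "G7"]

def correlation_flags(ratings: np.ndarray,
                      reference: np.ndarray,
                      threshold: float = 0.4) -> list[str]:
    """Flag item pairs whose observed correlation departs from expectation.

    `ratings` is an (n_visits, 6) array of scores for NEG_ITEMS across a
    site's visits; `reference` is a 6x6 matrix of expected correlations.
    """
    observed = np.corrcoef(ratings, rowvar=False)  # item-by-item correlations
    flags = []
    for i in range(len(NEG_ITEMS)):
        for j in range(i + 1, len(NEG_ITEMS)):
            if abs(observed[i, j] - reference[i, j]) > threshold:
                flags.append(
                    f"{NEG_ITEMS[i]}/{NEG_ITEMS[j]}: observed r = "
                    f"{observed[i, j]:.2f}, expected r = {reference[i, j]:.2f}"
                )
    return flags
```

As emphasized above, a flag from a screen like this identifies risk, not error; an expert clinician still has to distinguish rater error from a genuinely unusual presentation before any targeted training event is triggered.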
CONCLUSION

PANSS rater training has become a standard component of most clinical trials, but true standardization with respect to exact approaches, techniques, and standards remains elusive. For clinical trials using the PANSS, it is strongly advised that the training program incorporate the core principles described in this article and advocated by the author of the PANSS. Where possible, we further recommend the following:

1) Favor active learning techniques over passive ones, particularly for experienced clinicians and raters with meaningful prior experience using the PANSS. While some raters have persistent idiosyncrasies in their approaches
