Innovations in Clinical Neuroscience
November-December 2017 • Volume 14 • Number 11–12

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience

In cases in which the local definition of an item/concept differs from the one shown in the PANSS rating criteria, may the local alternative be substituted?

Different disciplines and fields of study can variably define common concepts (e.g., delusions). In clinical practice, these approaches might have significant value in the treatment of patients in a local context; for example, if a culturally influenced explanation of a symptom that is acceptable to the patient and his/her family needs to be explored and acknowledged by the treating clinician to facilitate communication and adherence to treatment, then this is of great value to all stakeholders in that context.5 However, within the confines of a clinical trial, particularly one that is multi-site and/or global in nature, the need for standardization across visits, sites, and regions for the purposes of research necessitates that all raters adhere to the common definitions of terms without substitution.

IMPLEMENTATION OF TRAINING

Traditionally, rater training for the PANSS involved raters attending an investigator meeting for each clinical trial, where they would sit classroom style, listen to a slide-based lecture, view videotaped interviews, and rate them through an audience response system. Outlying scores were discussed with the goal of optimizing inter-rater reliability. Certification was based on scoring an agreed-upon percentage of items with fidelity to the "gold standard." At a mid-study investigator meeting intended to prevent rater drift, raters would review a slide lecture and rate an additional videotaped interview; they were remediated if their scores fell outside the "gold standard."6
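
To make this pass/fail logic concrete, the following is a minimal sketch of such a certification check in Python. The 30-item scale is trimmed to three items for brevity, and the ±1-point tolerance and 80-percent pass threshold are illustrative assumptions, not values specified in the text.

```python
# Illustrative certification check: a candidate passes if an agreed-upon
# percentage of their PANSS item scores fall within a tolerance of the
# gold-standard scores. Tolerance and threshold below are assumed values.

GOLD_STANDARD = {  # hypothetical gold-standard scores, item -> severity (1-7)
    "P1": 4, "P2": 3, "P3": 5,  # remaining PANSS items omitted for brevity
}

def certify(candidate_scores: dict[str, int],
            tolerance: int = 1,            # assumed: within +/-1 point agrees
            pass_threshold: float = 0.80   # assumed: 80% of items must agree
            ) -> bool:
    """Return True if the candidate's ratings show sufficient fidelity."""
    agreements = sum(
        abs(candidate_scores[item] - gold) <= tolerance
        for item, gold in GOLD_STANDARD.items()
    )
    return agreements / len(GOLD_STANDARD) >= pass_threshold

# Example: a rater who matches P1 and P2 closely but misses P3 by 2 points
print(certify({"P1": 4, "P2": 2, "P3": 3}))  # 2/3 items agree -> False at 80%
```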

Limitations of traditional training. Such methodologies were capable of achieving and maintaining high levels of reliability and have remained effectively unchanged since the original Phase III studies of risperidone in the 1990s.7 However, the limitations of this methodology have become apparent: 1) raters working on multiple trials are sometimes subjected to repetitive training that does not take their individual issues in PANSS rating into account; 2) rating a videotaped interview does not address correct assessment technique or the ability to elicit information from a psychotic patient; 3) training should be relevant and individualized to the specific clinical trial; and 4) a rater's behavior in the laboratory of an investigator meeting does not necessarily reflect the rater's behavior while at his or her site rating patients.8

Interactive training. PANSS training is rapidly evolving to address the above issues. Increasingly, traditional, passive, classroom-style training is being replaced with interactive, case-oriented methods that require active participation from investigators. For example, in the "roundtable approach," investigators are organized into small groups, often by site and nationality. Instead of a long, repetitive lecture, there is a short review of the basic principles of rating followed by case discussions. Within each group, raters come to a consensus with colleagues from their sites and countries. This synchronizes rating methodology within a site and prevents "noise in the ratings" when raters cross-cover for each other. The session is moderated by an appropriately qualified trainer who is capable of synthesizing the various points of view and who is tasked with ensuring compliance with core principles and gold-standard approaches. There are many variations on this methodology, but they share the concepts of active participation and consensus-building to replace passive listening. In the past, the centerpiece of training for both beginner and advanced raters was the lengthy, item-by-item rating of full, unselected PANSS interviews. The current trend for experienced raters is to teach with shorter vignettes targeting areas relevant to the study design, such as the population under study (e.g., acutely psychotic, prominent positive symptoms, predominant negative symptoms, stable, treatment resistant), change from baseline, and difficult-to-rate symptoms.

Assessment technique. Interview skill assessment and feedback have become integral to PANSS training and address the rater's ability to probe the population under study sufficiently to distinguish among the anchor points of each item in a neutral manner unlikely to induce a placebo response. This is most effective when using highly trained live actors who challenge the investigator with scripted foils.

Certification procedures. In the past, certification to administer the PANSS was commonly based on the successful rating of a videotaped PANSS interview. However, this is a passive procedure that fails to assess the investigator's ability to deliver a thorough and unbiased interview. It is critical to standardize both the interview technique and measurement skills. A newer procedure for certification is to require candidates to successfully interview and measure the symptom severity of highly synchronized actors portraying patients with psychotic disorders. The use of quantified approaches to the evaluation of interview technique has been linked with data quality and signal separation, making this "active" evaluation a more relevant and meaningful approach to certification.1 Videotaped interviews are still more commonly used than actors to evaluate assessment technique and scoring, in part because training and synchronizing actors in multiple languages and bringing them to investigator meetings is more resource-intensive than video recording. For the most part, raters with sufficient credentials and experience administering the PANSS to the population under study are certified if they meet certain standards of accuracy and precision in their measurement of the individual PANSS items and the PANSS total, based on both gold standards and statistical outliers. To accelerate the rater approval process, decisions regarding success or failure of the candidate, as well as remediation, may be delivered at the investigator meeting. Like any assay, the measurement of psychotic symptoms must be periodically recalibrated: intra-rater and inter-rater reliability should be assessed and remediated regularly.
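
As a rough illustration of how such a recalibration exercise might surface statistical outliers, the sketch below compares each rater's PANSS total on a shared calibration interview against the group median. The rater names, scores, and the flagging threshold are all hypothetical; the article does not prescribe a specific rule.

```python
import statistics

# Illustrative drift check: each rater scores the same calibration interview,
# and raters whose PANSS total deviates markedly from the group median are
# flagged for remediation. The deviation threshold is an assumed convention.

def flag_outliers(totals: dict[str, int], max_deviation: int = 8) -> list[str]:
    """Return raters whose total deviates from the group median by more than
    max_deviation points (assumed threshold; not from the article)."""
    median = statistics.median(totals.values())
    return [rater for rater, total in totals.items()
            if abs(total - median) > max_deviation]

calibration_totals = {"rater_A": 72, "rater_B": 75, "rater_C": 58, "rater_D": 74}
print(flag_outliers(calibration_totals))  # ['rater_C'] (15 points from median)
```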

IMPACT OF TECHNOLOGY

Technology has provided vibrant, efficient alternatives to expensive, potentially inefficient in-person, multi-country investigator meetings. Initial training, as well as mid-study refresher training, may occur by use of "live" web conferencing, essentially recapitulating the interaction of an investigator meeting, or in an "on-demand" manner, either online or application-based.9 Adaptive and risk-based methods may be applied to individualize PANSS training to triage a rater to more basic or advanced training.
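
The article does not specify how such triage decisions are made; the sketch below shows one hypothetical rule that routes a rater toward basic, standard, or advanced training based on prior accuracy and experience. All inputs and cutoffs are assumptions for illustration.

```python
# Hypothetical adaptive-triage rule: route a rater to basic or advanced
# training based on prior certification performance and PANSS experience.
# Every field name and cutoff below is an illustrative assumption.

def triage_training(item_agreement: float, prior_panss_trials: int) -> str:
    """Assign a training track from past accuracy and experience."""
    if item_agreement < 0.80:            # assumed remediation cutoff
        return "basic"                   # full item-by-item retraining
    if prior_panss_trials >= 3:          # assumed experience threshold
        return "advanced"                # short, targeted vignettes
    return "standard"

print(triage_training(item_agreement=0.92, prior_panss_trials=5))  # advanced
print(triage_training(item_agreement=0.70, prior_panss_trials=5))  # basic
```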