Innovations In Clinical Neuroscience

NOV-DEC 2017

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience


Innovations in Clinical Neuroscience • November–December 2017 • Volume 14 • Number 11–12 • REVIEW

and those persons implementing training programs to improve reliability.

First principle—Read each item definition and all anchor points carefully and interpret each element as literally as possible. The process of rating PANSS items requires a very close reading of each required element. The item definition should be considered first to determine whether the item is applicable. If not, a score of 1 (absent) should be assigned. Any evidence suggesting the item is present should prompt a score of 2 (minimal) or higher. Particularly when determining the highest score that applies (see below), raters should not reinterpret the wording, and "impressionistic" scoring should be avoided. Terms involving "and" or "and/or" should be attended to closely to ensure that all necessary elements are present before assigning or eliminating a score from consideration.

Second principle—Always give the highest rating that applies. Raters are very often faced with ambiguity: the answers to queries may be unclear, or the available information may suggest that more than one score is applicable. A simple solution—and a "convention" frequently applied for other instruments—is to "rate up" when more than one score might apply. For the PANSS, a somewhat different approach is mandated: rather than arbitrarily moving up a score, raters should always give the highest score that applies based on the available information. For example, if a patient clearly meets the criteria for a score of 3 (mild) and also for 4 (moderate) on any item, and all the necessary criteria for both scores are met, then the patient should receive a score of 4 (moderate). In the same vein, if a patient almost meets the criteria for a score of 4 (moderate) but is clearly missing some key component, then a score of 4 (moderate) cannot be assigned.
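The "highest rating that applies" rule can be sketched as a simple decision procedure. This is purely an illustrative analogy, not part of the PANSS methodology: the function name and the criteria mapping are hypothetical stand-ins for the judgment a trained rater makes against the written anchors.

```python
# Illustrative sketch only; names and structure are hypothetical.

def highest_applicable_score(criteria_met):
    """Return the highest PANSS item score (2-7) whose anchor criteria
    are *fully* met, per the 'highest rating that applies' principle.

    criteria_met: dict mapping score -> True only if ALL anchor
    elements for that score are satisfied ("and"/"and/or" checked).
    """
    applicable = [score for score, fully_met in criteria_met.items() if fully_met]
    # A score of 1 (absent) is the default when no higher anchor applies.
    return max(applicable) if applicable else 1

# A patient fully meeting the anchors for both 3 (mild) and 4 (moderate)
# receives 4; almost-but-not-fully meeting 4 does not count toward it.
print(highest_applicable_score({2: True, 3: True, 4: True, 5: False}))  # 4
print(highest_applicable_score({2: True, 3: True, 4: False}))           # 3
```

Note that the rule differs from "rating up": a score enters consideration only when every required element of its anchor is present.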
Third principle—Always consider the reference period and time frame. Patients are not always clear about the time frame under examination during an assessment. Typically, the PANSS is rated using a "past week" reference period (i.e., ratings are based on the most severe phenomenon for a given item in the past week). It is worth noting, however, that certain items based solely on nonverbal symptoms during the interview, such as Item N1 (blunted affect), are rated on the presentation the rater can observe during the interview itself. Patients might describe a wide range of experiences during the course of an assessment, including some that occurred more than one week ago. While such accounts might reveal beliefs or ideation that are, effectively, still present, many time-delimited phenomena might not be impacted. For example, Item P7 (hostility) would not be directly impacted by a fight that the patient had four weeks ago when using the standard past-week reference period.

Fourth principle—Use all available information for rating, as long as it meets the basis for rating. Instruments developed for other disorders sometimes assume a linear progression through discrete sections compartmentalized by scale item. While the Structured Clinical Interview-PANSS (SCI-PANSS) does have some relatively discrete components, information relevant to rating different items may be presented at any time, possibly well after the section on an item has been completed. Patients might also give conflicting information at different points during an interview, denying a symptom initially and then endorsing it later. While it is difficult to anticipate every combination of presentations or endorsements, raters should avoid assigning item scores during the interview and should instead wait until the assessment is complete and all necessary information (including informant data) has been collected.
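The past-week reference period described above amounts to a simple date-window check. The following sketch only illustrates that time-frame rule (the dates and function name are hypothetical); it is not a substitute for clinical judgment about which phenomena are time-delimited.

```python
from datetime import date, timedelta

def in_reference_period(event_date, assessment_date, days=7):
    """True if a reported event falls inside the standard 'past week'
    reference period ending on the assessment date."""
    return assessment_date - timedelta(days=days) <= event_date <= assessment_date

assessment = date(2017, 11, 15)  # hypothetical assessment date
# A fight four weeks earlier falls outside the past-week window and would
# not directly affect a time-delimited item such as P7 (hostility).
print(in_reference_period(date(2017, 10, 18), assessment))  # False
print(in_reference_period(date(2017, 11, 12), assessment))  # True
```

As the text notes, experiences outside the window may still matter indirectly when they reveal beliefs or ideation that remain present during the reference period.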
At the conclusion of the assessment, all information that is relevant and meets the basis for ratings should be taken into account in the final determination. Notably, several controversies have arisen over the years with regard to the proper use of the PANSS. While the following items do not comprise an exhaustive list, they highlight some of the challenges that raters should consider and develop techniques and strategies to address.

Is collateral (informant) information required to rate the PANSS? Two items in the PANSS (N4 and G16) are rated solely on the basis of information meant to be gathered from an informant, such as a caregiver or a treating clinician who has had significant contact with the patient during the reference period. It is sometimes challenging to obtain sufficient information to cover all of the required areas, but raters are first instructed to do their best to obtain the necessary information from a third party. In the absence of any available independent person to query, the rater may use records of various sorts to gain insight into behaviors during the past week.

Is adherence to the SCI-PANSS necessary, or is a general clinical psychiatric interview sufficient to obtain information for the purpose of rating? Most clinical trials now mandate the use of the SCI-PANSS. Lindstrom3 and others4 have demonstrated that high reliability can be achieved between raters using the SCI-PANSS.1 While the SCI-PANSS could be improved upon—and could be in future iterations—a standardized approach to assessment across visits, patients, and investigators is necessary to help improve reliability. Additionally, the SCI-PANSS is designed to help ensure that all necessary domains of inquiry are addressed. It is important, however, to remember that the SCI-PANSS is intended to be used as a semi-structured interview guide rather than a rigidly conducted script.
Rewording, rephrasing, and other techniques to improve patient comprehension can and should be employed when applicable. There might also be instances in which it is beneficial to change the order of the questions. For example, a disorganized and challenging patient might spontaneously begin talking about hallucinatory experiences. A rater might then determine that it is clinically advisable to take advantage of the opportunity to explore this symptom further rather than attempting to redirect the interview at that point.

Is it necessary to use the anchoring points if the patient is quite severe across an entire domain (e.g., positive symptoms)? Less experienced clinicians and raters are often over-impressed by psychotic symptoms and appear to rely less on the anchor points in these instances. While it is tempting to "save time" by assigning blanket scores for items impressionistically, such an approach fails to meet the standards for reliable use of the PANSS. Raters are urged to carefully read each item and assign the highest score that applies on the basis of the written anchors.
