Innovations in Clinical Neuroscience
Volume 14, Number 1–2, January–February 2017
A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience

…detect change.64 Furthermore, the response categories for some of the items do not work as intended, and this contributes to the inability of several subtests to discriminate subtle changes in cognition. The limited utility of the ADAS-Cog in mild AD, MCI, and less impaired states was further confirmed by a Rasch analysis,65 an application of the experimental measurement paradigm described by Andrich.66 Taken together, these findings indicate that the ADAS-Cog is not optimal for measuring cognitive decline in clinical trials targeting individuals in mild AD, MCI, or preclinical states.

Similar problems with floor and ceiling effects and inability to detect cognitive change in MCI and preclinical AD also occur with two other measures commonly used in AD clinical trials: the Mini-Mental State Exam (MMSE) and the Clinical Dementia Rating (CDR) scale sum of boxes (CDR-SB). In a large database of individuals with MCI (n=2,551) or various forms of dementia (n=4,796), MCI cases with a CDR global score of 0.5 had a mean CDR-SB score of 1.30 (SD=1.16) (out of a possible total of 18) versus a CDR-SB score of 0.11 (SD=0.36) in healthy elderly controls. In the same sample, MMSE scores were 27.2 (SD=2.3) for MCI cases (out of a possible total of 30) compared to 28.9 (SD=1.3) for the healthy elderly controls.67 These findings suggest that the MMSE and CDR-SB lack the sensitivity to reliably differentiate individuals with MCI from healthy individuals, and that they may not be optimal outcome measures in clinical trials involving MCI or preclinical AD, though they do show sensitivity to decline in longer-term studies in these patients.59,68 Indeed, such issues can be handled in current study designs through sample size and trial duration, and improving outcome measures should have the added benefit of increasing trial efficiency. Harvey et al43 discuss this issue in considerably more detail, but the take-home point from the perspective of outcomes measurement and clinical trial design is that assessments other than the ADAS-Cog, MMSE, and CDR-SB, likely performance-based ones, will be required for clinical trials targeting individuals with MCI or preclinical AD.

PSYCHOMETRIC REQUIREMENTS FOR DETECTING CHANGE IN MINIMALLY IMPAIRED SAMPLES

There are now many more statistical, psychometric, and methodological tools available to validate potential cognitive outcome measures than those originally used to validate the ADAS-Cog.54 Input from statisticians and psychometricians is critical at all stages of scale development as new measurement strategies are developed for clinical trials across the expanded AD spectrum. This input should include 1) ensuring content validity (where concept elicitation is not feasible, this may constitute establishing a sound conceptual basis in the case of a cognitive test, or ecological validity in the case of a performance-based outcome), 2) verifying the best question-answer combinations to suit the aspect of cognition or clinical function being assessed, 3) calculating sample size (see the sketch following this list), 4) selecting models to evaluate various aspects of the scale, and 5) selecting among the various options for the final assessment tool and its accompanying analysis guidelines.
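As one illustration of item 3 above, the snippet below is a minimal sketch of a conventional power calculation for a two-arm trial. The effect size, alpha, and power values are illustrative assumptions, not figures taken from the studies discussed here; trials in minimally impaired populations would typically need to assume smaller standardized effects.

```python
# A minimal sketch of item 3 above: sample size per arm for a two-arm trial,
# given an assumed standardized treatment effect (Cohen's d). All parameter
# values here are illustrative assumptions, not values from the article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.25,          # assumed small standardized treatment effect
    alpha=0.05,                # two-sided type I error rate
    power=0.80,                # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size per arm: {n_per_arm:.0f}")
```

Under these assumptions, roughly 250 participants per arm are required; halving the detectable effect size roughly quadruples that figure, which is why more sensitive outcome measures translate directly into smaller or shorter trials.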
Statisticians will need to work closely with neuropsychologists, epidemiologists, psychometricians, and clinical trial methodologists to develop outcomes that best reflect the changing cognitive and functional status of the individuals assessed, and will need to contribute to understanding the clinical meaningfulness of these outcome measures.69,70

A key consideration that has been poorly addressed by traditional psychometric methods of scale development is the need to ensure that scales are capable of measuring the range of cognitive performance that subjects will exhibit at study entry and over time. This includes avoiding floor and ceiling effects and ensuring that the metric has enough possible responses that change can be detected, while minimizing false effects resulting from normal fluctuations in cognition. This latter consideration is particularly important when participants are selected for being apparently healthy (i.e., without clinically notable cognitive deficits at the time of study entry).

A central feature of outcome assessment in these trials will be the ability to accurately identify very subtle declines and to separate them from the stability in performance that would be expected to accrue from successful treatment in a prevention trial. In order to identify both stability (expected in the case of successfully treated participants in the active treatment group) and decline (expected in some proportion of the placebo group), a scale needs to cover a wide range of functioning, with gradations that enable precise measurement across the spectrum of disease severity, including the apparently healthy range. An important complication here is that some participants in these studies are not likely to develop a cognitive disorder (i.e., not all participants receiving placebo will worsen). Thus, change will be detectable in only a subset of placebo-treated patients.

STATISTICAL STRATEGIES TO INCREASE SENSITIVITY IN EXISTING MEASURES

As the above discussion makes clear, detecting change in treatment trials of prodromal and preclinical AD imposes a number of highly specific requirements, and developing completely new scales to measure subtle changes in very early AD will take time. Treatment development efforts are ongoing, however, even though existing measures are challenged when it comes to detecting potential change in mildly impaired populations. To address these challenges, alternative strategies have been adopted that examine existing measures in nontraditional ways. Two options used to improve the usefulness of existing scales are 1) the creation of alternative, empirically derived composite outcomes that provide robust and sensitive combinations of existing items (see the first sketch below) and 2) item-level analysis in the form of item response theory (IRT) and Rasch analysis to understand the contribution of the scale items that best assess an underlying latent variable (in this case, cognition or function) (see the second sketch below). These strategies are not mutually exclusive, and multiple methods can be applied to the same set of items to identify the most robust way to identify…
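As a minimal sketch of option 1 above, the snippet below builds a composite by z-scoring each component measure against its baseline distribution and averaging, then evaluates sensitivity via the mean-to-SD ratio of the change score, one common criterion for empirically derived composites. The three components and all data are hypothetical placeholders, not measures from any specific trial.

```python
# A minimal sketch of option 1: a composite formed by z-scoring components
# against their baseline distributions and averaging. Component count, means,
# SDs, and simulated decline are all hypothetical assumptions.
import numpy as np

def zscore_composite(scores, baseline_mean, baseline_sd):
    """scores: (n_subjects, n_components); returns one composite per subject."""
    z = (scores - baseline_mean) / baseline_sd
    return z.mean(axis=1)

rng = np.random.default_rng(0)
# Simulated baseline scores on three hypothetical component measures.
baseline = rng.normal(loc=[25.0, 10.0, 50.0], scale=[3.0, 2.0, 8.0], size=(200, 3))
# Simulated follow-up with a small mean decline on each component.
followup = baseline - rng.normal(loc=[0.5, 0.3, 1.5], scale=[1.0, 0.8, 3.0], size=(200, 3))

mu, sd = baseline.mean(axis=0), baseline.std(axis=0, ddof=1)
change = zscore_composite(followup, mu, sd) - zscore_composite(baseline, mu, sd)
# Sensitivity criterion: magnitude of mean change relative to its SD.
print(f"composite mean-to-SD ratio of change: {abs(change.mean()) / change.std(ddof=1):.2f}")
```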
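And as a minimal sketch of option 2 above, the snippet below implements the Rasch (one-parameter logistic) item response function and its Fisher information, illustrating why an item whose difficulty sits far below the ability range of a minimally impaired sample contributes almost nothing to measurement, which is the ceiling problem described earlier. The item difficulties and ability range are illustrative assumptions, not parameters estimated from any scale discussed in the article.

```python
# A minimal sketch of option 2: the Rasch (one-parameter logistic) model.
# Item difficulties and the ability range below are illustrative assumptions.
import numpy as np

def rasch_prob(theta, b):
    """Probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: p * (1 - p). Peaks where theta == b."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

# Abilities typical of a minimally impaired sample (high theta) versus items
# of very low difficulty (easy) and difficulty matched to the sample (targeted).
theta = np.linspace(1.0, 3.0, 5)       # assumed ability range of healthy/MCI participants
easy_item, targeted_item = -1.0, 2.0   # assumed item difficulties

for b, label in [(easy_item, "easy item"), (targeted_item, "targeted item")]:
    info = item_information(theta, b).mean()
    print(f"{label:>14}: mean information over sample = {info:.3f}")
# The easy item yields near-zero information (nearly everyone passes it, a
# ceiling effect), while the targeted item discriminates within the sample.
```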
