Innovations in Clinical Neuroscience

JAN-FEB 2017

A peer-reviewed, evidence-based journal for clinicians in the field of neuroscience

…treatment-related changes. Added benefits may be that these more robust outcome measures will reduce the estimated sample sizes for clinical trials and otherwise improve the efficiency of the analysis of trial data.55,64,65,71,72 However, a clear conceptual basis is needed before undertaking such work (e.g., a belief that items may be combined on the basis of best estimating a new unidimensional underlying construct, such as disease progression). Without such clarity, concerns may persist regarding the clinical meaningfulness and interpretation of such scores and difficulty in understanding what is being measured.

One approach that has been used to create composite endpoints is to identify and combine a reasonable set of items based on their face validity, given our knowledge of the disease state and the concept we are attempting to estimate or measure (e.g., disease progression). Again, the issues around content validity and conceptual basis mentioned above should be considered here. Confirmatory factor analysis is often used to eliminate redundant items and to ensure that items represent different levels of function within each domain of interest. Item selection can be challenging when cross-sectional data are used because 1) the disease is degenerative, and elements of functioning that represent the core disease process change over time, and 2) there is substantial measurement error, because individuals with the disease manifest day-to-day variability in cognitive and functional performance, particularly when more severely impaired. The first point can be addressed by extracting items for the composite from longitudinal studies, so that decline over time can be used to confirm which items are relevant to disease progression (depending on the length of the trial from which the items were extracted and the expected amount of decline). The second point can be addressed by measuring the same domain with more than one assessment tool (i.e., a construct validity approach) to potentially reduce measurement error. Once appropriate items are identified, they are combined in some way (e.g., by summing standardized scores for each item) to form the composite.

Another approach to the development of composite outcomes is to use principal component analysis (PCA) with all available items and let the results determine which items best represent a domain, as indicated by a high correlation with one of the factors identified by the model. These factors represent the empirical consensus of the items; thus, domains with more items are likely to emerge as primary factors in the model, and under-defined domains may be relegated to error variance. As is true for the face-validity approach, different results might be obtained from a PCA depending on whether cross-sectional or longitudinal data are used in the analysis. Using baseline measures could yield a PCA solution with a first principal factor that includes both items that are sensitive to decline and items that are insensitive to decline. In contrast, the first principal component of an analysis of change scores across domains should reflect the group of items most sensitive to change over time, regardless of their baseline factor structure.
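To make this contrast concrete, the following is a minimal simulated sketch, not drawn from the source: the item count, the latent severity and progression factors, the selection function, and the loading threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def items_on_first_component(scores, threshold=0.25):
    """Indices of items weighted heavily on the first principal
    component of the standardized item-score matrix."""
    z = StandardScaler().fit_transform(scores)
    pc1 = PCA(n_components=1).fit(z).components_[0]
    return np.where(np.abs(pc1) >= threshold)[0]

# Hypothetical data: 200 subjects, 10 items. At baseline, all items
# share a cross-sectional severity factor; over follow-up, only the
# first five items track a latent progression process.
rng = np.random.default_rng(0)
n, k = 200, 10
severity = rng.normal(size=(n, 1))
baseline = severity + rng.normal(scale=0.8, size=(n, k))
sensitivity = np.array([1.0] * 5 + [0.0] * 5)
progression = rng.normal(loc=1.0, scale=0.5, size=(n, 1))
change = -progression * sensitivity + rng.normal(scale=0.5, size=(n, k))

# PCA on baseline scores selects items by cross-sectional structure,
# mixing decline-sensitive and decline-insensitive items...
print(items_on_first_component(baseline))   # likely all 10 items
# ...while PCA on change scores isolates the decline-sensitive set.
print(items_on_first_component(change))     # likely items 0 through 4
```

In practice, items flagged this way would still be vetted against the conceptual framework described above rather than accepted directly from the loadings.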
A third approach to composite development focuses on maximizing sensitivity to decline by using the mean-to-standard-deviation ratio (MSDR) of change scores.57 MSDR change scores are similar to standard scores in that scales with different measurement metrics can be combined. This approach is quite amenable to large-scale combination strategies, and the original analyses were based on an exhaustive-search (i.e., brute-force) method in which data from multiple clinical trials with multiple outcome measures were combined into a single database. The MSDR approach can also be implemented directly with a reduced rank regression model with time as the outcome. Development of a partial least-squares regression model, which represents a compromise between principal components regression and reduced rank regression, allows simultaneous consideration of both the time factor and the factors identified through convergence of the items.

Several companies have launched trials with new composite outcome measures that were developed through a combination of empirical (based on the sensitivity of individual items) and construct validity (based on the opinions of experienced clinicians and neuropsychologists) approaches.58,71,74

Edland et al71 have taken a different tack by attempting to optimize existing outcome measures. Using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), these investigators modeled data from a scale for activities of daily living (ADL) and reset the scoring paradigm using an item response theory (IRT) approach. They have shown that sample size estimates using this optimized outcome are reduced by approximately 17 to 20 percent.71 This approach assumes that the targeted population will have a range of multivariate functioning similar to that of the "training set" (i.e., the ADNI cohort) used to adjust the scale. It also requires a very large database to serve as the training dataset. In addition, power calculations based on measures optimized in this manner include terms for the fixed and random variance, both of which are determined by the choice of outcome measure and the target population of the trial. Additional work with this measure using data from a clinical trial identified a learning effect during the first six months of the trial and suggested the need for a three-month, single-blind run-in phase. Including this run-in phase while not changing the overall 18-month length of the study reduced retesting effects and halved the sample size needed to show the same amount of decline on the outcome assessment.

Another possible way to extract increased sensitivity from existing measures is to reanalyze existing trial data to gain a better understanding of the sources of variability in cognitive outcome measures that may mask true drug effects. One specific direction, for example, might be to remove items that contribute to error variance in outcome tests such as the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) and reassess drug-versus-placebo effects over time. It may be the case, for example, that the subjective ratings of language and memory made by the examiner on the ADAS-Cog are susceptible to rater-to-rater and/or test session-to-test session variability that adds noise to the signal of change over time. These subjective ratings are scored on a 0- to 5-point scale and can account for about 25 percent of the ADAS-Cog total score. It has…
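As a purely illustrative sketch of the reasoning in the preceding paragraph (the variance shares and decline magnitudes below are assumptions for demonstration, not actual ADAS-Cog data), dropping items that contribute mostly rater- and session-driven noise can raise the mean-to-SD ratio of the change score:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # subjects in a hypothetical placebo arm

# Hypothetical 18-month change scores (negative = decline).
# "Objective" items: genuine decline plus modest measurement error.
objective = rng.normal(loc=-2.0, scale=2.0, size=n)
# "Subjective" examiner ratings: little decline, large rater-to-rater
# and session-to-session noise (purely assumed values).
subjective = rng.normal(loc=-0.2, scale=2.5, size=n)

def msdr(change):
    """Mean-to-SD ratio of change; larger magnitude = more sensitive."""
    return change.mean() / change.std(ddof=1)

print(f"MSDR, full score:            {msdr(objective + subjective):.2f}")
print(f"MSDR, objective items only:  {msdr(objective):.2f}")
```

Because the sample size required to detect a given proportional slowing of decline scales roughly with the inverse square of this ratio, even a modest improvement in the MSDR of an outcome measure can translate into a substantially smaller trial.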
