Frontiers in Aging Neuroscience (Jun 2022)

Avoid or Embrace? Practice Effects in Alzheimer’s Disease Prevention Trials

  • Andrew J. Aschenbrenner,
  • Jason Hassenstab,
  • Guoqiao Wang,
  • Yan Li,
  • Chengjie Xiong,
  • Eric McDade,
  • David B. Clifford,
  • Stephen Salloway,
  • Martin Farlow,
  • Roy Yaari,
  • Eden Y. J. Cheng,
  • Karen C. Holdridge,
  • Catherine J. Mummery,
  • Colin L. Masters,
  • Ging-Yuek Hsiung,
  • Ghulam Surti,
  • Gregory S. Day,
  • Sandra Weintraub,
  • Lawrence S. Honig,
  • James E. Galvin,
  • John M. Ringman,
  • William S. Brooks,
  • Nick C. Fox,
  • Peter J. Snyder,
  • Kazushi Suzuki,
  • Hiroyuki Shimada,
  • Susanne Gräber,
  • Randall J. Bateman

DOI
https://doi.org/10.3389/fnagi.2022.883131
Journal volume & issue
Vol. 14

Abstract

Demonstrating a slowing in the rate of cognitive decline is a common outcome measure in clinical trials in Alzheimer’s disease (AD). Selecting cognitive endpoints typically involves modeling candidate outcome measures in the many richly phenotyped observational cohort studies available. An important part of choosing cognitive endpoints is accounting for improvements in performance due to repeated cognitive testing (termed “practice effects”). Because primary and secondary AD prevention trials enroll predominantly cognitively unimpaired participants, practice effects may be substantial and may considerably affect the ability to detect cognitive change. The extent to which practice effects in AD prevention trials resemble those in observational studies, and how any differences impact trials, is unknown. In the current study, we analyzed data from the recently completed DIAN-TU-001 clinical trial (TU) and the associated DIAN-Observational (OBS) study. Results indicated that asymptomatic mutation carriers in the TU exhibited persistent practice effects on several key outcomes spanning the entire trial duration. Critically, these practice-related improvements were larger on certain tests in the TU than in matched participants from the OBS study. Our results suggest that the magnitude of practice effects may not be captured by modeling potential endpoints in observational studies, where assessments are typically less frequent and drug expectancy effects are absent. Using alternate instrument forms (represented in our study by computerized tasks) may partly mitigate practice effects in clinical trials, but incorporating practice effects as outcomes may also be viable. Thus, when designing AD prevention trials with cognitive endpoints, investigators must carefully consider practice effects, either by minimizing them or by modeling them directly, and should utilize trial data with similar assessment frequencies where possible.
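
The recommendation to “model practice effects directly” is often implemented as a longitudinal mixed-effects model that includes a term for the number of prior test exposures, so that practice-related gains are separated from the time slope of interest. Below is a minimal sketch of that idea in Python with statsmodels; it is not the study’s actual analysis, and the dataframe layout and column names (id, score, years, prior_exposures, arm) are hypothetical.

```python
# Minimal sketch: a linear mixed model in which `prior_exposures` absorbs
# practice-related improvement, so the `years` slope (and its interaction
# with treatment arm) reflects underlying cognitive change.
# Assumes a long-format dataframe with one row per participant per visit;
# all column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_practice_effect_model(df: pd.DataFrame):
    """Fit a random-intercept/random-slope model with a practice-effect term."""
    model = smf.mixedlm(
        "score ~ years * arm + prior_exposures",  # fixed effects
        data=df,
        groups=df["id"],       # random effects grouped by participant
        re_formula="~years",   # random intercept and slope per participant
    )
    return model.fit()

# Usage (hypothetical data):
# df = pd.read_csv("cognitive_visits.csv")
# result = fit_practice_effect_model(df)
# print(result.summary())
```

One design choice worth noting: coding practice as cumulative prior exposures (rather than a single first-vs-later indicator) allows the persistent, multi-visit practice effects described above to accumulate across the trial rather than being treated as a one-time retest boost.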

Keywords