Frontiers in Psychiatry (May 2018)

Validity of the QUADAS-2 in Assessing Risk of Bias in Alzheimer's Disease Diagnostic Accuracy Studies

  • Alisson Venazzi,
  • Walter Swardfager,
  • Benjamin Lam,
  • José de Oliveira Siqueira,
  • Nathan Herrmann,
  • Hugo Cogo-Moreira

DOI
https://doi.org/10.3389/fpsyt.2018.00221
Journal volume & issue
Vol. 9

Abstract

Accurate detection of Alzheimer's disease (AD) is of considerable clinical importance. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) is the current research standard for evaluating the quality of studies that validate diagnostic tests; however, its own construct validity has not yet been evaluated empirically. Our aim was to evaluate how well the proposed QUADAS-2 items and domains converge to indicate study quality. This study applied confirmatory factor analysis to determine whether a measurement model would be consistent with meta-analytic data. Cochrane meta-analyses assessing the accuracy of AD diagnostic tests were identified. The seven ordinal QUADAS-2 items, intended to inform study quality based on risk of bias and applicability concerns, were extracted for each of the included studies. The QUADAS-2 pre-specified factor structure (i.e., four domains assessed in terms of risk of bias and applicability concerns) was not testable. An alternative model based on two correlated factors (i.e., risk of bias and applicability concerns) returned a poorly fitting model. Factor loadings were poor, indicating that we could not demonstrate convergent validity of the indicators in the context of AD diagnostic accuracy meta-analyses, where sample sizes are typically small (around 60 included primary studies). A Monte Carlo simulation suggested that such a model would require at least 90 primary studies to estimate these parameters with 80% power. The reliability of the QUADAS-2 items to inform a measurement model for study quality remains unconfirmed. Considerations for conceptualizing such a tool are discussed.
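To illustrate the kind of Monte Carlo power analysis the abstract refers to, the sketch below simulates ordinal QUADAS-2-style items from a hypothetical two-correlated-factor model (risk of bias and applicability concerns) for a chosen number of primary studies. The factor loadings, thresholds, and factor correlation are illustrative assumptions, not values estimated in the paper; in a full power analysis each replication would then be fit with a CFA package and power estimated from the proportion of replications recovering the target parameters.

# Minimal sketch of the data-generating step of a Monte Carlo power analysis
# for a two-correlated-factor model of QUADAS-2 items. All parameter values
# (loadings, thresholds, factor correlation) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2018)

def simulate_quadas2(n_studies, factor_corr=0.4):
    """Simulate 7 ordinal items (4 risk-of-bias, 3 applicability) for n_studies."""
    # Latent factors: risk of bias (RB) and applicability concerns (AC)
    cov = np.array([[1.0, factor_corr],
                    [factor_corr, 1.0]])
    factors = rng.multivariate_normal([0.0, 0.0], cov, size=n_studies)

    # Assumed standardized loadings: items 1-4 load on RB, items 5-7 on AC
    loadings = np.array([0.6, 0.6, 0.5, 0.5, 0.6, 0.5, 0.5])
    which_factor = np.array([0, 0, 0, 0, 1, 1, 1])

    latent = factors[:, which_factor] * loadings
    residual_sd = np.sqrt(1.0 - loadings ** 2)
    continuous = latent + rng.normal(0.0, residual_sd, size=(n_studies, 7))

    # Discretize into 3 ordered categories (low / unclear / high risk);
    # the thresholds are assumed, not taken from the paper.
    thresholds = np.array([-0.5, 0.8])
    return np.digitize(continuous, thresholds)

# Example: one simulated dataset of 90 primary studies, the minimum the
# paper's simulation suggested for 80% power. Each replication would then
# be fit with a CFA package (e.g., semopy in Python or lavaan in R).
data = simulate_quadas2(90)
print(data.shape)                  # (90, 7)
print(np.bincount(data.ravel()))   # category frequencies across all items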

Keywords