NeuroImage (Jul 2023)

Systematic comparisons of different quality control approaches applied to three large pediatric neuroimaging datasets

  • Hajer Nakua,
  • Colin Hawco,
  • Natalie J. Forde,
  • Michael Joseph,
  • Maud Grillet,
  • Delaney Johnson,
  • Grace R. Jacobs,
  • Sean Hill,
  • Aristotle N. Voineskos,
  • Anne L. Wheeler,
  • Meng-Chuan Lai,
  • Peter Szatmari,
  • Stelios Georgiades,
  • Rob Nicolson,
  • Russell Schachar,
  • Jennifer Crosbie,
  • Evdokia Anagnostou,
  • Jason P. Lerch,
  • Paul D. Arnold,
  • Stephanie H. Ameis

Journal volume & issue
Vol. 274, article 120119

Abstract


Introduction: Poor-quality T1-weighted brain scans systematically affect the calculation of brain measures. Removing the influence of such scans requires identifying and excluding scans with noise and artefacts through a quality control (QC) procedure. While QC is critical for brain imaging analyses, it is not yet clear whether different QC approaches lead to the exclusion of the same participants. Further, the removal of poor-quality scans may unintentionally introduce a sampling bias by excluding the subset of participants who are younger and/or more clinically impaired. This study had two aims: (1) examine whether different QC approaches applied to T1-weighted scans exclude the same participants, and (2) examine how the exclusion of poor-quality scans affects demographic, clinical, and brain measure characteristics of excluded versus included participants in three large pediatric neuroimaging samples.

Methods: We used T1-weighted, resting-state fMRI, demographic, and clinical data from the Province of Ontario Neurodevelopmental Disorders (POND) Network (Aim 1: n = 553, Aim 2: n = 465), the Healthy Brain Network (HBN; Aim 1: n = 1051, Aim 2: n = 558), and the Philadelphia Neurodevelopmental Cohort (Aim 1: n = 1087, Aim 2: n = 619). Four QC approaches were applied to the T1-weighted scans: visual QC, metric QC, automated QC, and fMRI-derived QC. We used tetrachoric correlation and inter-rater reliability analyses to examine whether the different QC approaches excluded the same participants. We then examined differences in age, mental health symptoms, everyday/adaptive functioning, IQ, and structural MRI-derived brain indices between participants who were included versus excluded under each QC approach.

Results: Dataset-specific findings revealed mixed results with respect to the overlap of QC exclusions. However, in POND and HBN we found a moderate level of overlap between the visual and automated QC approaches (rtet = 0.52–0.59). Implementing QC excluded younger participants and, across several approaches in a dataset-specific manner, tended to exclude those with lower IQ and lower everyday/adaptive functioning scores. Across nearly all datasets and QC approaches examined, excluded participants had lower estimates of cortical thickness and subcortical volume, but this effect did not differ by QC approach.

Conclusion: The results of this study provide insight into the influence of QC decisions on structural pediatric imaging analyses. While different QC approaches exclude different subsets of participants, the influence of different QC approaches on clinical and brain metrics varies little in large datasets. Overall, implementing QC tends to exclude participants who are younger and those who have greater cognitive and functional impairment. Given that automated QC is standardized and can reduce between-study differences, these results support the potential of automated QC for large pediatric neuroimaging datasets.
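
For readers who want to apply a similar overlap analysis to their own QC decisions, the sketch below estimates a tetrachoric correlation between two binary exclusion vectors using the standard cosine (odds-ratio) approximation. This is a minimal, hypothetical illustration, not the authors' code: the function name and the example exclusion flags are invented, and the approximation is one common substitute for a full maximum-likelihood tetrachoric estimate.

    import numpy as np

    def tetrachoric_approx(x, y):
        """Approximate the tetrachoric correlation between two binary
        vectors via the cosine (odds-ratio) approximation:
            r_tet ~ cos(pi / (1 + sqrt(a*d / (b*c))))
        where a, b, c, d are the cells of the 2x2 contingency table."""
        x = np.asarray(x, dtype=bool)
        y = np.asarray(y, dtype=bool)
        a = np.sum(x & y)      # excluded by both approaches
        b = np.sum(x & ~y)     # excluded by the first approach only
        c = np.sum(~x & y)     # excluded by the second approach only
        d = np.sum(~x & ~y)    # retained by both approaches
        if b == 0 or c == 0:   # perfectly nested decisions; avoid dividing by zero
            return 1.0
        return float(np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c)))))

    # Hypothetical exclusion flags (1 = scan excluded) from two QC approaches
    visual_qc    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
    automated_qc = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
    print(f"r_tet ~ {tetrachoric_approx(visual_qc, automated_qc):.2f}")  # ~0.80

On these toy vectors the approximation yields roughly 0.80, i.e., a substantial but imperfect overlap between the two exclusion decisions, the same kind of quantity the study reports (rtet = 0.52–0.59 for visual versus automated QC in POND and HBN).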