Education Sciences (Feb 2021)

Assessing Intervention Effects in the Presence of Missing Scores

  • Chao-Ying Joanne Peng,
  • Li-Ting Chen

DOI
https://doi.org/10.3390/educsci11020076
Journal volume & issue
Vol. 11, no. 2
p. 76

Abstract

Because an outcome behavior is observed repeatedly in N-of-1 or single-case design (SCD) intervention studies, missing scores are inevitable in such studies. Approximately 21% of SCD articles published in five reputable journals between 2015 and 2019 exhibited evidence of missing scores. Missing rates varied by design, with the highest rate (24%) found in multiple baseline/probe designs. Missing scores cause difficulties in data analysis, and inappropriate treatments of missing scores lead to consequences that threaten the internal validity and weaken the generalizability of intervention effects reported in SCD research. In this paper, we comprehensively review nine methods for treating missing SCD data: the available data method, six single imputation methods, and two model-based methods. The strengths, weaknesses, assumptions, and examples of these methods are summarized. The available data method and three single imputation methods are further demonstrated in assessing an intervention effect at the class and student levels. Assessment results are interpreted in terms of effect sizes, statistical significance, and visual analysis of data. Differences in results among the four methods are noted and discussed. The extensive review of problems caused by missing scores and their possible treatments should empower researchers and practitioners to account for missing scores effectively and to support evidence-based interventions rigorously. The paper concludes with a discussion of contingencies for implementing the nine methods and practical strategies for managing missing scores in single-case intervention studies.
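To make the single-imputation idea concrete, here is a minimal sketch (not the authors' code) of three imputation methods commonly applied to an SCD score series: mean imputation, last observation carried forward (LOCF), and linear interpolation. The example series is hypothetical, and the functions assume gaps occur between observed scores.

```python
# Illustrative sketch of three single-imputation methods for a single-case
# design (SCD) series with missing scores (None). Hypothetical data only.

def mean_imputation(scores):
    """Replace each missing score with the mean of the observed scores."""
    observed = [s for s in scores if s is not None]
    m = sum(observed) / len(observed)
    return [m if s is None else s for s in scores]

def locf(scores):
    """Last observation carried forward: fill each gap with the preceding score."""
    filled, last = [], None
    for s in scores:
        last = s if s is not None else last
        filled.append(last)
    return filled

def linear_interpolation(scores):
    """Fill each gap along the straight line between its observed neighbors.

    Assumes every gap is interior (has an observed score on both sides).
    """
    filled = list(scores)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while filled[j] is None:  # find the next observed score
                j += 1
            left, right = filled[i - 1], filled[j]
            step = (right - left) / (j - i + 1)
            for k in range(i, j):
                filled[k] = left + step * (k - i + 1)
            i = j
        i += 1
    return filled

# A hypothetical six-session series with two consecutive missing scores:
scores = [3, 5, None, None, 8, 10]
print(mean_imputation(scores))       # fills gaps with 6.5 (mean of 3, 5, 8, 10)
print(locf(scores))                  # carries 5 forward into both gaps
print(linear_interpolation(scores))  # fills gaps with 6.0 and 7.0
```

Note how the three methods yield different completed series from the same data, which is why the paper compares assessment results (effect sizes, significance tests, visual analysis) across treatment methods rather than recommending one universally.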

Keywords