Computers in Human Behavior Reports (Aug 2021)

The design and implementation of the Carolina Automated Reading Evaluation for reading deficit screening

  • William H. Hoskins,
  • William I. Hobbs,
  • Michael J. Eason,
  • Scott Decker,
  • Jijun Tang

Journal volume & issue
Vol. 4
p. 100123

Abstract


This paper examines the design process and implementation of the Carolina Automated Reading Evaluation (CARE). Designed to automate the process of screening for reading deficits, CARE is an interactive computer-based tool that helps eliminate the need for one-on-one evaluations of pupils to detect dyslexia and other reading deficits. While other tests collect specific data points to determine whether a pupil has dyslexia, they typically rely on only a few metrics for diagnosis, such as handwriting analysis or eye tracking. CARE collects data across up to 16 subtests, each built to measure proficiency in a different reading skill, including reading fluency, phoneme manipulation, sound blending, and other skills essential for reading. This wider range of measurements allows a more focused intervention to be created for the pupil. For this study, elementary school pupils were tested both with CARE and with the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) test, a well-established screener for elementary school-level reading deficits. The collected data were analyzed using a comparison of average scores, a gradient boosting tree classifier, and a convergence test to determine whether CARE and DIBELS scores were correlated. These analyses showed a correspondence between the two tests, indicating that CARE detects reading deficits comparably to manually administered testing instruments. Based on these findings, CARE could serve as a replacement for current tests, providing users with more detailed data more quickly. This paper reviews the technical development and preliminary analysis of CARE and provides insights into key considerations when translating standard psychological screeners onto computerized platforms.
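To illustrate the kind of analysis the abstract describes, the following is a minimal sketch (not the authors' code) of fitting a gradient boosting tree classifier that predicts a DIBELS-style risk label from CARE subtest scores. The data file, column names, and hyperparameters are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch: gradient boosting classifier relating CARE subtest scores
# to a DIBELS risk category. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per pupil, CARE subtest scores as features,
# a DIBELS risk label ("at_risk" / "not_at_risk") as the target.
df = pd.read_csv("care_dibels_scores.csv")
feature_cols = [c for c in df.columns if c.startswith("care_subtest_")]
X, y = df[feature_cols], df["dibels_risk_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
clf.fit(X_train, y_train)

# Agreement between CARE-based predictions and DIBELS labels on held-out pupils.
print(classification_report(y_test, clf.predict(X_test)))
```

A per-class precision/recall report of this sort is one way to quantify how closely a computerized screener's predictions track an established manually administered instrument.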

Keywords