Frontiers in Education (Nov 2022)

Simulating computerized adaptive testing in special education based on inclusive progress monitoring data

  • Nikola Ebenbeck,
  • Markus Gebhardt

DOI: https://doi.org/10.3389/feduc.2022.945733
Journal volume & issue: Vol. 7

Abstract


Introduction: Adaptive tests have advantages especially for children with special needs but are rarely used in practice. We therefore investigated how to build adaptive tests for our web-based progress-monitoring platform www.levumi.de from existing item pools using computerized adaptive testing (CAT). In this study, we explore the item-pool requirements and the settings of computerized adaptive testing needed in special education and inclusion to achieve both short test length and good test accuracy.

Methods: We used existing items fitted to the Rasch model and data samples from progress-monitoring tests (N = 681) for mathematics and reading to create two item pools for adaptive testing. In a simulation study (N = 4,000), we compared different test lengths and test accuracies as stopping rules with regard to the inclusive use of adaptive testing.

Results: The results show optimal maximum test lengths of 37 and 24 items, with a target standard error of 0.5 for accuracy. These settings correspond to an average administration time of about 3 min per test.

Discussion: The results are discussed in terms of the use of adaptive testing in inclusive settings and the applicability of such adaptive tests as screenings, focusing mainly on students with special needs in learning, language, or behavior.
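The simulation procedure described in the abstract (Rasch-calibrated items, maximum-information item selection, and stopping rules based on a target standard error or a maximum test length) can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the item bank, the EAP grid estimator, and all parameter values here are assumptions chosen for the example.

```python
import math
import random

def p_correct(theta, b):
    """Rasch model: probability of a correct response to an item
    of difficulty b for a person with ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def eap_estimate(responses, grid):
    """Expected-a-posteriori ability estimate over a theta grid with
    a standard-normal prior; returns (theta_hat, standard_error)."""
    weights = []
    for t in grid:
        w = math.exp(-0.5 * t * t)  # N(0, 1) prior density (unnormalized)
        for b, correct in responses:
            p = p_correct(t, b)
            w *= p if correct else (1.0 - p)
        weights.append(w)
    total = sum(weights)
    weights = [w / total for w in weights]
    mean = sum(t * w for t, w in zip(grid, weights))
    var = sum((t - mean) ** 2 * w for t, w in zip(grid, weights))
    return mean, math.sqrt(var)

def simulate_cat(true_theta, item_bank, max_items=24, target_se=0.5, rng=random):
    """One simulated adaptive test run: repeatedly administer the unused
    item whose difficulty is closest to the current ability estimate
    (maximum Fisher information under the Rasch model), and stop once
    the standard error falls below target_se or max_items is reached."""
    grid = [i / 10.0 for i in range(-40, 41)]  # theta grid from -4 to 4
    unused = list(item_bank)
    responses = []
    theta_hat, se = 0.0, float("inf")
    while unused and len(responses) < max_items and se > target_se:
        b = min(unused, key=lambda d: abs(d - theta_hat))
        unused.remove(b)
        correct = rng.random() < p_correct(true_theta, b)
        responses.append((b, correct))
        theta_hat, se = eap_estimate(responses, grid)
    return theta_hat, se, len(responses)
```

In a study like the one described, such a run would be repeated for many simulated examinees (here N = 4,000) while varying `max_items` and `target_se`, recording how often the SE criterion is reached and how many items each stopping rule requires.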

Keywords