PLoS ONE (Jan 2021)
Estimating measurement error in child language assessments administered by daycare educators in large scale intervention studies.
Abstract
Measurement error is a ubiquitous element of social science studies. In large-scale effectiveness intervention studies on child language, having speech and language pathologists administer the assessment of language and preliteracy outcomes is costly in both money and human resources. Alternatively, daycare educators can administer the assessment, which conserves considerable resources but may increase measurement error. Using data from two nationwide child language intervention studies in Denmark, this article evaluates daycare educators' measurement error when administering a test of language and preliteracy skills in 3- to 5-year-old children, a test that is used in part in a national screening program. Because children were randomly assigned to educators, hierarchical linear models can be used to estimate the additional measurement error introduced by educators' administration of the assessments. The results show that the amount of additional measurement error varied across language subscales, ranging from 4% to 19%, and can be compensated for by increasing the sample size by the corresponding percentage. The benefits and risks of having daycare educators administer language assessments are discussed.
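
As a minimal sketch of the variance-decomposition logic described above: the snippet below fits a hierarchical linear model with a random intercept for the assessing educator and reports the educator-level variance as a fraction of child-level variance. The column names (`score`, `age_months`, `educator_id`), the covariate choice, and the sample-size adjustment rule are illustrative assumptions, not the authors' exact model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf


def educator_variance_share(df: pd.DataFrame) -> dict:
    """Estimate the extra score variance added by educator-administered testing.

    Assumes df has one row per child with hypothetical columns:
      score       - the child's subscale score
      age_months  - child's age, used here as a simple covariate
      educator_id - the educator who administered the test
    With children randomly assigned to educators, the educator random
    intercept can be read as assessor-induced measurement error.
    """
    # Hierarchical linear (mixed) model: fixed effect of age,
    # random intercept for the assessing educator.
    model = smf.mixedlm("score ~ age_months", data=df, groups=df["educator_id"])
    fit = model.fit()

    educator_var = float(fit.cov_re.iloc[0, 0])  # between-educator variance
    child_var = float(fit.scale)                 # child-level residual variance

    # Extra variance added by educators, relative to child-level variance.
    extra = educator_var / child_var
    return {
        "educator_var": educator_var,
        "child_var": child_var,
        "pct_additional_error": 100 * extra,
        # Standard errors scale with total variance / N, so (under these
        # assumptions) inflating N by the same fraction offsets the
        # educator-induced variance.
        "suggested_n_inflation": 1 + extra,
    }
```

Under these assumptions, a `pct_additional_error` of 19% would suggest enrolling roughly 19% more children to recover the precision of assessments administered by specialists.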