Large-scale Assessments in Education (Nov 2023)

Who are those random responders on your survey? The case of the TIMSS 2015 student questionnaire

  • Jianan Chen,
  • Saskia van Laar,
  • Johan Braeken

DOI
https://doi.org/10.1186/s40536-023-00184-6
Journal volume & issue
Vol. 11, no. 1
pp. 1 – 25

Abstract


A general validity and survey quality concern with student questionnaires under low-stakes assessment conditions is that some responders will not genuinely engage with the questionnaire, often with more random response patterns as a result. Using a mixture IRT approach and a meta-analytic lens across 22 educational systems participating in TIMSS 2015, we investigated whether the prevalence of random responders on six scales regarding students’ engagement and attitudes toward mathematics and sciences was a function of grade, gender, socio-economic status, spoken language at home, or migration background. Among these common policy-relevant covariates in educational research, we found support for small group differences in prevalence of random responders (OR ≥ 1.22; average prevalence of 7%). In general, being a student in grade 8 (vs. grade 4), being male, reporting to have fewer books, or speaking a language different from the test language at home were all risk factors characterizing random responders. The expected generalization and implications of these findings are discussed based on the observed heterogeneity across educational systems and consistency across questionnaire scales.

Keywords