Large-scale Assessments in Education (April 2024)

Investigating item complexity as a source of cross-national DIF in TIMSS math and science

  • Qi Huang,
  • Daniel M. Bolt,
  • Weicong Lyu

DOI
https://doi.org/10.1186/s40536-024-00200-3
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 24

Abstract

Background: Large-scale international assessments depend on the invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias or might instead be a methodological artifact arising from item response theory (IRT) model misspecification. Determining the validity of the source of DIF has implications for how it is handled in practice.

Method: We demonstrate a form of sensitivity analysis that can point to model misspecification induced by item complexity as a possible cause of DIF, and show how such a cause of DIF might be accommodated by generalizing the IRT model for the studied item(s) in psychometrically and psychologically plausible ways.

Results: In both simulated illustrations and empirical data from the TIMSS 2011 and TIMSS 2019 4th and 8th Grade Math and Science assessments, we found that the proposed form of IRT model generalization can substantially reduce DIF when IRT model misspecification is at least a partial cause of the observed DIF.

Conclusions: By demonstrating item complexity as a possible valid source of DIF and showing the effectiveness of the proposed approach, we recommend additional attention to model generalizations as a means of addressing and/or understanding DIF.

Keywords