Computers and Education: Artificial Intelligence (Dec 2024)

Developing and validating measures for AI literacy tests: From self-reported to objective measures

  • Thomas K.F. Chiu,
  • Yifan Chen,
  • King Woon Yau,
  • Ching-sing Chai,
  • Helen Meng,
  • Irwin King,
  • Savio Wong,
  • Yeung Yam

Journal volume & issue
Vol. 7
p. 100282

Abstract


The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy, because self-perceptions are seldom an accurate account of actual ability. This argument is supported by international assessment programs that use objective measures to assess scientific, mathematical, digital, and computational literacy. Furthermore, because AI education research is still in its infancy, the current definition of AI literacy in the literature may not meet the needs of young students. Therefore, this study aims to develop and validate an AI literacy test for school students within the interdisciplinary project known as AI4future. To accomplish this goal, engineering and education researchers created and selected 25 multiple-choice questions, and school teachers validated them while developing an AI curriculum for middle schools. A total of 2390 students in grades 7 to 9 took the test. We used a Rasch model to investigate the discrimination, reliability, and validity of the items. The results showed that the model met the unidimensionality assumption and yielded a set of reliable and valid items, indicating the quality of the test. The test enables AI education researchers and practitioners to appropriately evaluate their AI-related education interventions.
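For readers unfamiliar with the psychometric machinery, the dichotomous Rasch model assigns each student p an ability theta_p and each item i a difficulty b_i, with the probability of a correct answer given by P(X_pi = 1) = exp(theta_p - b_i) / (1 + exp(theta_p - b_i)). The sketch below is a minimal, hypothetical illustration in Python/numpy of a joint maximum-likelihood fit on simulated data, not the authors' actual analysis pipeline (operational Rasch analyses typically use dedicated software such as the R packages eRm or TAM); all names and settings here are assumptions for illustration.

```python
import numpy as np

def fit_rasch(responses, n_iter=2000, lr=0.5):
    """Joint maximum-likelihood fit of a dichotomous Rasch model.

    responses: (n_persons, n_items) binary array of scored answers.
    Returns estimated person abilities (theta) and item difficulties (b).
    """
    X = np.asarray(responses, dtype=float)
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)  # person ability estimates
    b = np.zeros(n_items)        # item difficulty estimates
    for _ in range(n_iter):
        # Model-implied probability of a correct response per person-item pair.
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        resid = X - p            # observed minus expected responses
        # Gradient ascent on the log-likelihood: at the optimum, each
        # person's and item's expected score matches the observed score.
        theta += lr * resid.mean(axis=1)
        b -= lr * resid.mean(axis=0)
        b -= b.mean()            # anchor mean difficulty at 0 for identifiability
    return theta, b

# Example with simulated data: 200 students answering 25 items.
# (Students with all-correct or all-wrong patterns have no finite ML
# estimate; real analyses drop or adjust such cases.)
rng = np.random.default_rng(0)
true_theta = rng.normal(0, 1, 200)
true_b = rng.normal(0, 1, 25)
probs = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
data = (rng.random((200, 25)) < probs).astype(int)
theta_hat, b_hat = fit_rasch(data)
```

In a real validation study like this one, the estimated item difficulties would then be examined for spread along the ability scale, and item fit statistics (such as infit/outfit mean squares, which dedicated Rasch software reports) would be used to judge discrimination, reliability, and the unidimensionality assumption the abstract mentions.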

Keywords