Computers and Education: Artificial Intelligence (Jan 2023)

Assessing student errors in experimentation using artificial intelligence and large language models: A comparative study with human raters

  • Arne Bewersdorff,
  • Kathrin Seßler,
  • Armin Baur,
  • Enkelejda Kasneci,
  • Claudia Nerdel

Journal volume & issue
Vol. 5
p. 100177

Abstract


Identifying logical errors in complex, incomplete, or even contradictory and overall heterogeneous data such as students’ experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student protocols, an Artificial Intelligence (AI) system based on the GPT-3.5 and GPT-4 series was developed and tested against human raters. Our results indicate varying levels of accuracy in error detection between the AI system and human raters. The AI system reliably identifies many fundamental student errors: for instance, it detects when a student focuses the hypothesis not on the dependent variable but solely on an expected observation (acc. = 0.90), when a student modifies the trials during an ongoing investigation (acc. = 1.00), and whether a student conducts valid test trials (acc. = 0.82). Identifying other, typically more complex errors, such as whether a student conducts a valid control trial (acc. = 0.60), poses a greater challenge. This research not only explores the utility of AI in educational settings but also contributes to the understanding of the capabilities of LLMs in error detection in inquiry-based learning such as experimentation.
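The reported accuracies compare AI-assigned error labels with human-rater labels per error category. A minimal sketch of that comparison is shown below; the labels, protocol count, and function name are illustrative assumptions, not the authors' actual code or data.

```python
def accuracy(ai_labels, human_labels):
    """Fraction of protocols where the AI label matches the human-rater label."""
    assert len(ai_labels) == len(human_labels)
    matches = sum(a == h for a, h in zip(ai_labels, human_labels))
    return matches / len(ai_labels)

# Hypothetical binary labels (1 = error present, 0 = absent) for ten protocols
# on a single error category, e.g. "invalid control trial".
ai_labels    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
human_labels = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(f"acc. = {accuracy(ai_labels, human_labels):.2f}")  # → acc. = 0.80
```

Computed per error category across all 65 protocols, this yields category-level accuracies like those quoted in the abstract (e.g. acc. = 0.90 for hypothesis errors).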

Keywords