Physical Review Physics Education Research (Jul 2025)

Multilingual performance of a multimodal artificial intelligence system on multisubject physics concept inventories

  • Gerd Kortemeyer,
  • Marina Babayeva,
  • Giulia Polverini,
  • Ralf Widenhorn,
  • Bor Gregorcic

DOI: https://doi.org/10.1103/98hg-rkrf
Journal volume & issue: Vol. 21, No. 2, p. 020101

Abstract

We investigate the multilingual and multimodal performance of a large language model-based artificial intelligence (AI) system, GPT-4o, using a diverse set of physics concept inventories spanning multiple languages and subject categories. The inventories, sourced from the PhysPort website, cover classical physics topics such as mechanics, electromagnetism, optics, and thermodynamics, as well as relativity, quantum mechanics, astronomy, mathematics, and laboratory skills. Unlike previous text-only studies, we uploaded the inventories as images to reflect what a student would see on paper, thereby assessing the system’s multimodal functionality. Our results indicate variation in performance across subjects, with laboratory skills standing out as the weakest. We also observe differences across languages, with English and European languages showing the strongest performance. Notably, the relative difficulty of an inventory item is largely independent of the language of the test. When comparing AI results to existing literature on student performance, we find that the AI system outperforms average postinstruction undergraduate students in all subject categories except laboratory skills. Furthermore, the AI performs worse on items requiring visual interpretation of images than on those that are purely text-based. While our exploratory findings show GPT-4o’s potential usefulness in physics education, they highlight the critical need for instructors to foster students’ ability to critically evaluate AI outputs, adapt curricula thoughtfully in response to AI advancements, and address equity concerns associated with AI integration.
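The abstract notes that inventory items were uploaded as images, rather than transcribed text, to probe GPT-4o's multimodal functionality. The snippet below is a minimal sketch of how such an image-based query could be issued with the OpenAI Python SDK; the prompt wording, the file name, and the helper function `ask_inventory_item` are illustrative assumptions, not the authors' actual pipeline.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_inventory_item(image_path: str, language: str = "English") -> str:
    """Send a scanned concept-inventory item to GPT-4o as an image and return its answer."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            f"The attached image shows a physics concept-inventory item in {language}. "
                            "Answer with the letter of the option you consider correct, then briefly explain."
                        ),
                    },
                    {
                        # The scanned item is passed inline as a base64-encoded data URL
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encoded}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


# Example (hypothetical file name): query one item scanned to PNG
# print(ask_inventory_item("inventory_item_07_es.png", language="Spanish"))
```

Presenting each item as an image keeps the stimulus close to what a student would see on paper, including diagrams and layout, which is what allows the study to compare performance on image-dependent versus purely text-based items.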