Physical Review Physics Education Research (Jun 2023)
How do physics students evaluate artificial intelligence responses on comprehension questions? A study on the perceived scientific accuracy and linguistic quality of ChatGPT
Abstract
This study evaluated how students perceive the linguistic quality and scientific accuracy of ChatGPT responses to physics comprehension questions. A total of 102 first- and second-year physics students were confronted with three questions of increasing difficulty from introductory mechanics (rolling motion, waves, and fluid dynamics). Each question was presented with four different responses. All responses were attributed to ChatGPT, but in reality one of them was a sample solution created by the researchers. All ChatGPT responses obtained in this study were wrong, imprecise, incomplete, or misleading. We found little difference in the perceived linguistic quality between the ChatGPT responses and the sample solution. However, the students rated the overall scientific accuracy of the responses significantly differently, with the sample solution being rated best for the questions of low and medium difficulty. The discrepancy between the sample solution and the ChatGPT responses increased with the level of self-assessed knowledge of the question content. For the question of highest difficulty (fluid dynamics), which was unfamiliar to most students, a ChatGPT response was rated just as highly as the sample solution. Thus, this study provides data on students’ perceptions of ChatGPT responses and the factors influencing those perceptions. The results highlight the need for careful evaluation of ChatGPT responses by both instructors and students, particularly regarding scientific accuracy. Therefore, future research could explore the potential of similar “spot the bot” activities in physics education to foster students’ critical thinking skills.