Computers and Education: Artificial Intelligence (Dec 2024)

Evaluating the psychometric properties of ChatGPT-generated questions

  • Shreya Bhandari
  • Yunting Liu
  • Yerin Kwak
  • Zachary A. Pardos

Journal volume & issue
Vol. 7, p. 100284

Abstract


Little is known about how LLM-generated questions compare to gold-standard, traditional formative assessments in terms of their difficulty and discrimination parameters, properties valued in the field of psychometric measurement. We follow a rigorous measurement methodology to compare a set of ChatGPT-generated questions, produced from a single lesson summary, to existing questions from a published Creative Commons textbook. To do this, we collected and analyzed responses from 207 test respondents who answered questions from both item pools and used a linking methodology to compare IRT properties between the two pools. We find that neither the difficulty nor the discrimination parameters of the 15 items in each pool differ to a statistically significant degree, with some evidence that the ChatGPT items were marginally better at differentiating between respondent abilities. Response time also does not differ significantly between the two sources of items. The ChatGPT-generated items showed evidence of unidimensionality and did not affect the unidimensionality of the original set of items when the two were tested together. Finally, through a fine-grained learning objective labeling analysis, we found greater similarity between the learning objective distribution of the ChatGPT-generated items and that of the items from the target OpenStax lesson (0.9666) than between the ChatGPT-generated items and adjacent OpenStax lessons (0.6859 for the previous lesson and 0.6153 for the subsequent lesson). These results support our conclusion that generative AI can produce algebra items of quality similar to existing textbook questions measuring the same construct or constructs.
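For readers unfamiliar with the quantities compared above, the following Python sketch (not the authors' code; all parameter values are hypothetical) illustrates the 2PL item response function, whose discrimination (a) and difficulty (b) parameters are what the study compares between item pools, and a cosine similarity of the kind that could yield scores like 0.9666 for two learning objective distributions (the abstract does not specify the exact similarity measure used).

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability that a respondent with
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical parameters for illustration only.
theta = np.linspace(-3, 3, 7)          # a range of respondent abilities
print(p_correct(theta, a=1.2, b=0.0))  # larger a -> steeper, more discriminating curve

def cosine_similarity(u, v):
    """Cosine similarity between two label-frequency vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical learning-objective count vectors for two item pools.
gpt_pool = [5, 4, 3, 2, 1]
target_lesson = [5, 5, 3, 1, 1]
print(cosine_similarity(gpt_pool, target_lesson))
```

Under this reading, two item pools drawing on the same learning objectives in similar proportions produce a similarity near 1, while pools emphasizing different objectives score lower, consistent with the contrast between the target lesson (0.9666) and adjacent lessons (0.6859, 0.6153).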

Keywords