Information (Oct 2023)

Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs

  • Bogdan Nicula,
  • Mihai Dascalu,
  • Tracy Arner,
  • Renu Balyan,
  • Danielle S. McNamara

DOI: https://doi.org/10.3390/info14100567
Journal volume & issue: Vol. 14, no. 10, p. 567

Abstract

Text comprehension is an essential skill in today’s information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers of Science, Technology, Engineering, and Mathematics (STEM) texts. The experiments relied on a corpus of three datasets (N = 11,833) with self-explanations annotated on four dimensions: three comprehension strategies (i.e., bridging, elaboration, and paraphrasing) and overall quality. Besides FLAN-T5, we also considered GPT-3.5-turbo to establish a stronger baseline. Our experiments indicated that performance improved with fine-tuning, with larger model sizes, and with examples provided via the prompt. Our best model, based on the pretrained FLAN-T5 XXL, obtained a weighted F1-score of 0.721, surpassing the 0.699 F1-score previously obtained with smaller models (i.e., RoBERTa).
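To make the assessment setup concrete, the sketch below illustrates how a FLAN-T5 checkpoint from Hugging Face Transformers could be prompted to rate one comprehension strategy for a single self-explanation. This is not the authors' code: the prompt wording, rating scale, and example texts are assumptions for illustration, and a smaller checkpoint is used for brevity (the paper's best results relied on the XXL variant).

```python
# Minimal sketch (assumed setup, not the paper's pipeline): prompting FLAN-T5 to
# rate the "paraphrasing" strategy of one self-explanation on a hypothetical 0-2 scale.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-base"  # the paper's best model used the XXL variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

source_text = "Plants convert sunlight into chemical energy through photosynthesis."
self_explanation = (
    "So the plant is basically making its own food from light, like charging a battery."
)

# Hypothetical prompt: ask the model for an ordinal score for one strategy.
prompt = (
    "Rate the level of paraphrasing in the self-explanation on a scale "
    "from 0 (none) to 2 (high).\n"
    f"Source text: {source_text}\n"
    f"Self-explanation: {self_explanation}\n"
    "Paraphrasing score:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a setup like this, the same pattern would be repeated for each annotated dimension (bridging, elaboration, paraphrasing, and overall quality), with predictions compared against human labels via a weighted F1-score.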

Keywords