Advances in Medical Education and Practice (Jul 2024)

Leveraging Narrative Feedback in Programmatic Assessment: The Potential of Automated Text Analysis to Support Coaching and Decision-Making in Programmatic Assessment

  • Nair BR,
  • Moonen-van Loon JMW,
  • van Lierop M,
  • Govaerts M

Journal volume & issue
Vol. 15
pp. 671–683

Abstract

Balakrishnan R Nair,1 Joyce MW Moonen-van Loon,2 Marion van Lierop,3 Marjan Govaerts2
1University of Newcastle, Centre for Medical Professional Development, Newcastle, Australia; 2School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands; 3Department of Family Medicine, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands

Correspondence: Joyce MW Moonen-van Loon; Balakrishnan R Nair, Email [email protected]; [email protected]

Current assessment approaches increasingly use narratives to support learning, coaching, and high-stakes decision-making. Interpretation of narratives, however, can be challenging and time-consuming, potentially resulting in suboptimal or inadequate use of assessment data. Supporting learners, coaches, and decision-makers in the use and interpretation of these narratives therefore seems essential.

Methods: We explored the utility of automated text analysis techniques to support interpretation of narrative assessment data, collected across 926 clinical assessments of 80 trainees in an International Medical Graduates’ licensing program in Australia. We employed topic modelling and sentiment analysis techniques to automatically identify predominant feedback themes as well as the sentiment polarity of feedback messages. We furthermore sought to examine the associations between feedback polarity, numerical performance scores, and overall judgments about task performance.

Results: Topic modelling yielded three distinctive feedback themes: Medical Skills, Knowledge, and Communication & Professionalism. The volume of feedback varied across topics and clinical settings, but assessors used more words when providing feedback to trainees who did not meet competence standards. Although sentiment polarity and performance scores did not appear to correlate at the level of single assessments, findings showed a strong positive correlation between average performance scores and average algorithmically assigned sentiment polarity.

Discussion: This study shows that automated text analysis techniques can pave the way for a more efficient, structured, and meaningful learning, coaching, and assessment experience for learners, coaches, and decision-makers alike. When used appropriately, these techniques may facilitate more meaningful and in-depth conversations about assessment data by supporting stakeholders in the interpretation of large amounts of feedback. Future research is vital to fully unlock the potential of automated text analysis and to support its meaningful integration into educational practices.

Keywords: programmatic assessment, narrative feedback, learning analytics, text mining, international medical graduates
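The pipeline described in the Methods rests on two standard text-mining steps: unsupervised topic modelling over the feedback comments and polarity scoring of each comment, followed by correlating trainee-level averages with performance scores. The sketch below illustrates that kind of pipeline; the specific tools (gensim's LDA, NLTK's VADER sentiment lexicon, SciPy's Pearson correlation) and the toy records are assumptions for illustration, not the software or data reported in the study.

# Illustrative sketch (not the study's actual code): LDA topic modelling and
# lexicon-based sentiment scoring of narrative feedback, then correlation of
# trainee-level average polarity with average performance scores.
import nltk
from gensim import corpora, models
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.stats import pearsonr

nltk.download("vader_lexicon", quiet=True)

# Hypothetical assessment records: (trainee_id, narrative feedback, numeric score)
records = [
    ("T01", "clear history taking and sound clinical reasoning", 4.5),
    ("T01", "should deepen knowledge of local prescribing guidelines", 3.0),
    ("T02", "excellent rapport and respectful explanation of the management plan", 5.0),
    ("T02", "examination technique was incomplete and poorly structured", 2.5),
    ("T03", "confident procedural skills but curt communication with nursing staff", 3.5),
    ("T03", "good knowledge of guidelines and safe prescribing", 4.0),
]

# Topic modelling: surface recurring feedback themes (e.g. skills, knowledge,
# communication) directly from the comments.
tokenised = [text.split() for _, text, _ in records]
dictionary = corpora.Dictionary(tokenised)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenised]
lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, top_words in lda.print_topics(num_words=5):
    print(f"Topic {topic_id}: {top_words}")

# Sentiment polarity per comment (VADER compound score in [-1, 1]).
sia = SentimentIntensityAnalyzer()
polarities = [sia.polarity_scores(text)["compound"] for _, text, _ in records]

# Aggregate per trainee before correlating: single-comment polarity is noisy,
# but trainee-level averages can be compared with average scores.
trainees = sorted({tid for tid, _, _ in records})
avg_polarity, avg_score = [], []
for t in trainees:
    rows = [(p, s) for (tid, _, s), p in zip(records, polarities) if tid == t]
    avg_polarity.append(sum(p for p, _ in rows) / len(rows))
    avg_score.append(sum(s for _, s in rows) / len(rows))

r, p_value = pearsonr(avg_polarity, avg_score)
print(f"Correlation of trainee-level averages: r = {r:.2f} (p = {p_value:.2f})")

Averaging per trainee before correlating mirrors the abstract's finding that polarity and scores align at the aggregate level rather than at the level of single assessments.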
