Computers and Education: Artificial Intelligence (Dec 2024)

Investigating algorithmic bias in student progress monitoring

  • Jamiu Adekunle Idowu,
  • Adriano Soares Koshiyama,
  • Philip Treleaven

Journal volume & issue
Vol. 7
p. 100267

Abstract


This research investigates bias in AI algorithms used for monitoring student progress, specifically focusing on bias related to age, disability, and gender. The study is motivated by incidents such as the UK A-level grading controversy, which demonstrated the real-world implications of biased algorithms. Using the Open University Learning Analytics Dataset, the research evaluates fairness with metrics such as ABROCA (Absolute Between-ROC Area), Average Odds Difference, and Equality of Opportunity Difference. The analysis is structured into three experiments. The first experiment examines fairness as an attribute of the data sources and reveals that institutional data is the primary contributor to model discrimination, followed by Virtual Learning Environment data, while assessment data is the least biased. In the second experiment, the research introduces the Optimal Time Index, which pinpoints Day 60 of an average 255-day course as the optimal time for predicting student outcomes, balancing timely interventions, model accuracy, and efficient resource allocation. The third experiment implements bias mitigation strategies throughout the model's life cycle, achieving fairness without compromising accuracy. Finally, this study introduces the Student Progress Card, designed to provide actionable, personalized feedback for each student.
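
For reference, the group fairness metrics named above are commonly defined as follows; these are the standard formulations from the algorithmic fairness literature, and the paper may use signed or absolute variants. Writing TPR_g and FPR_g for the true and false positive rates of group g (u = unprivileged, p = privileged):

\text{Equality of Opportunity Difference} = \mathrm{TPR}_u - \mathrm{TPR}_p

\text{Average Odds Difference} = \tfrac{1}{2}\big[(\mathrm{FPR}_u - \mathrm{FPR}_p) + (\mathrm{TPR}_u - \mathrm{TPR}_p)\big]

\text{ABROCA} = \int_0^1 \big| \mathrm{ROC}_u(t) - \mathrm{ROC}_p(t) \big| \, dt

where ROC_g(t) is group g's ROC curve evaluated at false positive rate t. Values near zero on all three metrics indicate that the model behaves similarly for both groups.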

Keywords