Applied Sciences (Sep 2024)

Using Machine Learning to Calibrate Automated Performance Assessment in a Virtual Laboratory: Exploring the Trade-Off between Accuracy and Explainability

  • Vasilis Zafeiropoulos,
  • Dimitris Kalles

DOI: https://doi.org/10.3390/app14177944
Journal volume & issue: Vol. 14, no. 17, p. 7944

Abstract

Hellenic Open University has been developing Onlabs, a virtual biology laboratory simulating its on-site laboratory, so that its students can be trained before the on-site learning activities. The evaluation of user performance in Onlabs is based on a scoring algorithm, which can be optimized by means of Genetic Algorithms and Artificial Neural Networks. Moreover, for a particular experimental procedure (microscoping), we have experimented with incorporating background knowledge about the procedure, which allows it to be broken down into a series of conceptually linked steps in a hierarchical fashion. In this work, we review the flat and hierarchical modes used for the calibration of the automated assessment mechanism and offer an experimental comparison of both approaches, with the aim of devising automated scoring schemes that are fit for training in a distance-learning context. Overall, the genetic algorithm fails to deliver good convergence results in the non-hierarchical setting but performs better in the hierarchical one. The neural network, on the other hand, converges most of the time, with the non-hierarchical network achieving slightly better convergence than the hierarchical one; the latter, however, delivers a smoother and more realistic assessment mechanism.
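
To illustrate the kind of calibration the abstract describes, the following is a minimal sketch, not the authors' implementation, of a genetic algorithm tuning the weights of a flat scoring function against expert-assigned session scores. The session features, the 0-10 expert scale, and all variable names (sessions, expert_scores, score, fitness) are illustrative assumptions.

```python
# Minimal sketch: calibrating flat scoring weights with a genetic algorithm.
# All data and names below are hypothetical, for illustration only.
import random

random.seed(0)

N_FEATURES = 5        # e.g. counts of correct actions, errors, omissions, ...
POP_SIZE = 30
GENERATIONS = 100

# Hypothetical training data: one feature vector per recorded session and the
# score an instructor assigned to that session.
sessions = [[random.uniform(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
true_weights = [4.0, -2.0, 3.0, 1.0, -1.5]                      # assumed "expert" weighting
expert_scores = [sum(w * f for w, f in zip(true_weights, s)) for s in sessions]

def score(weights, features):
    """Flat scoring: a weighted sum of session features."""
    return sum(w * f for w, f in zip(weights, features))

def fitness(weights):
    """Negative mean squared error against the expert scores (higher is better)."""
    errors = [(score(weights, s) - y) ** 2 for s, y in zip(sessions, expert_scores)]
    return -sum(errors) / len(errors)

def crossover(a, b):
    """Single-point crossover of two weight vectors."""
    point = random.randrange(1, N_FEATURES)
    return a[:point] + b[point:]

def mutate(w, rate=0.2, scale=0.5):
    """Gaussian perturbation of each weight with probability `rate`."""
    return [wi + random.gauss(0, scale) if random.random() < rate else wi for wi in w]

population = [[random.uniform(-5, 5) for _ in range(N_FEATURES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 2]          # keep the better half
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print("calibrated weights:", [round(w, 2) for w in best])
print("fitness (neg. MSE):", round(fitness(best), 4))
```

In the hierarchical mode described in the paper, a scheme along these lines would instead calibrate weights for conceptually linked groups of steps rather than a single flat feature vector.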

Keywords