Education Sciences (Aug 2024)
Memory-Based Dynamic Bayesian Networks for Learner Modeling: Towards Early Prediction of Learners’ Performance in Computational Thinking
Abstract
Artificial intelligence (AI) has demonstrated significant potential in addressing educational challenges in digital learning. Despite this potential, there are still concerns about the interpretability and trustworthiness of AI methods. Dynamic Bayesian networks (DBNs) not only provide interpretability and the ability to integrate data-driven insights with expert judgment for enhanced trustworthiness but also effectively process temporal dynamics and relationships in data, which are crucial for early predictive modeling tasks. This research introduces an approach for the temporal modeling of learners’ computational thinking abilities that incorporates higher-order influences of latent variables (hereafter referred to as the memory of the model) and accordingly predicts learners’ performance early. Our findings on educational data from the AutoThinking game indicate that, when using only first-order influences, our proposed model can predict learners’ performance early, with an overall accuracy of 86% (averaged over time stamps 0, 5, and 9) and an AUC of 94% (at the last time stamp) during cross-validation, and 91% accuracy and 98% AUC (at the last time stamp) in a holdout test. The introduction of higher-order influences improves model accuracy in both cross-validation and holdout tests by roughly 4% and improves the AUC at time stamp 0 by roughly 2%. This suggests that integrating higher-order influences into a DBN not only potentially improves the model’s predictive accuracy during the cross-validation phase but also enhances its overall and time stamp-specific generalizability. DBNs with higher-order influences offer a trustworthy and interpretable tool for educators to foresee and support learning progression.
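The "memory" idea the abstract describes can be illustrated with a minimal sketch: a DBN whose latent mastery state depends on the latent states at the two previous time steps (a second-order influence) rather than only the immediately preceding one. All probability values, the binary state space, and the function names below are illustrative assumptions, not the paper's actual model or parameters; the paper's DBN is learned from AutoThinking game data.

```python
import numpy as np

# Hypothetical second-order DBN sketch (assumed parameters, not the
# paper's model): a binary latent "mastery" variable s_t with transition
# P(s_t | s_{t-1}, s_{t-2}) and a binary observed performance node o_t.

# Transition tensor T[s_{t-2}, s_{t-1}, s_t]: the extra leading axis is
# what makes the influence second-order ("memory" of one more step).
T = np.array([
    [[0.9, 0.1],    # s_{t-2}=0, s_{t-1}=0
     [0.5, 0.5]],   # s_{t-2}=0, s_{t-1}=1
    [[0.6, 0.4],    # s_{t-2}=1, s_{t-1}=0
     [0.1, 0.9]],   # s_{t-2}=1, s_{t-1}=1
])

# Emission matrix E[s, o]: P(observed success o=1 | latent mastery s).
E = np.array([
    [0.80, 0.20],   # low mastery: mostly unsuccessful actions
    [0.25, 0.75],   # high mastery: mostly successful actions
])

def predict_performance(obs):
    """Forward filtering over the joint of the last two latent states.

    Returns, for each time step, the predicted probability of a
    successful action *before* that step's observation is seen,
    which is the early-prediction quantity of interest.
    """
    # Joint belief b[i, j] over (s_{t-1}=i, s_t=j); uninformative start.
    b = np.full((2, 2), 0.25)
    preds = []
    for o in obs:
        marg = b.sum(axis=0)                 # marginal P(s_t | past obs)
        preds.append(float(marg @ E[:, 1]))  # predicted P(success)
        b = b * E[:, o][None, :]             # condition on observation o
        b /= b.sum()
        # Advance one step: new joint over (s_t, s_{t+1}) uses the
        # second-order transition, summing out s_{t-1}.
        b = np.einsum('ij,ijk->jk', b, T)
    return preds
```

A first-order DBN would collapse `T` to a 2 x 2 matrix; keeping the extra axis lets evidence from two steps back shape the next-state prediction, which is the mechanism the abstract credits for the accuracy and AUC gains.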
Keywords