International Journal of Computational Intelligence Systems (Jan 2024)

Prediction of In-Class Performance Based on MFO-ATTENTION-LSTM

  • Xue Qin,
  • Cang Wang,
  • YouShu Yuan,
  • Rui Qi

DOI
https://doi.org/10.1007/s44196-023-00395-3
Journal volume & issue
Vol. 17, no. 1
pp. 1–13

Abstract

In this paper, we present a novel approach to predicting in-class performance from course-learning log data, a task of importance for personalized education and classroom management. Specifically, a set of fine-grained features is extracted from unit learning log data to train a prediction model based on long short-term memory (LSTM). To enhance the model's accuracy, we introduce moth flame optimization-attention-LSTM (MFO-Attention-LSTM) as an improvement over the conventional LSTM-attention model. The MFO algorithm is used in place of traditional backpropagation to compute the attention-layer parameters, allowing the model to escape local optima. The proposed model outperforms the SVM, CNN, RNN, LSTM, and LSTM-Attention models in terms of F1 score. Empirical results demonstrate that the MFO-based optimization contributes significantly to the improved performance of the prediction model. In conclusion, the proposed MFO-Attention-LSTM model offers a promising solution for predicting in-class performance from course-learning log data and could provide valuable insights for personalized education and classroom management.
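To make the key idea concrete, the sketch below implements the standard moth flame optimization loop (Mirjalili's logarithmic-spiral update) as it might be used to search for attention-layer weights. This is a minimal illustration, not the authors' implementation: the fitness function here is a stand-in sphere function, whereas in the paper's setting it would be the validation loss (or one minus the F1 score) of the attention-LSTM evaluated with the candidate weights; population size, iteration count, and bounds are likewise illustrative assumptions.

```python
import numpy as np

def mfo(fitness, dim, n_moths=30, max_iter=200, lb=-1.0, ub=1.0, b=1.0, seed=0):
    """Minimize `fitness` over a dim-dimensional box with moth flame optimization.

    In the MFO-Attention-LSTM setting, each moth would encode the flattened
    attention-layer parameters, and `fitness` would run a forward pass of the
    model and return the validation loss (hypothetical wiring, for illustration).
    """
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, size=(n_moths, dim))   # candidate parameter vectors
    flames = moths.copy()
    flame_fit = np.array([fitness(m) for m in flames])
    order = np.argsort(flame_fit)                      # flames = moths sorted by fitness
    flames, flame_fit = flames[order], flame_fit[order]

    for it in range(max_iter):
        # The number of flames shrinks linearly so the swarm converges
        # on the best solutions in later iterations.
        flame_no = int(round(n_moths - it * (n_moths - 1) / max_iter))
        # Spiral shape parameter r decreases linearly from -1 to -2.
        r = -1.0 + it * (-1.0 / max_iter)

        for i in range(n_moths):
            f = flames[min(i, flame_no - 1)]           # flame assigned to this moth
            d = np.abs(f - moths[i])                   # distance to the flame
            t = (r - 1.0) * rng.random(dim) + 1.0      # t drawn from [r, 1]
            # Logarithmic spiral flight around the flame.
            moths[i] = d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + f
        moths = np.clip(moths, lb, ub)

        # Re-rank the union of old flames and new moths; keep the best n_moths.
        moth_fit = np.array([fitness(m) for m in moths])
        pool = np.vstack([flames, moths])
        pool_fit = np.concatenate([flame_fit, moth_fit])
        order = np.argsort(pool_fit)[:n_moths]
        flames, flame_fit = pool[order], pool_fit[order]

    return flames[0], flame_fit[0]

# Stand-in objective; replace with the model's validation loss in practice.
best_w, best_loss = mfo(lambda w: float(np.sum(w ** 2)), dim=16)
print(best_loss)
```

Because MFO only needs fitness evaluations, not gradients, it can move candidate attention weights across regions where gradient descent would stall, which is the mechanism behind the "escaping local optima" claim in the abstract.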

Keywords