IEEE Access (Jan 2023)

Analysis of Facial Expressions to Estimate the Level of Engagement in Online Lectures

  • Renjun Miao,
  • Haruka Kato,
  • Yasuhiro Hatori,
  • Yoshiyuki Sato,
  • Satoshi Shioiri

DOI
https://doi.org/10.1109/ACCESS.2023.3297651
Journal volume & issue
Vol. 11
pp. 76551–76562

Abstract


The present study aimed to develop a method for estimating students’ attentional state from facial expressions during online lectures. We estimated the level of attention while students watched a video lecture by measuring reaction time (RT) to a target sound that was irrelevant to the lecture. We assumed that RT to such a stimulus would be longer when participants were focusing on the lecture than when they were not. We therefore sought to estimate how much learners focus on a lecture using RT measurements. In the experiment, each learner’s face was recorded by a video camera while the learner watched a video lecture. Facial features were analyzed to predict RT to the task-irrelevant stimulus, which was assumed to be an index of the level of attention. We applied a machine learning method, the Light Gradient Boosting Machine (LightGBM), to estimate RTs from facial features extracted as action units (AUs), which correspond to facial muscle movements, using the open-source software OpenFace. The model obtained with LightGBM indicated that RTs to the irrelevant stimuli can be estimated from AUs, suggesting that facial expressions are useful for predicting attentional states while watching lectures. We re-analyzed the data while excluding RT data from trials in which students showed sleepy faces, to test whether decreased general arousal caused by sleepiness was a significant factor in the RT lengthening observed in the experiment. The results were similar regardless of whether RTs with sleepy faces were included, indicating that facial expressions can be used to predict learners’ level of attention to video lectures.
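To make the pipeline described above concrete, the sketch below shows one way to regress reaction times on OpenFace action-unit intensities with LightGBM. It is a minimal illustration, not the authors' actual analysis: the CSV file name, the pairing of AU features with per-trial RTs, the "rt_ms" column, and the model hyperparameters are all assumptions made for this example. OpenFace's AU intensity columns do follow the "AU01_r", "AU02_r", ... naming convention used here.

```python
# Minimal sketch: predict reaction time (RT) from OpenFace action-unit (AU)
# intensities with LightGBM. The data file and column pairing are hypothetical.
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical table: one row per trial, OpenFace AU intensities plus the
# measured RT to the task-irrelevant sound for that trial.
df = pd.read_csv("features_with_rt.csv")

# OpenFace names AU intensity columns AU01_r, AU02_r, ..., AU45_r.
au_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]

X = df[au_cols]   # facial action-unit intensities (predictors)
y = df["rt_ms"]   # reaction time in milliseconds (target; assumed column name)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Gradient-boosted regression trees via LightGBM's scikit-learn API.
model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on held-out trials: {mean_absolute_error(y_test, pred):.1f} ms")
```

A held-out-trial split is used here purely for illustration; in practice one would likely cross-validate across participants so the model is evaluated on learners it has not seen.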

Keywords