IEEE Access (Jan 2023)

The Role of the Eyes: Investigating Face Cognition Mechanisms Using Machine Learning and Partial Face Stimuli

  • Ingon Chanpornpakdi,
  • Toshihisa Tanaka

DOI
https://doi.org/10.1109/ACCESS.2023.3295118
Journal volume & issue
Vol. 11
pp. 86122–86131

Abstract

The face cognition mechanism has changed during the SARS-CoV-2 pandemic because of mask wearing. Previous studies found that holistic face processing enhances face cognition ability and that covering part of the facial features lowers this ability. However, the question of why people can recognize faces despite missing cues about facial features remains unsolved. To study the face cognition mechanism, the event-related potential (ERP) evoked during a rapid serial visual presentation task is used. The ERP is often hidden under large artifacts and must be averaged across a large number of trials, but increasing the number of trials can cause fatigue and affect the evoked ERP. To overcome this limitation, we adopt machine learning and aim to investigate the partial face cognition mechanism without directly considering the pattern characteristics of the ERP. We implemented an xDAWN spatial filter covariance matrix method to enhance the data quality and a support vector machine classification model to predict the participant's event of interest using ERP components evoked in the full and partial face cognition tasks. The combination of two missing face components and the physical response was also investigated to explore the role of each face component and the possibility of reducing fatigue caused during the experiment. Our results show that the classification accuracy decreased when the eye component was missing and became lowest ($p < 0.005$) when both the eyes and mouth were absent, with an accuracy of 0.748 ± 0.092 in the button press task and 0.746 ± 0.084 in the no button press task (n.s.). We also observed that the button press error rate increased when the eyes were absent and reached its maximum when both the eyes and mouth were covered ($p < 0.05$). These results suggest that the eyes might be the most effective component, that the mouth might play a secondary role in face cognition, and that a no button press task could substitute for a button press task to reduce the workload.
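
For readers who want a concrete picture of the pipeline described in the abstract, the following is a minimal sketch, not the authors' implementation, of xDAWN-covariance feature extraction followed by a support vector machine, assuming epoched EEG data and the pyriemann and scikit-learn libraries; the data shapes, labels, and hyperparameters are illustrative placeholders only.

```python
# Minimal sketch of an xDAWN-covariance + SVM ERP classifier (assumed pipeline,
# not taken from the paper). Assumes epoched EEG X of shape
# (n_trials, n_channels, n_samples) and binary labels y (target vs. non-target).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace

# Synthetic stand-in data: 200 trials, 32 channels, 128 samples per epoch.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32, 128))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(
    XdawnCovariances(nfilter=4),      # xDAWN spatial filtering + covariance features
    TangentSpace(metric="riemann"),   # map SPD matrices to a Euclidean tangent space
    SVC(kernel="linear"),             # linear support vector machine classifier
)

# 5-fold cross-validated classification accuracy (chance level here, since X is noise).
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```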

Keywords