Scientific Reports (Mar 2023)

Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI

  • Fabrizia Auletta,
  • Rachel W. Kallen,
  • Mario di Bernardo,
  • Michael J. Richardson

DOI
https://doi.org/10.1038/s41598-023-31807-1
Journal volume & issue
Vol. 13, no. 1
pp. 1–14

Abstract

This study investigated the utility of supervised machine learning (SML) and explainable artificial intelligence (AI) techniques for modeling and understanding human decision-making during multiagent task performance. Long short-term memory (LSTM) networks were trained to predict the target selection decisions of expert and novice players completing a multiagent herding task. The results revealed that the trained LSTM models not only accurately predicted the target selection decisions of expert and novice players, but also made these predictions at timescales that preceded a player's conscious intent. Importantly, the models were also expertise-specific: models trained to predict the target selection decisions of experts could not accurately predict the target selection decisions of novices (and vice versa). To understand what differentiated expert and novice target selection decisions, we employed the explainable-AI technique SHapley Additive exPlanations (SHAP) to identify which informational features (variables) most influenced model predictions. The SHAP analysis revealed that experts were more reliant on information about target direction of heading and the location of coherders (i.e., other players) than novices. The implications and assumptions underlying the use of SML and explainable-AI techniques for investigating and understanding human decision-making are discussed.
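The abstract does not include implementation details, but the modeling step it describes (an LSTM classifier over time-series state features) can be sketched roughly as below. This is a minimal illustration, not the authors' pipeline: the architecture, window length, feature count, and number of candidate targets are all hypothetical stand-ins.

```python
# Illustrative sketch only: the abstract does not specify the authors'
# architecture, features, or window length, so every size below is a
# hypothetical stand-in.
import numpy as np
from tensorflow.keras import layers, models

T, F, N_TARGETS = 50, 8, 4   # time steps, state features, candidate targets

model = models.Sequential([
    layers.Input(shape=(T, F)),                      # window of trial state
    layers.LSTM(64),                                 # sequence encoder
    layers.Dense(N_TARGETS, activation="softmax"),   # one class per target
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for recorded herding-trial windows (X) and the
# target each player actually selected next (y).
X = np.random.randn(256, T, F).astype("float32")
y = np.random.randint(0, N_TARGETS, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

The explainability step pairs naturally with the shap library. The abstract names SHAP but not a specific explainer, so the choice of GradientExplainer and the aggregation below (continuing from the sketch above) are assumptions:

```python
# SHAP attribution over the trained model. GradientExplainer is one
# plausible choice for a Keras network; which SHAP variant the authors
# used is not stated in the abstract.
import shap

explainer = shap.GradientExplainer(model, X[:64])   # background sample
shap_values = explainer.shap_values(X[64:96])       # windows to explain

# Classic shap versions return one (n, T, F) array per output class;
# average absolute attributions over samples, time steps, and classes
# to rank the input features by influence on model predictions.
per_class = [np.abs(sv).mean(axis=(0, 1)) for sv in shap_values]
importance = np.mean(per_class, axis=0)
for i, score in enumerate(importance):
    print(f"feature_{i}: {score:.4f}")
```

In an analysis of this kind, systematically larger mean absolute SHAP values for, say, target-heading and coherder-location features in the expert model than in the novice model would correspond to the expertise difference reported in the abstract.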