Scientific Reports (Apr 2023)

Automatic evaluation-feedback system for automated social skills training

  • Takeshi Saga,
  • Hiroki Tanaka,
  • Yasuhiro Matsuda,
  • Tsubasa Morimoto,
  • Mitsuhiro Uratani,
  • Kosuke Okazaki,
  • Yuichiro Fujimoto,
  • Satoshi Nakamura

DOI
https://doi.org/10.1038/s41598-023-33703-0
Journal volume & issue
Vol. 13, no. 1
pp. 1–14

Abstract


Social skills training (SST), a rehabilitation program for improving daily interpersonal communication, has been used for more than 40 years. Although demand for such training is increasing, its accessibility is limited by a lack of experienced trainers. To tackle this issue, automated SST systems have been studied for years. An evaluation-feedback pipeline for social skills is a crucial component of an automated SST system. Unfortunately, research that considers both the evaluation and feedback parts of such automation remains insufficient. In this paper, we collected and analyzed the characteristics of a human–human SST dataset consisting of 19 healthy controls, 15 participants with schizophrenia, and 16 participants with autism spectrum disorder (ASD), comprising 276 sessions labeled with scores on six clinical measures. Based on our analysis of this dataset, we developed an automated SST evaluation-feedback system under the supervision of professional, experienced SST trainers. We identified users' preferred or most acceptable feedback methods by running a user study that varied two conditions: the presence or absence of recorded video of the users' role-plays, and the amount of positive and corrective feedback. We confirmed reasonable performance of our social-skill-score estimation models, which serve as the system's evaluation part, with a maximum Spearman's correlation coefficient of 0.68. For the feedback part, our user study found that participants better understood which aspects they needed to improve when they watched recorded videos of their own performance. In terms of the amount of feedback, participants most preferred a 2-positive/1-corrective format. Since the average amount of feedback preferred by the participants nearly equaled that given by experienced trainers in human–human SSTs, our results suggest the practical future possibility of an automated evaluation-feedback system that complements SST conducted by professional trainers.
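
As a rough illustration of the evaluation metric reported above (not the authors' actual pipeline), the following minimal sketch shows how Spearman's correlation between model-estimated and trainer-assigned social-skill scores might be computed with SciPy; the score arrays are hypothetical placeholders.

```python
# Minimal sketch: comparing model-estimated social-skill scores against
# trainer-assigned ground-truth scores using Spearman's rank correlation.
# The arrays below are hypothetical placeholders, not data from the paper.
from scipy.stats import spearmanr

trainer_scores = [3, 4, 2, 5, 4, 3, 2, 5]                     # ground-truth scores on a clinical measure
estimated_scores = [2.8, 4.1, 2.5, 4.7, 3.9, 3.2, 2.1, 4.9]   # hypothetical model outputs

rho, p_value = spearmanr(trainer_scores, estimated_scores)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```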