Heliyon (Aug 2024)

Emotional AI in education and toys: Investigating moral risk awareness in the acceptance of AI technologies from a cross-sectional survey of the Japanese population

  • Manh-Tung Ho
  • Peter Mantello
  • Quan-Hoang Vuong

Journal volume & issue
Vol. 10, no. 16
p. e36251

Abstract


Emotional artificial intelligence (AI), i.e., affective computing technology, is rapidly reshaping the education of young minds worldwide. In Japan, government and commercial stakeholders are promoting emotional AI not only as a neoliberal, cost-saving benefit but also as a heuristic tool that can improve the learning experience at home and in the classroom. Nevertheless, critics warn of a myriad of risks and harms posed by the technology, such as privacy violations, unresolved deeper cultural and systemic issues, machinic parentalism, and the danger of imposing attitudinal conformity. This study brings together the Technology Acceptance Model and Moral Foundations Theory to examine the cultural construal of the risks and rewards of applying emotional AI technologies. It explores Japanese citizens’ perceptions of emotional AI in education and in children’s toys via analysis of a final sample of 2,000 Japanese respondents, with five age groups (20s–60s) and two sexes equally represented. The linear regression models for determinants of attitude toward emotional AI in education and in toys account for 44 % and 38 % of the variance in the data, respectively. The analyses reveal a significant negative correlation between attitudes toward emotional AI in both schools and toys and concerns about privacy violations or the dystopian nature of constantly monitoring children’s and students’ emotions with AI (Education: β_DystopianConcern = −.094***; Toys: β_PrivacyConcern = −.199***). Worries about autonomy and bias, however, show mixed results, hinting at cultural nuances in Japanese values and at the novelty of these technologies. Concurring with the empirical literature on Moral Foundations Theory, the chi-square (χ²) test shows that Japanese female respondents express greater fear of the potential harms of emotional AI technologies to young people’s privacy, autonomy, and fair treatment, and of the misuse of their data (p < 0.001). Policy implications of these results, and insights into the impact of emotional AI on the future of human-machine interaction, are also provided.

Keywords