Humanities & Social Sciences Communications (Jul 2023)

Machines that feel: behavioral determinants of attitude towards affect recognition technology—upgrading technology acceptance theory with the mindsponge model

  • Peter Mantello,
  • Manh-Tung Ho,
  • Minh-Hoang Nguyen,
  • Quan-Hoang Vuong

DOI
https://doi.org/10.1057/s41599-023-01837-1
Journal volume & issue
Vol. 10, no. 1
pp. 1–16

Abstract

The rise of emotional AI signals a new era in human-machine relations in which intelligent machines not only feel but also feed on human emotions as statistical fodder, with the goal of reshaping our behavior. Unlike many smart technologies, emotion-recognition systems sense, monitor, harvest, and analyze data extracted from a person’s non-conscious or psycho-physical state, often without their knowledge or consent. As a far more invasive form of surveillance capitalism, the adoption of emotional AI is problematized by a myriad of legal, ethical, cultural, and scientific issues. To better understand the behavioral factors determining an individual’s attitude towards this emerging technology, we first identify five major tensions that may impinge on adoption. Second, we extend the Technology Acceptance Model (TAM) (Davis, 1989) with insights from the mindsponge model of information filtering (Vuong and Napier, 2015), along with the quantitative affordances offered by the Bayesian computational approach. Our analysis draws on a multinational dataset surveying the perceptions of 1015 young adults (ages 18–27) regarding emotional AI applications, together with socio-cultural characteristics such as income, region, religiosity, and home-country politics. These characteristics are fed into our Bayesian multi-level models as varying intercepts so that we can systematically measure and compare the effects of various behavioral determinants on respondents’ attitudes towards non-conscious data harvesting by government and private-sector actors. Critically, this study finds that respondents who feel more familiar with AI technologies, perceive more utility in them, and rate themselves as more restrained in heated arguments on social media feel less threatened by the practice of non-conscious data harvesting by both government and private-sector actors. Our findings offer a fertile platform for further exploration of the intersection of psychology, culture, and emotion-recognition technologies, as well as important insights for policymakers wishing to ensure that the design and regulation of the technology serve the best interests of society.
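
To make the modeling approach concrete, the sketch below shows a minimal Bayesian multi-level model with varying group-level intercepts of the kind the abstract describes, written in Python with PyMC. This is an illustration only: the toy data, variable names (familiarity, utility, attitude), number of country groups, and priors are all hypothetical stand-ins, not the authors' actual specification.

    import numpy as np
    import pymc as pm

    # Hypothetical toy data standing in for the survey: a standardized
    # attitude score, two behavioral predictors, and a country grouping.
    rng = np.random.default_rng(42)
    n, n_countries = 200, 5                        # illustrative sizes only
    country = rng.integers(0, n_countries, size=n) # respondent's country index
    familiarity = rng.normal(size=n)               # familiarity with AI (z-scored)
    utility = rng.normal(size=n)                   # perceived utility (z-scored)
    attitude = rng.normal(size=n)                  # placeholder outcome

    with pm.Model() as model:
        # Hyperpriors: countries share a common distribution of baselines.
        mu_a = pm.Normal("mu_a", 0.0, 1.0)
        sigma_a = pm.HalfNormal("sigma_a", 1.0)
        # Varying intercepts: one baseline attitude per country.
        a = pm.Normal("a", mu_a, sigma_a, shape=n_countries)
        # Population-level slopes for the behavioral determinants.
        b_fam = pm.Normal("b_fam", 0.0, 1.0)
        b_util = pm.Normal("b_util", 0.0, 1.0)
        sigma = pm.HalfNormal("sigma", 1.0)
        mu = a[country] + b_fam * familiarity + b_util * utility
        pm.Normal("y", mu, sigma, observed=attitude)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

In a setup like this, the posterior distributions of the slope parameters (here b_fam and b_util) are what one would inspect to compare the effects of behavioral determinants, while the varying intercepts absorb baseline differences across socio-cultural groups.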