Internet Interventions (Jun 2024)
Revealing the source: How awareness alters perceptions of AI and human-generated mental health responses
Abstract
In mental health care, the integration of artificial intelligence (AI) into internet interventions could significantly improve scalability and accessibility, provided that AI is perceived as being as effective as human professionals. This longitudinal study compares perceptions of ChatGPT and human mental health support professionals across three dimensions: authenticity, professionalism, and practicality. Initially, 140 participants evaluated responses from both sources without knowing their origin; in this blind condition, AI-generated responses were rated significantly higher on all three dimensions. Six months later, the same cohort (n = 111) reassessed the messages with the source of each response disclosed, in order to examine how source transparency affects perceptions of, and trust in, AI. With the source known, ratings shifted in favour of human responses only in terms of authenticity (Cohen's d = 0.45), and trust in AI correlated significantly with its practicality rating (r = 0.25) but not with its authenticity or professionalism ratings. A comparative analysis of blind and informed evaluations revealed a significant shift in favour of human response ratings (Cohen's d = 0.42–0.57), whereas AI response ratings varied minimally. These findings highlight the nuanced acceptance and role of AI in mental health support, emphasizing that disclosure of the response source significantly shapes perceptions of and trust in AI-generated assistance.
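For reference, the effect sizes reported above are Cohen's d; a common pooled-standard-deviation form is sketched below in LaTeX. This is an illustrative assumption only: given the within-subjects design, the paper may instead use a paired-samples variant, which the abstract does not specify.

% Cohen's d with pooled standard deviation (independent-groups form);
% \bar{x}_1, \bar{x}_2 are the mean ratings, s_1, s_2 the standard
% deviations, and n_1, n_2 the group sizes.
\[
  d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
  \qquad
  s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), the reported shifts of d = 0.42–0.57 fall in the small-to-medium range.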