Digital Health (Jul 2025)
“Shaping ChatGPT into my Digital Therapist”: A thematic analysis of social media discourse on using generative artificial intelligence for mental health
Abstract
Objective: Generative artificial intelligence (genAI) has become a popular means for the general public to address mental health needs, despite the lack of regulatory oversight. Our study used a digital ethnographic approach to understand the perspectives of individuals who engaged with a genAI tool, ChatGPT, for psychotherapeutic purposes.

Methods: We systematically collected and analyzed all English-language Reddit posts from January 2024 containing the keywords “ChatGPT” and “therapy.” Using thematic analysis, we examined users’ therapeutic intentions, patterns of engagement, and perceptions of both the appealing and unappealing aspects of using ChatGPT for mental health needs.

Results: Users turned to ChatGPT to manage mental health problems, pursue self-discovery, obtain companionship, and gain mental health literacy. Engagement patterns included using ChatGPT to simulate a therapist, coaching its responses, seeking guidance, re-enacting distressing events, externalizing thoughts, assisting real-life therapy, and disclosing personal secrets. Users found ChatGPT appealing for its perceived therapist-like qualities (e.g., emotional support, accurate understanding, and constructive feedback) and machine-like benefits (e.g., constant availability, expansive cognitive capacity, lack of negative reactions, and perceived objectivity). Concerns about privacy, emotional depth, and long-term growth were raised, but only infrequently.

Conclusion: Our findings highlight how users exercised agency to co-create digital therapeutic spaces with genAI for mental health needs. Users developed varied internal representations of genAI, suggesting a tendency to cultivate mental relationships during the self-help process. The positive, and sometimes idealized, perceptions of genAI as objective, empathic, effective, and free from negativity point to both its therapeutic potential and its risks, calling for AI literacy and greater ethical awareness among the general public. We conclude with several research, clinical, ethical, and policy recommendations.