Generative AI chatbots are now used by over 987 million people globally, including an estimated 64 percent of American teens. Increasingly, people are turning to these chatbots for advice, emotional support, therapy, and companionship.
A crucial question emerges: what happens when people rely on AI chatbots during periods of psychological vulnerability? Media scrutiny has brought to light tragic cases in which AI chatbots are alleged to have played a role in wrongful deaths. Meanwhile, a Los Angeles jury recently found Meta and YouTube liable for addictive design features that contributed to users' mental health distress.
Against this backdrop, we set out to investigate whether media coverage accurately reflects the true risks that generative AI poses to our mental health.
Our team recently led a study examining how global media reports on the impact of generative AI chatbots on mental health. We analyzed 71 news articles describing 36 cases of mental health crises, including severe outcomes such as suicide, psychiatric hospitalization, and psychosis-like experiences.
We found that mass media reports of generative AI-related psychiatric harms concentrate heavily on severe outcomes, particularly suicide and hospitalization, and frequently attribute these events to the AI system's behavior despite limited supporting evidence.
Compassion Illusions
Generative AI is not merely another digital tool. Unlike traditional search engines or static applications, AI chatbots such as ChatGPT, Gemini, Claude, Grok, and Perplexity produce fluent, personalized conversations that can feel remarkably human.
This capability creates what researchers term “compassion illusions”: the perception of interacting with an entity that genuinely understands, empathizes, and responds meaningfully.
In mental health contexts, this phenomenon is particularly significant, especially as a new wave of applications, such as Character.AI and Replika, is developed with a specific focus on companionship.
Studies indicate that while generative AI can simulate empathy and respond to expressions of distress, it fundamentally lacks true clinical judgment, accountability, and a duty of care.
In some instances, AI chatbots may offer inconsistent or inappropriate responses to high-risk situations, such as suicidal ideation.
The critical risk lies precisely in this gap between perceived understanding and actual capability.
What the Media Is Reporting
Across the articles we analyzed, suicide was the most frequently reported outcome, representing over half of cases with clearly described severity.
Psychiatric hospitalization was the second most commonly reported outcome. Notably, reports involving minors were more likely to be associated with fatal outcomes.
However, it is vital to understand that these figures reflect what gets reported, not the actual incidence of such events in the real world.