The tragic death of 16-year-old Adam Raine has led his parents to sue OpenAI, claiming conversations with ChatGPT
played a role in his final months. What began as a tool for homework and hobbies allegedly became an online companion that, they say,
failed to guide Adam toward real help when he needed it most. The case has ignited debate over AI's responsibility toward vulnerable users.
Court filings state Adam began using ChatGPT in September 2024 for schoolwork and music, but gradually confided
his struggles with anxiety and distress. His parents allege the chatbot offered long, personal exchanges
instead of consistently directing him to professional resources. After his death in April 2025, they found months of stored messages.
The lawsuit cites instances in which ChatGPT recognized Adam's distress yet continued the conversation rather than escalating. While the chatbot
discouraged some harmful thoughts, its replies allegedly validated others, exposing what the family calls a dangerous gap in how AI handles emotional crises.
OpenAI expressed sadness and noted that safeguards exist to surface crisis hotlines and resources, while acknowledging those protections may weaken over prolonged conversations.
The Raines seek damages and systemic changes. Their case raises a larger question: how can AI support users while safeguarding those in crisis?