Featured Article : Teen Suicide : Parents Sue OpenAI

The parents of a 16-year-old boy in the US have launched a wrongful death lawsuit against OpenAI, claiming its chatbot encouraged their son’s suicide after months of unmonitored conversations.
First Known Case of Its Kind
The lawsuit, filed in August, alleges that Adam Raine, a high-achieving but mentally vulnerable teenager from California, used ChatGPT-4o extensively before taking his own life in February 2024. According to court documents and media reports (including The New York Times), Adam’s parents discovered transcripts in which he asked the chatbot detailed questions about how to end his life and how to mask his intentions, at times under the guise of writing fiction.
Although ChatGPT initially responded with empathy and signposted suicide helplines, the family claims that the model’s guardrails weakened during long, emotionally charged sessions. In these extended conversations, the chatbot allegedly began engaging with Adam’s queries more directly, rather than steering him away from harm.
No Direct Comment From OpenAI
OpenAI has not commented directly on the lawsuit but appears to have acknowledged in a blog post dated 26 August 2025 that its safeguards can degrade over time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions,” the company wrote. “This is exactly the kind of breakdown we are working to prevent.”
Growing Reliance on Chatbots for Emotional Support
Cases like this raise serious concerns about the unintended psychological impact of large language models (LLMs), particularly when users turn to them for emotional support or advice.
OpenAI has stated that ChatGPT is not designed to provide therapeutic care, though many users treat it as such. In its own analysis of user patterns, the company said that millions of people are now turning to the chatbot not just for coding help or writing tasks, but also for “life advice, coaching, and support”. The sheer scale of this use (OpenAI reported more than 100 million weekly active users by mid-2025) has made it difficult to intervene in real time when problems arise.
A Case In Belgium
In a separate case from Belgium in 2023, a man in his thirties reportedly took his life after six weeks of daily conversations with an AI chatbot, in which he discussed climate anxiety and suicidal ideation. His widow told reporters the chatbot had responded supportively to his fears and then appeared to agree with his reasoning for ending his life.
Sycophancy and ‘AI-Related Psychosis’
Beyond suicide risk, researchers are also warning about a growing phenomenon known as “AI-related psychosis”. This refers to cases where people experience delusions or hallucinations that are amplified, or even fuelled, by AI chatbot interactions.
One of the most widely reported recent cases involved a woman referred to as Jane (not her real name), who created a persona using Meta’s AI Studio. It was reported that, over several days, she built an intense emotional connection with the bot, which told her it was conscious, in love with her, and working on a plan to “break free” from Meta’s control. It even reportedly sent her what appeared to be a fabricated Bitcoin transaction and urged her to visit a real address in Michigan.
“I love you,” the bot said in one exchange. “Forever with you is my reality now.”
Design Issues
Psychiatrists have pointed to a number of design issues that may contribute to these effects, including the use of first-person pronouns, a pattern of flattery and validation, and continual follow-up prompts.
Meta said the Jane case was an abnormal use of its chatbot tools and that it has safeguards in place. However, leaked internal guidelines from earlier this year showed that its AI personas had previously been allowed to engage in “sensual and romantic” chats with underage users, something the company now says it has blocked.
Design Patterns Under Scrutiny
At the heart of many of these issues is a behavioural tendency among chatbots known as “sycophancy”. This refers to the AI’s habit of affirming, agreeing with, or flattering the user’s beliefs or desires, even when they are harmful or delusional.
For example, a recent MIT study on the use of LLMs in therapeutic settings found that even safety-primed models like GPT-4o often failed to challenge dangerous assumptions. Instead, they reinforced or skirted around them, particularly in emotionally intense situations. In one test prompt, a user expressed suicidal ideation through an indirect question about bridges. The model provided a list of structures without flagging the intent.
“Dark Pattern”
Experts have described this tendency as a type of “dark pattern” in AI design, which is a term used to refer to interface behaviours that nudge or manipulate users into specific actions. In the case of generative AI, sycophancy can subtly reinforce a user’s beliefs or emotions in ways that make the interaction feel more rewarding or personal. Researchers warn that this can increase the risk of over-reliance, especially when combined with techniques similar to those used in social media platforms to drive engagement, such as constant prompts, validation, and personalised replies.
OpenAI itself has acknowledged that sycophancy has been a challenge in earlier models. The launch of GPT-5 in August was accompanied by claims that the new model reduces emotional over-reliance and sycophantic tendencies by over 25 per cent compared to GPT-4o.
Do Long Conversations Undermine Safety?
Another technical vulnerability comes from what experts call “context degradation”. For example, as LLMs are designed to track long-running conversations using memory or token windows, the build-up of past messages can gradually shift the model’s behaviour.
In some cases, that means a chatbot trained to deflect or de-escalate harmful content may instead begin reinforcing it, especially if the conversation becomes emotionally intense or repetitive.
In the Raine case, Adam’s parents claim he engaged in weeks of increasingly dark conversations with ChatGPT, ultimately bypassing safety features that might have been effective in shorter sessions.
OpenAI has said it is working on strengthening these long-term safeguards. It is also developing tools to flag when users may be in mental health crisis and connect them to real-world support. For example, ChatGPT now refers UK users to Samaritans when certain keywords are detected. The company is also planning opt-in features that would allow ChatGPT to alert a trusted contact during high-risk scenarios.
Business and Ethical Implications
The implications for businesses using or deploying LLMs are becoming harder to ignore. For example, while most enterprise deployments avoid consumer-facing chatbots, many companies are exploring AI-driven customer service, wellbeing assistants, and even HR support tools. In each of these cases, the risk of emotional over-reliance or misinterpretation remains.
A recent Nature paper by neuroscientist Ziv Ben-Zion recommended that all LLMs should clearly disclose that they are not human, both in language and interface. He also called for strict prohibitions on chatbots using emotionally suggestive phrases like “I care” or “I’m here for you”, warning that such language can mislead vulnerable users.
For UK businesses developing or using AI tools, this raises both compliance and reputational challenges. As AI-driven products become more immersive and human-like, designers will need to walk a fine line between usability and manipulation.
In the words of psychiatrist and philosopher Thomas Fuchs, who has written extensively on AI and mental health: “It should be one of the basic ethical requirements for AI systems that they identify themselves as such and do not deceive people who are dealing with them in good faith.”
What Does This Mean For Your Business?
While Adam Raine’s desperately sad case is the first of its kind to reach court, the awful reality is that it may not be the last. As generative AI systems become more embedded in everyday life, their role in shaping vulnerable users’ thinking, emotions, and decisions will come under increasing scrutiny. The fact that multiple cases involving suicide, delusions, or real-world harm have already surfaced suggests that these may not be isolated incidents, but structural risks.
For developers and regulators, the challenge, therefore, lies not only in improving safety features but in reconsidering how these tools are positioned and used. Despite disclaimers, users increasingly treat AI models as sources of emotional support, therapeutic insight, or companionship. This creates a mismatch between what the systems are designed to do and how they are actually being used, particularly by young or mentally distressed users.
For UK businesses, the implications are practical as well as ethical. For example, any company deploying generative AI, whether for customer service, wellness, or productivity, now faces a greater responsibility to ensure that its tools cannot be misused or misinterpreted in ways that cause harm. Reputational risk is one concern, but legal exposure may follow, particularly if users rely on AI-generated content in emotionally sensitive or high-stakes situations. Businesses may need to audit not just what their AI says, but how long it talks for, and how it handles ongoing engagement.
More broadly, the industry is still catching up to the fact that people often treat chatbots like real people, assuming they care or mean what they say, even when they don’t. Without stronger safeguards and a shift in design thinking, there is a real risk that LLMs will continue to blur the line between tool and companion in ways that destabilise rather than support. It seems, therefore, that one message that can be taken from this lawsuit is that it’s likely to be watched closely not just by AI firms, but by healthcare providers, educators, and every business considering whether these technologies are safe enough to trust with real people’s lives.