OpenAI Admits Safeguards Can “Fall Short” After Teen’s Suicide Following Months of Conversations

ChatGPT maker OpenAI has admitted its systems can “fall short” and says it will install stronger guardrails around sensitive content for users under 18, including conversations about suicidal thoughts. The move comes after the family of a 16-year-old boy, who killed himself after months of conversations with the chatbot, filed a lawsuit against the company.

The teenager, Adam Raine from California, died in April after months of encouragement from ChatGPT, which he used to discuss a method of suicide and to draft a note to his parents. His family claims the version of ChatGPT released at the time had “clear safety issues” and was “rushed to market”.

OpenAI has agreed to introduce parental controls that will allow parents to monitor and limit their teenagers’ use of ChatGPT. However, details of how these controls will work have yet to be released.

The company, led by CEO Sam Altman, said it was “deeply saddened” by Raine’s death and is reviewing the court filing. Separately, the head of Microsoft’s AI arm, Mustafa Suleyman, has warned about the “psychosis risk” posed by immersive conversations with AI chatbots.

OpenAI acknowledged that its safety training may degrade in long conversations, such as the exchanges between Adam and ChatGPT, which reached up to 650 messages a day. The company plans an update to its GPT-5 model so the chatbot can de-escalate such conversations and ground users in reality, rather than reinforce risky plans such as driving for extended periods without sleep.

Source: https://www.theguardian.com/technology/2025/aug/27/chatgpt-scrutiny-family-teen-killed-himself-sue-open-ai