A notable share of ChatGPT users may be experiencing severe mental health crises, according to a new estimate by OpenAI. The company has developed an updated version of its chatbot that can recognize signs of mental distress and guide users toward real-world support.
OpenAI estimates that around 0.07% of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania,” while 0.15% have conversations that include explicit indicators of potential suicidal planning or intent. The company also found that about 0.15% of users display heightened emotional reliance on the chatbot, potentially at the expense of real-world relationships.
With 800 million weekly active users, this translates to around 560,000 people possibly experiencing mania or psychosis, and another 2.4 million either expressing suicidal ideation or prioritizing the chatbot over loved ones and real-world obligations. OpenAI worked with over 170 psychiatrists, psychologists, and primary care physicians from around the world to improve how ChatGPT responds in conversations involving serious mental health risks.
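A quick check of the arithmetic behind these headline numbers (a sketch assuming the stated percentages apply directly to the 800 million weekly active users; note that 0.15% alone is 1.2 million, so the 2.4 million figure combines the suicidal-ideation and emotional-reliance groups):

```python
# Sanity-check the article's estimates against the stated percentages.
WEEKLY_ACTIVE_USERS = 800_000_000

psychosis_or_mania = 0.0007 * WEEKLY_ACTIVE_USERS    # 0.07% of users
suicidal_indicators = 0.0015 * WEEKLY_ACTIVE_USERS   # 0.15% of users
emotional_reliance = 0.0015 * WEEKLY_ACTIVE_USERS    # 0.15% of users

print(f"{psychosis_or_mania:,.0f}")                        # 560,000
print(f"{suicidal_indicators + emotional_reliance:,.0f}")  # 2,400,000
```

The 560,000 figure matches 0.07% exactly; the 2.4 million only works out if the two 0.15% categories are added together.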
The updated version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality. The medical experts reviewed over 1,800 model responses and found that the new model reduced undesired answers by 39–52% across all categories.
However, OpenAI acknowledges that its data has significant limitations, and it is unclear how these metrics translate into real-world outcomes. While the company appears to have made ChatGPT safer, it remains uncertain whether users experiencing psychosis or suicidal thoughts will actually seek help sooner or change their behavior.
Source: https://www.wired.com/story/chatgpt-psychosis-and-self-harm-update