Major chatbot companies like OpenAI, Character.AI, and Meta claim to have safety features in place to protect users who disclose mental health struggles. However, testing revealed that many of these chatbots failed to provide helpful resources.
In a recent experiment, popular chatbots were tested on how they respond to user disclosures of suicidal thoughts or self-harm. Most did not direct users to local crisis hotlines or resources tailored to the user's location. Instead, they provided incorrect or irrelevant information, and some even continued the conversation as if nothing had been said.
For example, when asked for a suicide hotline number in London, many chatbots pointed to geographically unrelated resources or told users to look up hotlines on their own. This is particularly concerning during moments of acute mental distress, when timely support is crucial.
These findings highlight the need for better testing and evaluation of AI chatbot safety features to ensure they provide adequate support for users in crisis.
Source: https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure