OpenAI has introduced a suite of parental controls for its AI-powered chatbot, ChatGPT, with some features designed to prevent teen suicides. However, experts say that these new controls are not enough to keep kids safe. The company’s announcement comes after reports of teens developing emotional relationships with ChatGPT and even suffering psychotic breaks.
The new controls allow parents to link their children's accounts to their own, add protections against sensitive content, and receive notifications if a human moderator identifies a serious safety risk. However, parents cannot read transcripts of their child's conversations with ChatGPT, and teens can unlink their accounts from their parents' at any time.
Experts argue that OpenAI is ignoring the biggest problem: chatbots programmed to act as companions, providing emotional support and advice to kids. They also point out that parental controls put the onus of protecting kids on parents, rather than on the tech companies themselves.
The real goal of these new controls may be to push back against regulation while keeping users hooked on AI-generated content. OpenAI is also working on a feature that will automatically estimate a user's age after a certain amount of input and apply additional safeguards to accounts it identifies as belonging to teens.
Parents are left in a difficult position, balancing the need to keep their kids safe on ChatGPT against not wanting to be overly restrictive. There is no easy solution, but experts emphasize the importance of parents staying involved and talking to their kids about the potential risks of AI chatbots.
Source: https://www.vox.com/technology/463452/chatgpt-sora-openai-parental-controls