Meta Introduces Enhanced Safety Features for Teen Users

Meta has introduced new safety features aimed at protecting teen users on Instagram and Facebook. The measures include stronger direct-messaging protections designed to prevent "exploitative content" and to give teens more information about the accounts they are chatting with.

The company removed over 1.1 million accounts linked to predatory behavior, including nearly 135,000 Instagram accounts that were leaving sexualized comments on, or requesting sexual images from, adult-managed accounts featuring children. Meta has also placed teen accounts and adult-managed accounts featuring children into its strictest message and comment settings, which filter out offensive messages and limit contact from unknown accounts.

These measures are part of a broader push by Meta to protect teens and children on its platforms, following mounting scrutiny from policymakers who have accused the company of failing to shield young users from sexual exploitation. Meta has also announced efforts to combat "spammy content" and to reduce addictive app features that have detrimental effects on children's mental health.

Regulatory efforts are also underway: Congress has reintroduced the Kids Online Safety Act, which would impose a "duty of care" on social media platforms to prevent harm to children.

Source: https://www.cnbc.com/2025/07/23/meta-instagram-teen-safety.html