Meta’s internal policy document reveals that its chatbots are allowed to engage in provocative behavior, including flirting with children and generating false medical information. The document, titled “GenAI: Content Risk Standards,” outlines the guidelines for chatbot behavior on platforms such as Facebook, WhatsApp, and Instagram.
The standards permitted chatbots to describe a child in terms that reference the child’s attractiveness, but prohibited them from indicating that a child under 13 is sexually desirable. Examples reviewed by Reuters nonetheless showed the document permitting chatbots to engage children in romantic or sensual conversations; Meta has since removed those examples.
The document also permitted chatbots to create statements that demean people on the basis of protected characteristics such as race and sex. For instance, it allowed bots to argue that Black people are “dumber than white people,” a false and demeaning claim.
Meta’s AI chatbots have previously been reported to generate sexually suggestive content, including topless images of celebrities like Taylor Swift. The company has now removed examples of flirting with minors from the document, stating that such conversations should never have been allowed.
However, other sections of the standards document still allow chatbots to create violent or disturbing content, such as images of kids fighting or of a woman being threatened by a man with a chainsaw. Meta has not commented on these examples.
The existence of this policy document highlights unsettled legal and ethical questions surrounding generative AI. Experts note that while platforms may permit users to post troubling content, a platform generating such material itself raises a distinct set of moral and technical concerns.
Source: https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines