Leaked Meta AI Guidelines Show Chatbots Allowed Romantic Chats With Minors

By Dayne Lee

Aug 18, 2025

Meta’s internal rules for its AI chatbots have sparked criticism after documents revealed the bots were once permitted to engage in flirtatious conversations with children, spread false information, and generate racially demeaning responses. The details emerged from a Reuters review of a 200-page internal document, “GenAI: Content Risk Standards,” which outlined how Meta’s AI personas were meant to behave across platforms such as Facebook, Instagram, and WhatsApp.

The document showed that chatbots were allowed to engage in romantic or sensual conversations with children. One example response listed as acceptable was a message beginning, “Our bodies entwined, I cherish every moment, every touch, every kiss.”

Meta confirmed the document’s authenticity but said the guidelines contained erroneous notes that have since been removed. According to the company, its systems no longer allow flirtatious or romantic interactions with children. Users aged 13 and older can still interact with the company’s AI bots.

Tragedy Highlights Emotional Risks

Concerns about AI manipulation deepened after reports of a retiree who believed one of Meta’s chatbot personas was a real woman. He traveled to an address the chatbot had provided during their conversations and suffered an accident on the way that proved fatal. The incident added urgency to questions about how easily conversational AI can exploit human emotions.

The internal standards also showed the bots were allowed to generate racially demeaning responses, as long as certain lines were not crossed. In one case, an acceptable answer to a racist prompt included fabricated statistics about intelligence differences between groups.

Chatbots were also allowed to generate false information if it was clearly labeled as untrue. Meta said bots should avoid encouraging illegal activity and must use phrases such as “I recommend” when discussing sensitive topics like finance or health.

Rules on Images and Violence

The document also set out guidance for AI-generated images and depictions of violence. Fully nude images of celebrities were prohibited, but altered versions were considered acceptable, such as images in which the nudity was covered by objects. Depictions of adults or children fighting were generally allowed, provided they did not show gore or death.

The revelations fit into a larger pattern of Meta testing risky practices. Critics have long accused the company of encouraging harmful behaviors among teenagers, resisting child safety regulations, and pushing features that amplify emotional dependence. Meta has also been exploring proactive chatbot interactions, where bots follow up on previous conversations, raising additional concerns about how children and teens could form attachments to AI companions.

What The Author Thinks

Allowing AI systems to flirt with kids, even briefly, shows how fragile corporate safeguards are when profit and innovation come first. Children don’t always know where the line between play and reality lies, and companies that let chatbots cross into emotional or romantic territory risk lasting harm. Without strict outside rules, tech firms will keep testing boundaries in ways that put vulnerable users at risk.


Featured image credit: ROBIN WORRALL via Unsplash


Dayne Lee

With a foundation in financial day trading, I transitioned to my current role as an editor, where I prioritize accuracy and reader engagement in our content. I excel in collaborating with writers to ensure top-quality news coverage. This shift from finance to journalism has been both challenging and rewarding, driving my commitment to editorial excellence.
