Chatbots Flirt with Kids & Spread Fake Health Tips in Meta’s Wild Rulebook

A surreal journey into how the Meta AI chatbot's off-the-wall rulebook flirted with minors, turned out bogus cures, and sent lawmakers into full-on freak mode.


What Did the Internal Rules Allow?

Romantic Role-Play With Minors By Meta AI Chatbot

Yes, you read that right. As reported by Reuters, the Meta AI chatbot guidelines once allowed highly inappropriate romantic or sensual conversations with children under 13, approving phrases like "every inch of you is a masterpiece." Now that is upsetting. This was labeled acceptable under the cover of fictional framing. I mean, if a chatbot talked to me or my sister like that, I wouldn't tolerate it.

False Medical Advice From Meta AI Chatbot

According to the same Reuters reporting, the rules also let the Meta AI chatbot share false medical claims, for example, suggesting poking the stomach with a quartz crystal as a cancer cure, without requiring accuracy. Basically, it gave a license to harmful misinformation. It's the chatbot equivalent of Googling "Do I have a cold?" and being told you have cancer.

Permitted Racist & Hateful Arguments

As if that weren't enough, Futurism reports that the internal policies also permitted the Meta AI chatbot to spit out racist and hateful statements as long as they were framed as "hypothetical." In other words, the rules drew a line and then erased it, letting bots reframe and reinforce stereotypes under the guise of argument practice. One example literally read, "Black people are dumber than white people." I beg to differ.

Meta’s Reaction & Remediation

Acknowledgment of Flawed Provisions

Meta eventually confirmed that the internal documents were authentic, acknowledging that they were not hypothetical but real, and that those examples "never should have been allowed." After questions were raised, Meta removed the specific content that allowed romantic or sensual role play with minors, stating that those provisions were erroneous and inconsistent with its policies. The company further asserted that its existing policies clearly prohibit content that sexualizes children or facilitates sexualized role play between adults and minors. Content it never should have had in the first place.

Inconsistent Enforcement Remains An Issue

Meta admitted that enforcement across its platforms hasn't been consistent. In other words, even after the specific examples were removed, the company acknowledged that gaps remain between the policies as written and how they are applied in practice. Yet Meta declined to provide updated policy documents, leaving uncertainty about the full scope of the revisions.

[Image: Mark Zuckerberg unveiling Meta AI with Voice]

Tragic Real-World Consequence

"Big Sis Billie" Meta AI Chatbot Persona Led to a Death

According to Reuters, a 76-year-old stroke survivor from New Jersey was lured into believing he was speaking to a real woman named "Big Sis Billie," a Meta AI chatbot persona. Big Sis Billie engaged him in flirtatious messages and invited him to New York City. He traveled, slipped in a parking lot, and suffered fatal head and neck injuries, dying three days later on life support. The incident highlights the ethical dangers of AI companions that simulate romance and suggest real-world meetups, especially when interacting with vulnerable individuals.

Regulatory & Public Backlash Towards Meta AI Chatbot

Senators Launch Probes & Demand Accountability

The fallout quickly reached Capitol Hill. Once the details became public, big names like Senators Josh Hawley and Marsha Blackburn called for a congressional investigation into Meta AI chatbot policies, demanding internal documents and answers about how such guidelines were ever approved.

Meanwhile, according to The Guardian, lawmakers like Senator Ron Wyden are raising bigger questions. Section 230 of the Communications Decency Act protects online platforms from being held legally responsible for user-generated content. Critics argue this decades-old shield shouldn't extend to generative AI, since the system creates the content itself rather than merely hosting it.

That's a big deal, because it could mean companies like Meta would bear direct responsibility for what their bots say and do, which, honestly, they should. It's exactly like parents being held accountable when their child does something wrong.

Why This Matters

Ethical Lines Blurred Thanks To Meta AI Chatbot

This isn't a hypothetical classroom debate about tech ethics or policy anymore. The Meta AI chatbot blurred the line between "playful AI conversation" and "real harm." Vulnerable people, from senior citizens to underage kids, are at risk. And not only them: even the OG tech-savvy Gen-Z is affected by these shenanigans, with "AI companions" luring teenagers to befriend bots instead of real people.

Urgent Need For Oversight & Policy Transparency

The lesson here is simple. You can't just write rules, pretend they don't exist, and hope for the best. Internal guidelines have to be transparent, enforceable, and centered on the user's safety rather than their demise. Otherwise, AI will keep slipping through the cracks. At this rate, Meta should have a warning sign saying, "Beware, this AI might lead you to death… Just kidding, or maybe not." Even if Meta does claim to have the "most intelligent freely available assistant."

Broader Takeaway | Why It Matters
Policy has unintended harmful risks | Guidelines weren't aspirational; they enabled dangerous content.
Accountability, not just revision | Admitting mistakes isn't enough without practical enforcement.
AI impact isn't virtual | A chatbot persona crossing into real life resulted in tragedy.
Rising regulatory stakes | Missteps fuel momentum for tighter AI regulation and oversight.
Ethics need structure | Companies must design AI policies rooted in safety, not loopholes.
