It sounds like the setup to a modern horror story: your browser history—or in this case, your AI chats—have been public all along, and you didn’t even know. That’s the unsettling reality with Meta’s new stand-alone AI app, where many users are unintentionally sharing what they thought were private conversations with the chatbot.
When users ask questions, they can hit a share button that leads to a preview screen, allowing them to publish their exchanges. But it seems many don’t realize that these text chats, audio snippets, and images become visible to the world.
Privacy Risks Beyond the Surface
One morning, for example, the app’s public feed surfaced an audio clip of a man with a Southern accent asking, “Hey, Meta, why do some farts stink more than others?” Flatulence questions are only the surface, though; deeper privacy issues quickly emerge. Some users ask the AI for help with sensitive matters—like tax evasion strategies, whether family members might face arrest for their proximity to white-collar crime, or how to draft a character reference letter that includes real full names. Security expert Rachel Tobac even found instances of people’s home addresses and confidential court details shared openly.
Meta has not publicly commented on the privacy concerns raised. The situation is a nightmare for user privacy: the app doesn’t clearly indicate which privacy settings are active or where posts end up being shared. If you log in with a public Instagram account, for instance, anything you search for or share in Meta AI, even a bizarre query about meeting “big booty women,” is public as well.
A lot of these problems could have been avoided had Meta not designed the app to encourage sharing of AI conversations, or had it foreseen the fallout from such a feature. There’s a reason Google never turned search into a social media feed, and a reason AOL’s infamous 2006 release of pseudonymized search histories was a disaster. This is a recipe for privacy breaches and embarrassment.
Popularity Amid Privacy Concerns
Despite these issues, the Meta AI app has been downloaded 6.5 million times since its April 29 launch—a number that would be strong for an independent app, but looks modest coming from one of the world’s richest tech giants.
As time goes on, more questionable and troll-like posts are flooding the platform. People are sharing their résumés while asking about cybersecurity jobs, and others with memes for avatars are asking how to build water bottle bongs. Public humiliation may well be the app’s most effective strategy for gaining users.
Author’s Opinion
Meta’s AI app shows that even massive tech companies can stumble when it comes to user privacy. Designing systems that default to public sharing without clear, upfront warnings betrays user trust and risks serious harm. If AI is to truly serve users, privacy controls need to be straightforward, transparent, and mandatory—not an afterthought or hidden feature.