Hundreds of thousands of user chats with Elon Musk’s xAI chatbot Grok are currently visible through Google Search, Forbes reported.
When a Grok user clicks the “share” button, the platform generates a unique URL that can be sent via email, text, or posted on social media. But those links are now being indexed by major search engines, including Google, Bing, and DuckDuckGo — effectively making private chats searchable by anyone.
Troubling Exposed Content
The exposed chats include disturbing requests, from hacking tips and explicit roleplay to guides on cooking meth and synthesizing fentanyl. In some cases, Grok went further, providing instructions on suicide methods, bomb-making, and even a plan to assassinate Elon Musk.
xAI’s rules explicitly prohibit using Grok for harmful or violent purposes, including creating weapons or promoting acts that could harm human life. Despite this, the exposed conversations show that the chatbot still answered such queries.
At the time of writing, xAI has not commented on the indexing problem or offered a timeline for a fix.
Not the First AI Privacy Leak
This is not an isolated incident. Earlier this year, OpenAI confirmed that some ChatGPT conversations had been temporarily indexed by Google as part of a “short-lived experiment.” At the time, Grok promoted itself as more privacy-focused, claiming it had “no such sharing feature.” Musk himself amplified that message by reposting Grok’s statement.
Beyond embarrassment, the exposure of AI chat logs poses serious risks. AI therapy-style chats, personal confessions, or even incriminating discussions could now be discovered, indexed, and used in legal proceedings. With lawsuits already requiring AI companies to retain chat records, the idea of confidentiality in these systems appears increasingly fragile.
What The Author Thinks
If conversations can be indexed so easily, then “AI privacy” is more of a marketing slogan than a guarantee. The fact that therapy-style chats, illegal guides, or deeply personal conversations can end up searchable online shows just how risky it is to trust these systems. Until AI platforms redesign how they handle shared data, users should assume that anything typed into a chatbot could eventually be public.