Meta has fixed a security vulnerability that allowed users of its AI chatbot to access private prompts and AI-generated responses of other users.
Sandeep Hodkasia, founder of security testing firm AppSecure, discovered the flaw and privately disclosed it to Meta on December 26, 2024. He received a $10,000 bug bounty reward for his responsible disclosure. Meta deployed the fix on January 24, 2025, and found no signs of malicious exploitation.
The bug stemmed from how Meta AI lets logged-in users edit their AI prompts to regenerate text and images. When a prompt is edited, the system assigns it a unique identifier. Hodkasia found he could manipulate this identifier to retrieve another user’s prompt and AI response, because Meta’s servers did not verify that the requester was authorized to view that record. The identifiers were also sequential and easily guessable, raising concerns that an attacker could have scripted requests to scrape user data at scale.
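To make the class of flaw concrete, here is a minimal Python sketch of an insecure direct object reference of the kind described above. The function names, user names, and in-memory data store are hypothetical illustrations, not Meta’s actual implementation: the vulnerable lookup returns whatever prompt identifier the client supplies, while the fixed version checks that the requester owns the record.

```python
# Illustrative sketch of an insecure direct object reference (IDOR).
# All names and data below are hypothetical; this is not Meta's code.

from dataclasses import dataclass


@dataclass
class Prompt:
    prompt_id: int   # sequential, easily guessable identifier
    owner_id: str    # user who created the prompt
    text: str        # the prompt itself
    response: str    # the AI-generated reply


# Stand-in for the server-side datastore.
PROMPTS = {
    1001: Prompt(1001, "alice", "Draft my resignation letter", "Dear manager, ..."),
    1002: Prompt(1002, "bob", "Summarize my medical report", "The report shows ..."),
}


def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> Prompt:
    """Returns whatever prompt the caller asks for.

    The server trusts the client-supplied identifier and never checks
    whether the requesting user owns it -- the class of flaw reported here.
    """
    return PROMPTS[prompt_id]


def get_prompt_fixed(requesting_user: str, prompt_id: int) -> Prompt:
    """Same lookup, but with an authorization check on the owner."""
    prompt = PROMPTS[prompt_id]
    if prompt.owner_id != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return prompt


if __name__ == "__main__":
    # "alice" iterates over guessable sequential IDs and reads bob's data.
    for pid in (1001, 1002):
        leaked = get_prompt_vulnerable("alice", pid)
        print(f"leaked from {leaked.owner_id}: {leaked.text!r}")

    # With the ownership check in place, the same request is rejected.
    try:
        get_prompt_fixed("alice", 1002)
    except PermissionError as err:
        print("fixed server:", err)
```

The demo loop also shows why guessable, sequential identifiers compound the problem: enumerating other users’ records requires nothing more than incrementing a number.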
Meta’s Response
Meta confirmed the bug was fixed in January and stated it found no evidence of abuse. The company rewarded Hodkasia for his responsible reporting.
This incident highlights the ongoing privacy and security risks tech companies face while rapidly developing and launching AI products. Meta AI’s standalone chatbot app, which debuted earlier this year to compete with other AI assistants, previously had issues where users unintentionally shared private conversations publicly.
What The Author Thinks
AI development moves fast, but security can’t be an afterthought. Incidents like this show how critical it is for companies to rigorously test AI systems for privacy leaks before launch. User trust depends on protecting sensitive data, especially when AI conversations may contain personal or confidential information.