Meta Paid A $10,000 Bounty For A Major AI Privacy Flaw

Meta has fixed a security flaw in its Meta AI chatbot that allowed users to view other people's private prompts and AI-generated responses. Sandeep Hodkasia, founder of AppSecure, told TechCrunch that Meta paid him a $10,000 bug bounty for privately disclosing the bug on December 26, 2024.

Hodkasia said Meta deployed a fix on January 24, 2025, adding that the company found no evidence the bug had been maliciously exploited. He told TechCrunch that he identified the vulnerability by examining how Meta AI lets logged-in users edit their prompts to regenerate text and images.

Hodkasia discovered that when a user edits a prompt, Meta's backend servers assign a unique identification number to that prompt and its AI-generated response. By analyzing the network traffic in his browser while editing a prompt, he found he could change that unique number and have Meta's servers return a prompt and response belonging to a different user.
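In security terms, this is the classic shape of an insecure direct object reference (IDOR). The minimal Python sketch below is purely illustrative and is not Meta's code; the PROMPTS store, the get_prompt_vulnerable function, and the user names are hypothetical. It shows how a lookup keyed only on a numeric ID hands back another user's data when the ID is tampered with.

```python
# Hypothetical sketch of an IDOR-style flaw, for illustration only (not Meta's code):
# the handler looks up a prompt purely by its numeric ID and never checks
# which account created it.

PROMPTS = {
    1001: {"owner": "alice", "prompt": "draw a cat", "response": "<image>"},
    1002: {"owner": "bob", "prompt": "summarize my notes", "response": "..."},
}

def get_prompt_vulnerable(session_user: str, prompt_id: int) -> dict:
    """Returns whatever record the ID points to -- even another user's."""
    return PROMPTS[prompt_id]  # no ownership check against session_user

# A logged-in "alice" who tampers with the ID in her own request
# receives bob's private prompt and response:
print(get_prompt_vulnerable("alice", 1002))
```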

The bug indicated that Meta’s servers were not adequately verifying user authorization to view specific prompts and responses. Hodkasia noted the prompt numbers generated by Meta’s servers were “easily guessable,” which could have enabled an unauthorized actor to systematically retrieve other users’ original prompts by rapidly altering prompt numbers using automated tools. Meta confirmed to TechCrunch that the bug was fixed in January.
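For illustration, here is a hedged sketch of the kind of server-side checks that close this class of bug. This is not Meta's actual fix, and the create_prompt and get_prompt helpers are hypothetical: the request is rejected unless the requesting account owns the record, and identifiers are random rather than sequential, so they cannot be enumerated by counting upward.

```python
import secrets

# Hypothetical mitigation sketch (not Meta's fix): check ownership on every
# read, and issue non-guessable identifiers instead of sequential numbers.

PROMPTS = {}

def create_prompt(owner: str, prompt: str, response: str) -> str:
    prompt_id = secrets.token_urlsafe(16)  # random, non-sequential ID
    PROMPTS[prompt_id] = {"owner": owner, "prompt": prompt, "response": response}
    return prompt_id

def get_prompt(session_user: str, prompt_id: str) -> dict:
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != session_user:
        raise PermissionError("prompt not found or not owned by requester")
    return record

pid = create_prompt("alice", "draw a cat", "<image>")
print(get_prompt("alice", pid))   # succeeds for the owner
# get_prompt("bob", pid)          # would raise PermissionError
```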

Meta spokesperson Ryan Daniels said the company "found no evidence of abuse and rewarded the researcher." The disclosure comes as technology companies race to launch and refine AI products despite ongoing security and privacy concerns. Meta AI's standalone app, introduced earlier this year to compete with rival applications, faced early issues, including instances where users inadvertently made public conversations they believed were private.

