Anthropic is implementing new data handling procedures that require all Claude users to decide by September 28 whether their conversations can be used for AI model training. The company directed inquiries to its blog post about the policy changes, but external analyses have offered their own explanations for the revisions.
The core change is that Anthropic now wants to train its AI systems on user conversations and coding sessions. Previously, the company did not use consumer chat data for model training; under the new terms, data retention extends to five years for users who do not opt out. This is a significant shift from prior practice.
Previously, users of Anthropic’s consumer products were told that their prompts and the resulting conversation outputs would be automatically deleted from Anthropic’s back-end systems within 30 days. The exceptions were when legal or policy requirements dictated a longer retention period, or when a user’s input was flagged for violating company policies, in which case inputs and outputs could be retained for up to two years.
The new policies apply specifically to users of Claude Free, Pro, and Max, including users of Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or the API are exempt from these changes, mirroring OpenAI’s approach of shielding enterprise customers from data training policies.
Anthropic framed the change by stating that users who do not opt out will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” The company added that this data will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”
This framing presents the policy change as a collaborative effort to improve the AI model. However, external analysis suggests the underlying motivation is more complex and rooted in competitive pressures within the AI industry: like every large language model company, Anthropic requires substantial data to train its models effectively, and access to millions of Claude interactions would provide the real-world content needed to strengthen its position against competitors such as OpenAI and Google.
The policy revisions also reflect broader industry trends concerning data policies. Companies like Anthropic and OpenAI face increasing scrutiny regarding their data retention practices. OpenAI, for example, is currently contesting a court order that compels the company to indefinitely retain all consumer ChatGPT conversations, including deleted chats. This order stems from a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap described the court order as “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order impacts ChatGPT Free, Plus, Pro, and Team users. Enterprise customers and those with Zero Data Retention agreements remain unaffected.
The frequent changes to usage policies have generated confusion, and many users remain unaware of them. Because the technology is advancing rapidly, privacy policies change often, and those changes are frequently communicated only briefly, amid other company announcements.
Anthropic’s implementation of its new policy follows a pattern that raises concerns about user awareness. New users will be able to select their preference during the signup process. However, existing users are presented with a pop-up window labeled “Updates to Consumer Terms and Policies” in large text, accompanied by a prominent black “Accept” button. A smaller toggle switch for training permissions is located below, in smaller print, and is automatically set to the “On” position.
The design raises concerns that users might quickly click “Accept” without realizing they are agreeing to data sharing, an observation first reported by The Verge. The stakes for user awareness are significant: experts have consistently warned that the complexity surrounding AI makes obtaining meaningful user consent difficult.
The Federal Trade Commission (FTC) has previously intervened in these matters, warning that an AI company risks enforcement action if it engages in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.” That warning suggests a potential conflict with practices that do not provide adequate user awareness or consent.
The current level of FTC oversight on these practices remains uncertain. The commission is currently operating with only three of its five commissioners. An inquiry has been submitted to the FTC to determine whether these practices are currently under review.