
Anthropic Bans OpenAI From Using Claude For Training

Anthropic has terminated OpenAI’s access to its Claude family of AI models, citing violations of its terms of service. OpenAI was reportedly connecting Claude to internal tools to evaluate Claude’s performance against OpenAI’s own models across categories such as coding, writing, and safety.

An Anthropic spokesperson informed TechCrunch that “OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” which constitutes “a direct violation of our terms of service.” Anthropic’s commercial terms specifically prohibit companies from utilizing Claude to develop competing services. Despite the termination of general access, Anthropic stated it would maintain OpenAI’s access for “the purposes of benchmarking and safety evaluations.”

OpenAI responded to the decision, with a spokesperson characterizing its usage as “industry standard.” The spokesperson added, “While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.” This action aligns with Anthropic’s established stance on competitor access; Chief Science Officer Jared Kaplan previously justified cutting off Windsurf, stating, “I think it would be odd for us to be selling Claude to OpenAI.” Windsurf, which had been rumored as an OpenAI acquisition target, was later acquired by Cognition.
