
Anthropic Is Reportedly Partnering With Dust In Europe

According to an exclusive report from Euronews, US artificial intelligence company Anthropic has partnered with Paris-based AI agent builder Dust to expand its European presence. The partnership focuses on developing AI agents capable of solving problems and completing tasks, which sets them apart from conversational chatbots.

This collaboration marks a strategic move for Anthropic, maker of the Claude large language model (LLM), as it deepens its investment in the European market. The co-founders of both Anthropic and Dust previously held positions at OpenAI. Dust’s client roster includes French technology firms Qonto and Doctolib.

Under the new partnership, Dust will help companies build AI agents using Claude and Anthropic’s Model Context Protocol (MCP), an open standard that links AI tools to external data sources, often described as a universal connector for AI applications. The integration aims to establish a centralized operating system in which AI agents can securely access company knowledge and operate autonomously, reducing reliance on human intervention.
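
To make the "universal connector" idea concrete, here is a minimal sketch of an MCP server written with the open-source MCP Python SDK's FastMCP helper. The server name ("company-knowledge") and the search_docs tool are illustrative assumptions for this article, not part of Dust's actual integration with Anthropic.

    # Minimal MCP server sketch (hypothetical example, not Dust's code).
    # Assumes the open-source MCP Python SDK is installed: pip install mcp
    from mcp.server.fastmcp import FastMCP

    # Name the server after the data source it exposes to AI tools.
    mcp = FastMCP("company-knowledge")

    @mcp.tool()
    def search_docs(query: str) -> str:
        """Search internal documents and return matching passages."""
        # Placeholder: a real server would query the company's
        # knowledge base here instead of echoing the query back.
        return f"Results for: {query}"

    if __name__ == "__main__":
        # Serves MCP over stdio, so any compatible AI client
        # (such as Claude) can discover and call search_docs.
        mcp.run()

In this pattern, the company writes one small server per data source, and any MCP-compatible AI agent can then connect to it without custom glue code for each tool.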

Gabriel Hubert, CEO and co-founder of Dust, described the challenges organizations currently face. “Companies have various AI tools across departments operating in isolation with no ability to communicate with each other,” Hubert stated. He emphasized Dust’s role in overcoming this fragmentation by enabling AI agents to collaborate on diverse tasks. Hubert himself uses AI agents for tasks such as drafting job offers, analyzing job applications, and processing customer reviews, noting that these applications save significant time. “We’ve given ourselves the possibility to do something that I wouldn’t have the time to do otherwise, and I still sign every offer letter that goes out,” he explained.


Despite their capabilities, AI agents remain an evolving technology and can make mistakes. A recent Anthropic experiment in which an AI chatbot managed a small retail operation ended in financial losses and fabricated data. Guillaume Princen, head of Anthropic’s Europe, Middle East, and Africa team, commented on the experiment: “Claudius (the AI shop) was pretty good at some things, like identifying niche suppliers, but pretty bad at other important things, like making a profit. We learned a lot and look forward to the next phase of this experiment.”

Princen acknowledged the dual nature of the partnership with Dust. “The project with Dust comes with a lot of power, [and] it comes with a lot of responsibility,” he stated. A complex issue, according to Princen, is assigning accountability when an AI agent errs. “Understanding who’s accountable when an agent does a thing sounds easy on the surface, but gets increasingly blurry,” he observed. He elaborated that AI agents might function as a “digital twin” in some contexts or act on behalf of an individual, team, or entire company in others. Princen noted that many companies are still determining their stance on this matter. “We tend to work with very fast-moving companies, but still on that one, we’re realising that there is some education to do,” he concluded.

