
DeepSeek Shifts Smaller AI To Huawei Chips

DeepSeek will use Huawei AI chips instead of Nvidia’s to train its smaller AI models, as the company seeks to reduce its reliance on Nvidia processors. The shift comes as DeepSeek tests new AI GPU accelerators from several manufacturers.

DeepSeek reportedly plans to adopt Huawei chips for its smaller AI models, an initiative aimed at reducing the company’s dependence on Nvidia. DeepSeek is currently evaluating new AI GPU accelerators from Huawei, Baidu, and Cambricon for training models smaller than its R2 model.

DeepSeek intends to keep using Nvidia processors for its R2 large language model (LLM), considering them a reliable option for its current products. The company had previously considered Huawei’s Ascend processors for its next-generation AI reasoning model but may defer that plan.

DeepSeek has encountered challenges with the upcoming R2 model. Despite engineering support from Huawei, development issues led to the postponement of the R2 launch, which is now expected later this year.

DeepSeek is relying on Nvidia chips to build the more powerful R2 reasoning model, while using Huawei Ascend processors to train and refine smaller variants of R2. The company has not given a debut date for consumer platforms running LLMs powered by Huawei AI chips.

An Nvidia spokesperson stated, “The competition has undeniably arrived. The world will choose the best tech stack for running the most popular applications and open-source models. To win the AI race, U.S. industry must earn the support of developers everywhere, including China.”
