Report: The Chinese government asked DeepSeek to develop its new AI model 'DeepSeek-R2' on Huawei chips, but the attempt failed and the release was delayed



Chinese AI startup DeepSeek drew widespread attention in January 2025 when it released its reasoning model, DeepSeek-R1, as open source. The model achieves high performance while consuming comparatively few computing resources. However, the release of its new AI model, DeepSeek-R2, has been delayed because the Chinese government demanded that Huawei chips be used in its development, according to a report by the Financial Times.

DeepSeek's next AI model delayed by tech issues with Chinese chips
https://www.ft.com/content/eb984646-6320-4bfe-a78d-a1da2274b092

DeepSeek's launch of new AI model delayed by Huawei chip issues, FT reports | Reuters
https://www.reuters.com/world/china/deepseeks-launch-new-ai-model-delayed-by-huawei-chip-issues-ft-reports-2025-08-14/

DeepSeek reportedly urged by Chinese authorities to train new model on Huawei hardware — after multiple failures, R2 training to switch back to Nvidia hardware while Ascend GPUs handle inference | Tom's Hardware
https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-reportedly-urged-by-chinese-authorities-to-train-new-model-on-huawei-hardware-after-multiple-failures-r2-training-to-switch-back-to-nvidia-hardware-while-ascend-gpus-handle-inference

DeepSeek-R1's training cost is said to be about 3% of that of OpenAI's reasoning model o1, and its theoretical profit margin over cost is reported to reach up to 545% per day. In addition, since the model data is publicly available, users can also run it on their own servers or locally.

Why is DeepSeek causing such a fuss and what's so great about it?



The next-generation DeepSeek model, DeepSeek-R2, was reported to be slated for release in May 2025, but as of the time of writing, more than two months after the end of May, it has not been announced. The Financial Times reports that the reason for this is intervention by the Chinese government.

According to three people familiar with the matter, after DeepSeek's success in training R1, the Chinese government urged it to use a platform based on Huawei's Ascend AI chips instead of the Nvidia hardware it had used previously.

DeepSeek complied with the Chinese government's instruction and adopted Huawei chips for the development of R2, but quickly ran into problems such as unstable performance, slow chip-to-chip connections, and limitations in CANN, Ascend's software platform.

Huawei reportedly sent a team of engineers to DeepSeek's data centers to try to resolve the issues, but even then the company was never able to complete a successful training run on the Ascend platform. Sources told the Financial Times that this failure to develop R2 on Huawei chips is what delayed its release.



Ultimately, DeepSeek decided to use Nvidia chips for training the R2 model and Huawei chips for inference. Tech media outlet Tom's Hardware points out that 'this mixed approach is a compromise born out of necessity rather than preference.'

Meanwhile, because Nvidia chips are in short supply in China, many of DeepSeek's customers will also be running R2 on Huawei hardware, so Tom's Hardware argues it makes sense to ensure the new AI model runs well on that platform.

in Web Service, Hardware, Posted by log1h_ik