
(Source: Shutterstock)
Several developments around Beijing-based startup DeepSeek, and their ripple effects on global AI competition, dominated the AI-in-China front this week. The headlines ranged from a hardware-driven delay of the firm’s next large model to a new release tuned for domestic chips. In the U.S., OpenAI CEO Sam Altman credited Chinese open-source pressure with prompting his company to adjust its own strategy on model transparency.
R2 Launch Postponed Amid Huawei Chip Push
DeepSeek’s follow-up to January’s R1, known internally as R2, was expected to debut earlier this summer. Instead, the rollout has been put on hold after the firm was unable to complete training runs on Huawei’s Ascend 910C processors, according to the Financial Times. The report says Chinese authorities had “encouraged” DeepSeek to adopt Huawei silicon rather than Nvidia hardware as part of China’s wider goal of replacing U.S. technology in AI systems.
According to the report, the Huawei chips suffer from stability problems, lower inter-chip bandwidth, and less mature software than Nvidia’s products. Although Huawei sent a team of engineers to assist with developing R2, DeepSeek was unable to complete a successful training run on the Ascend chips. The company is still working with Huawei to make the model compatible with Ascend for inference.
The R2 delay highlights the main challenge facing China’s AI push: Beijing wants domestic AI firms to prove that Chinese chips can match U.S. products, yet today’s most advanced models still rely on Nvidia’s software stack and established developer tools. For DeepSeek, the missed launch window could leave it catching up to U.S. releases like OpenAI’s GPT-5. The company says it is working to resolve hardware issues before the end of the year, and Chinese media reports suggest that despite these setbacks, R2 could be released in the next few weeks.
DeepSeek V3.1 Debuts with Domestic Chip Mode
While R2 has been delayed, DeepSeek moved ahead with an incremental upgrade to its flagship V3 series. V3.1 adds an FP8 precision format that the company says is “optimized for soon-to-be-released next-generation domestic chips,” though it did not specify any chips or manufacturers. FP8 stores each parameter in just eight bits rather than 16 or 32, halving memory use relative to 16-bit formats and allowing faster throughput on larger models with less bandwidth and compute. Additionally, similar to OpenAI’s GPT-5, DeepSeek’s updated V3.1 model introduces a hybrid inference switch that toggles outputs between reasoning and non-reasoning modes, which could reduce inference costs.
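As a back-of-the-envelope illustration of why the precision change matters, here is a minimal Python sketch comparing the raw weight-storage footprint at 8-, 16-, and 32-bit precision. The parameter count is a placeholder chosen for illustration, not a published DeepSeek figure.

# Rough comparison of weight-storage footprints at different precisions.
# The parameter count is a hypothetical placeholder, not a DeepSeek figure.
PARAMS = 100e9  # assume a 100-billion-parameter model

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Raw storage needed for the weights alone, in gigabytes."""
    return num_params * bits_per_param / 8 / 1e9

for name, bits in [("FP32", 32), ("FP16/BF16", 16), ("FP8", 8)]:
    print(f"{name:>10}: {weight_memory_gb(PARAMS, bits):,.0f} GB")

# FP8 halves the footprint relative to 16-bit formats (and quarters it
# relative to FP32), leaving more memory for activations and the KV cache.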

(Source: Shutterstock)
Updating the model to work with Chinese hardware suggests DeepSeek is preparing for upcoming releases of more capable domestic chips. Another clue is in the market: Chinese semiconductor stocks jumped the day after DeepSeek’s announcement. Cambricon Technologies, a Beijing chipmaker founded in 2016, saw its stock climb 20%, lifting its market value to around $70 billion, while foundry giants Semiconductor Manufacturing International Corporation and Hua Hong Semiconductor rose 10% and 18%, respectively. As U.S. export controls continue to tighten, it will be interesting to see what new chips emerge in China’s quest for AI self-reliance.
Sam Altman Cites DeepSeek in OpenAI’s Open-Model Pivot
Recently, OpenAI released its first set of open-weight models since 2019, called gpt-oss. Sam Altman told CNBC that competition from Chinese open-source efforts like DeepSeek’s “was a factor” in that decision, noting, “It was clear that if we didn’t do it, the world was going to be mostly built on Chinese open source models.”
Altman’s remarks indicate a shift from January, when he praised DeepSeek’s R1 as “impressive” yet maintained that the scale of OpenAI’s compute capabilities would keep it ahead of the competition. Altman also shared his thoughts on U.S. export controls: “My instinct is that doesn’t work,” he said. “You can export-control one thing, but maybe not the right thing … maybe people build fabs or find other workarounds.”
OpenAI’s open-weight gpt-oss joins other open model families like Alibaba’s Qwen and Meta’s Llama. Models with open weights, meaning publicly downloadable parameters, allow researchers to run them on their own hardware instead of relying on a hosted API. This flexibility is vital for scientific use cases where reproducibility, transparency, security, and cost are paramount. The gpt-oss models were also released under the Apache 2.0 license, meaning researchers can freely use, modify, and redistribute them, which could speed adoption and downstream innovation.
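As a minimal sketch of what “running on your own hardware” looks like in practice, the snippet below loads an open-weight checkpoint with the Hugging Face Transformers library and generates text locally. The repository ID shown is assumed to point at the smaller gpt-oss checkpoint; verify the exact model name and hardware requirements before use.

# Minimal sketch: running an open-weight model locally instead of calling a
# hosted API. The repository ID is assumed; check the exact name on the hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed repo ID for the smaller gpt-oss model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Explain the difference between open-weight and API-only models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))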
The Takeaway
DeepSeek’s hardware challenges, the V3.1 pivot toward domestic chips, and OpenAI’s open-weight release reveal a tense feedback loop of chip policy, model design, and licensing strategy across the Pacific. China’s push for self-reliance is forcing its labs to innovate around domestic silicon while U.S. rivals still hold the hardware and software keys to the AI kingdom. For scientists, this loop could mean a wider choice of models and hardware, but it will also require careful benchmarking and cross-compatibility testing when using these tools for frontier-scale research.