Speculation is swirling across the semiconductor industry after artificial intelligence firm DeepSeek recently suggested that China’s next-generation AI chips will soon be released.
The Hangzhou-based start-up’s cryptic one-line post on WeChat triggered online discussions about which AI chip supplier, or suppliers, would unveil this breakthrough, even as US tech restrictions remain in place.
Candidates named in those discussions include Huawei Technologies, Cambricon Technologies, Moore Threads, Hygon Information Technology and MetaX Integrated Circuits.
DeepSeek’s post on Thursday said the “UE8M0 FP8 scale” of its V3.1 AI model was particularly designed “for home-grown chips to be released soon”. Apart from not identifying the supplier, the firm did not specify how the new AI chips would be used – for training of models or inferencing, the stage where an AI system puts its learning into action.
“It’s also likely that the new model will support a number of AI chips, not just those from Huawei or another company,” Liu Jie, an engineer at a Shanghai-based developer of graphics processing units (GPUs), said on Friday.
The speculation reflects not only growing confidence in the capabilities of locally designed and produced integrated circuits, but also how China’s semiconductor industry has steadily overcome US tech sanctions.
Huawei Technology’s AI chips are expected to gain more users in China, as government regulators question the security of Nvidia’s widely adopted graphics processing units. Photo: Shutterstock
FP8, or floating-point 8, is a reduced-precision number format that speeds up AI training and inference by using less memory and bandwidth. UE8M0 is an 8-bit, exponent-only scale format used alongside FP8 values; it is touted to increase training efficiency and reduce hardware requirements by cutting memory use by up to 75 per cent.
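To make the format concrete, here is a minimal sketch of how a UE8M0 scale byte can be encoded and decoded. It assumes the exponent-only layout described in the Open Compute Project's Microscaling (MX) specification (8 exponent bits, no mantissa, bias 127, so only powers of two are representable); DeepSeek has not published its exact implementation, and the function names here are illustrative.

```python
import math

# UE8M0: unsigned, 8 exponent bits, 0 mantissa bits.
# A byte b represents the power-of-two scale 2^(b - 127).
UE8M0_BIAS = 127

def encode_ue8m0(scale: float) -> int:
    """Round a positive scale to the nearest power of two; return its byte."""
    if scale <= 0:
        raise ValueError("UE8M0 scales must be positive")
    exp = round(math.log2(scale))               # nearest power-of-two exponent
    exp = max(-UE8M0_BIAS, min(128, exp))       # clamp to the representable range
    return exp + UE8M0_BIAS

def decode_ue8m0(byte: int) -> float:
    """Recover the power-of-two scale from its UE8M0 byte."""
    return 2.0 ** (byte - UE8M0_BIAS)

# On dequantisation, a block of FP8 values is multiplied by such a scale:
print(decode_ue8m0(encode_ue8m0(0.125)))        # 0.125 (already a power of two)
```

Because the scale carries only an exponent, multiplying by it is effectively a shift of the floating-point exponent, which is cheap to implement in hardware and is one reason exponent-only block scales suit accelerator designs.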
“The architecture is specifically designed to accommodate the hardware logic of Chinese chips, which enables a model to run smoothly on such hardware,” said Su Lian Jye, chief analyst at research firm Omdia.
He added that Chinese-designed chips that are currently capable of supporting FP8 include those from Huawei’s HiSilicon, Cambricon, MetaX and Moore Threads.