Opinions about Qwen3-Next vary widely in the community (for example on LocalLLaMA). Although the benchmarks place the model slightly below the large Qwen3-235B-A22B, the gap is small, which is remarkable given that Qwen3-Next has only about a third of the parameters and a seventh of the active parameters.
Qwen3-Next could point the way to a future-proof architecture, provided it does not turn out to be a dead end. Many of its innovations appear to have proved their worth: the model is highly performant and not (much) worse than significantly larger models.
It would be exciting if a group with sufficient computing capacity tried to bring together the innovations of recent months: the Muon optimiser from Kimi K2, the goldfish loss from Apertus, FP8 from DeepSeek (or MXFP4 from GPT-OSS), a further developed GRPO (QRPO), the hybrid attention from Qwen3-Next and the latent attention from DeepSeek. All of these are adjusting screws: technical design choices and hyperparameters that can be tuned. Perhaps the budget required for this is not even large: according to the peer-reviewed Nature publication, DeepSeek spent only around 300,000 US dollars on training R1 on top of the V3 base model.
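One of these ideas, the goldfish loss, fits in a few lines: during training, a pseudo-random fraction of token positions is simply excluded from the cross-entropy loss, so the model never gets a gradient for those exact tokens and is less able to memorise training passages verbatim. The following PyTorch sketch is an illustration under our own assumptions (a seeded random mask instead of the context hash described in the paper), not Apertus's actual implementation.

```python
import torch
import torch.nn.functional as F

def goldfish_cross_entropy(logits, targets, k=4, seed=0):
    """Cross-entropy in which roughly 1/k of all token positions are
    pseudo-randomly dropped from the loss (the core idea of the goldfish loss).

    logits:  (batch, seq_len, vocab) model outputs
    targets: (batch, seq_len) next-token ids
    """
    batch, seq_len, vocab = logits.shape

    # Per-token loss without reduction.
    per_token = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).reshape(batch, seq_len)

    # Deterministic pseudo-random mask that drops about 1/k of the positions.
    # The paper derives this mask from a hash of the local context; a seeded
    # generator is used here only to keep the sketch short.
    gen = torch.Generator(device=targets.device).manual_seed(seed)
    keep = torch.randint(0, k, (batch, seq_len), generator=gen,
                         device=targets.device) != 0

    # Average only over the kept positions.
    return (per_token * keep).sum() / keep.sum().clamp(min=1)
```

The dropped tokens still appear in the input context; they are merely removed from the training signal, which is why the technique costs almost nothing in model quality at moderate values of k.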
Alibaba, meanwhile, is releasing new models almost daily, among them the multimodal Qwen Omni models, Qwen Image Edit and Tongyi DeepResearch 30B-A3B.
(afl)