Tencent’s Hunyuan world model has been updated and now tops the WorldScore rankings on overall capability. HunyuanWorld-Voyager (hereafter Hunyuan Voyager) is open-source from day one. This comes just two weeks after the launch of the HunyuanWorld 1.0 Lite version.
The official introduction calls it the industry’s first ultra-long-range roaming world model with native 3D reconstruction support, capable of generating long-distance, globally consistent roaming scenes and exporting the generated videos directly to 3D formats.
It also handles pixel-art game scenes:
The results are convincing enough that, if no one told you, you might take them for real footage or screen recordings.
What sets it apart from previous models? Let’s take a look.
One sentence, one image, one scene
Looking closely at the Hunyuan Voyager introduction, the most immediately visible new feature is the “roaming scene” function.
It is more interactive than a 360° panorama: you can move through the scene with mouse and keyboard, which makes exploring the generated world far more immersive.
A panel on the left lets you adjust rendering quality and field of view:
The GIF recordings compress the image quality; the actual experience is much sharper.
Moreover, such scenes can be generated with just one sentence or one image.
The Hunyuan team also provides prompt-writing guidance:
The example scenes are impressive enough to make you want to try them with a VR headset.
Because of file-size limits the clips here are heavily compressed, so here is a screenshot showing the original quality:
Note that there are resolution requirements for the input images used to generate scenes; images that are too large or too small will trigger errors.
Specific requirements have been clearly outlined:
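As a rough illustration, a pre-check like the one below can catch out-of-range inputs before submitting them for generation. The MIN_SIDE and MAX_SIDE values are placeholders, not the official limits from the requirements above.

```python
from PIL import Image

# Hypothetical bounds -- substitute the official limits from the requirements above.
MIN_SIDE = 512
MAX_SIDE = 2048

def check_input_image(path: str) -> None:
    """Reject images whose resolution falls outside the assumed range."""
    with Image.open(path) as img:
        w, h = img.size
    if min(w, h) < MIN_SIDE or max(w, h) > MAX_SIDE:
        raise ValueError(
            f"Image is {w}x{h}; expected each side within [{MIN_SIDE}, {MAX_SIDE}]."
        )

check_input_image("scene_input.jpg")
```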
Additionally, Hunyuan Voyager’s 3D input and 3D output are highly compatible with the previously open-sourced HunyuanWorld 1.0, which further expands the roaming range of the 1.0 model, improves generation quality for complex scenes, and enables stylized control and editing of generated scenes.
At the same time, Hunyuan Voyager supports various 3D understanding and generation applications such as video scene reconstruction, 3D object texture generation, customized video style generation, and video depth estimation, showcasing the potential of spatial intelligence.
Introducing Scene Depth Prediction into the Video Generation Process
Why can Hunyuan Voyager generate immersive roaming scenes with just one click? This question relates to its model framework.
The Hunyuan Voyager framework incorporates scene depth prediction directly into the video generation process. By jointly fusing spatial and feature-level information, it supports native 3D memory and scene reconstruction for the first time, avoiding the latency and precision loss of traditional post-processing reconstruction.
At the same time, 3D conditions are added at the input end for precise viewpoint control, while 3D point clouds are generated directly at the output end, making the model adaptable to a range of applications. The extra depth information supports functions such as video scene reconstruction, 3D object texture generation, stylized editing, and depth estimation.
Put simply, it is video generation plus 3D modeling: building on camera-controllable video generation, it synthesizes RGB-D video with freely controllable viewpoints and spatial continuity from an initial scene view and a user-specified camera trajectory.
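The RGB-D output is what makes direct 3D export possible: given per-pixel depth and the camera intrinsics, each frame can be lifted into a 3D point cloud. Below is a minimal sketch of that back-projection using a generic pinhole camera model; it illustrates the idea only and is not Voyager’s actual exporter.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (H, W) into camera-space 3D points (H*W, 3)
    using a standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat surface 2 m in front of a 640x480 camera.
points = depth_to_points(np.full((480, 640), 2.0), fx=500, fy=500, cx=320, cy=240)
print(points.shape)  # (307200, 3)
```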
Hunyuan Voyager includes two key components:
(1) World-Consistent Video Diffusion: a unified architecture that generates precisely aligned RGB and depth video sequences conditioned on existing world observations, ensuring global scene consistency.
(2) Long-Distance World Exploration: an efficient world caching mechanism with point-cloud pruning and autoregressive inference, supporting iterative scene expansion and smooth video sampling through context-aware consistency.
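The article does not spell out how the world cache is pruned. One common way to keep a growing point cloud bounded is voxel-grid downsampling, which keeps a single representative point per occupied voxel; the sketch below shows that generic technique, not Voyager’s actual pruning rule.

```python
import numpy as np

def voxel_prune(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Keep one point per occupied voxel so an accumulated point cloud stays
    bounded as new frames are merged in. `points` has shape (N, 3)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows gives the first index of each occupied voxel.
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

cache = np.random.rand(100_000, 3) * 10.0       # stand-in for accumulated scene points
print(voxel_prune(cache, voxel_size=0.5).shape)  # far fewer points, same spatial coverage
```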
To train Hunyuan Voyager, the Tencent Hunyuan team also built a scalable data construction engine: an automated video reconstruction pipeline that estimates camera poses and depth from arbitrary input video, producing large-scale, diverse training data without relying on manual annotation.
Based on this pipeline, Hunyuan Voyager combines real-world video footage with videos rendered in Unreal Engine into a large-scale dataset of over 100,000 video clips.
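The article describes this pipeline only at a high level. As one hedged illustration of the idea (frames in, per-frame depth out), the snippet below samples frames from a video with OpenCV and runs an off-the-shelf monocular depth model (MiDaS, purely as a stand-in); the team’s actual estimators and the camera-pose recovery stage are not specified here and are omitted.

```python
import cv2
import torch

# Stand-in components: MiDaS for relative monocular depth. Camera poses would come
# from a separate structure-from-motion stage, which is not shown in this sketch.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def video_to_depth_maps(video_path: str, every_n: int = 10):
    """Sample every n-th frame of a video and estimate a relative depth map for it."""
    cap = cv2.VideoCapture(video_path)
    depths, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                pred = midas(transform(rgb))
            depths.append(pred.squeeze().cpu().numpy())
        idx += 1
    cap.release()
    return depths

depth_maps = video_to_depth_maps("walkthrough.mp4")
```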
When Voyager works together with the 1.0 model, the initial 3D point cloud generated by 1.0 is projected onto the target camera view to guide Voyager’s diffusion model.
Furthermore, the generated video frames update the cache in real time, forming a closed-loop system that supports any camera trajectory while maintaining geometric consistency. This not only expands the roaming range but also supplements the 1.0 model with new perspective content, enhancing overall generation quality.
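In code terms, the closed loop amounts to: project the cached points into the next requested camera view, use that partial render as a condition for generating the next clip, then lift the new RGB-D frames back into 3D and merge them into the cache. A schematic sketch follows; `generate_clip` and `lift_to_points` are placeholders for the diffusion step and the depth back-projection, not the real Voyager API.

```python
import numpy as np

def project_to_view(points: np.ndarray, K: np.ndarray, w2c: np.ndarray) -> np.ndarray:
    """Project cached world points (N, 3) into pixel coordinates of a target camera,
    given intrinsics K (3x3) and a world-to-camera transform w2c (4x4)."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (w2c @ homo.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]

def explore(cache: np.ndarray, trajectory, K, generate_clip, lift_to_points):
    """Schematic closed loop: condition each new clip on the projected cache,
    then merge the newly generated geometry back into the cache."""
    for w2c in trajectory:
        condition = project_to_view(cache, K, w2c)   # partial view of known geometry
        rgbd_clip = generate_clip(condition, w2c)    # placeholder for the diffusion step
        cache = np.vstack([cache, lift_to_points(rgbd_clip)])
    return cache

# Toy usage with stand-in functions, just to show the data flow.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
traj = [np.eye(4) for _ in range(3)]
final_cache = explore(
    np.random.rand(1000, 3), traj, K,
    generate_clip=lambda cond, pose: None,            # stand-in generator
    lift_to_points=lambda clip: np.random.rand(50, 3) # stand-in back-projection
)
print(final_cache.shape)
```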
The Hunyuan Voyager model ranks first in overall capability on the WorldScore benchmark released by Stanford University’s Fei-Fei Li team, surpassing existing open-source methods.
This result indicates that, compared to 3D-based methods, Hunyuan Voyager demonstrates superior competitiveness in camera motion control and spatial consistency.
On video generation quality, both qualitative and quantitative results show that Hunyuan Voyager produces highly realistic video sequences.
Especially in the last set of qualitative comparisons, only Hunyuan Voyager effectively retained the detailed features of the product in the input images. In contrast, other methods tended to produce noticeable artifacts.
In scene reconstruction, after post-processing with VGGT, Hunyuan Voyager’s results outperform all baseline models, indicating that its generated videos excel in geometric consistency.
Additionally, if the generated depth information is further used to initialize the point cloud, the reconstruction effect is even better, further proving the effectiveness of the proposed depth generation module for scene reconstruction tasks.
The qualitative results in the above images also confirm this conclusion. In the last set of examples, Hunyuan Voyager was able to retain the detailed features of the chandelier well, while other methods struggled to reconstruct basic shapes.
Moreover, in subjective quality evaluations, Hunyuan Voyager received the highest scores, further validating the exceptional visual realism of the generated videos.
Furthermore, Hunyuan Voyager is fully open-source: the technical report is publicly available, and the code and model weights are freely accessible on GitHub and Hugging Face.
The deployment requirements for the model are as follows:
One More Thing
Tencent Hunyuan has been steadily accelerating its open-source efforts. Beyond the Hunyuan world model series, it has released the MoE-architecture Hunyuan Large, the hybrid-inference model Hunyuan-A13B, and several small models for edge scenarios with as few as 0.5B parameters.
Recently, they also open-sourced the translation model Hunyuan-MT-7B and the translation integration model Hunyuan-MT-Chimera-7B (Chimera), with the former securing 30 first-place finishes in international machine translation competitions.
Other major Chinese companies besides Tencent are also open-sourcing at a rapid pace.
Alibaba’s Qwen goes without saying, and recently, Alibaba also open-sourced the video generation model Wan2.2-S2V.
Meituan’s first open-source large model, Longcat-Flash-Chat, was also released recently; did you notice?