Tencent Hunyuan Launches a New 3D World Model, Dominating the WorldScore Rankings

By Advanced AI Editor · September 6, 2025 · 6 min read

Tencent’s Hunyuan world model has been updated and now tops the WorldScore rankings on overall capability. HunyuanWorld-Voyager (Hunyuan Voyager for short) is open-source from release, arriving just two weeks after the launch of the HunyuanWorld 1.0 Lite version.

The official introduction calls it the industry’s first ultra-long-range roaming world model with native 3D reconstruction, capable of generating long-distance, globally consistent roaming scenes and exporting the generated videos directly to 3D formats.

It can generate pixel-game-style scenes as well.

The effects are quite impressive: without being told otherwise, one might take them for real footage or screen recordings.

What sets it apart from previous models? Let’s take a look.

One sentence, one image, one scene

Looking closely at the Hunyuan Voyager introduction, the most immediately visible addition is a “roaming scene” function.

It offers stronger interactivity than a 360° panoramic image: users can navigate the scene with mouse and keyboard, which comes much closer to actually exploring a world.

A panel on the left adjusts rendering quality and field of view.

Recording GIFs compresses the image quality, but the actual experience is quite clear.

Moreover, such scenes can be generated with just one sentence or one image.

The Hunyuan team also provides prompt-writing guidance.

The example results are also quite good, good enough to make one want to try them in a VR headset.

Due to file-size limits, the clips here are heavily compressed; the original output is noticeably sharper than shown.

By the way, images used to generate scenes must meet resolution requirements; inputs that are too large or too small will produce errors.

The specific limits are clearly documented; a minimal sketch of such a check appears below.
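
As a rough illustration only: the MIN_SIDE and MAX_SIDE bounds below are placeholder values, not Hunyuan Voyager’s documented limits, so consult the official requirements before relying on them.

```python
# Minimal sketch of an input-image resolution check with assumed bounds.
from PIL import Image

MIN_SIDE = 512    # assumed lower bound, not the official value
MAX_SIDE = 2048   # assumed upper bound, not the official value

def prepare_input(path: str) -> Image.Image:
    """Load an image and rescale it into the assumed supported range."""
    img = Image.open(path).convert("RGB")
    short_side, long_side = min(img.size), max(img.size)
    if short_side < MIN_SIDE:
        scale = MIN_SIDE / short_side     # upscale undersized images
    elif long_side > MAX_SIDE:
        scale = MAX_SIDE / long_side      # downscale oversized images
    else:
        return img                        # already within range
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)
```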

Additionally, Hunyuan Voyager’s 3D-input/3D-output capability is highly compatible with the previously open-sourced HunyuanWorld 1.0: it further extends the 1.0 model’s roaming range, improves generation quality for complex scenes, and enables stylized control and editing of generated scenes.

At the same time, Hunyuan Voyager supports various 3D understanding and generation applications such as video scene reconstruction, 3D object texture generation, customized video style generation, and video depth estimation, showcasing the potential of spatial intelligence.

Introducing Scene Depth Prediction into the Video Generation Process

Why can Hunyuan Voyager generate immersive roaming scenes with one click? The answer lies in its model framework.

The Hunyuan Voyager framework incorporates scene depth prediction into the video generation process, supporting native 3D memory and scene reconstruction for the first time through combined spatial and feature integration, and avoiding the latency and precision loss of traditional post-processing.

At the same time, 3D conditions are added at the input end to ensure accurate camera viewpoints, while 3D point clouds are generated directly at the output end, making the model adaptable to a range of applications. The additional depth information supports video scene reconstruction, 3D object texture generation, stylized editing, and depth estimation.

In simpler terms, it is video generation plus 3D modeling: building on camera-controllable video generation, the model synthesizes RGB-D videos with freely controllable viewpoints and spatial continuity from an initial scene view and a user-specified camera trajectory.
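
To make that data flow concrete, here is a hypothetical interface sketch; every name and shape below is illustrative, not Hunyuan Voyager’s actual API, and the stub returns dummy arrays where a real model would run its diffusion sampler.

```python
# Hypothetical sketch: initial view + camera trajectory in, RGB-D video out.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPose:
    rotation: np.ndarray     # 3x3 world-to-camera rotation
    translation: np.ndarray  # 3-vector translation

def generate_rgbd(initial_view: np.ndarray,
                  trajectory: list[CameraPose],
                  height: int = 480, width: int = 640):
    """Stub: returns one (RGB frame, depth map) pair per trajectory pose."""
    rgb = np.zeros((len(trajectory), height, width, 3), dtype=np.uint8)
    depth = np.ones((len(trajectory), height, width), dtype=np.float32)
    return rgb, depth  # a real model would sample these from the diffusion process
```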

Hunyuan Voyager includes two key components:

(1) World-Consistent Video Diffusion: a unified architecture that generates precisely aligned RGB and depth video sequences conditioned on existing world observations, ensuring global scene consistency.

(2) Long-Distance World Exploration: an efficient world-caching mechanism combining point-cloud pruning with autoregressive inference, supporting iterative scene expansion and smooth video sampling through context-aware consistency techniques; a minimal sketch of such a cache follows the list.
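
Below is a minimal sketch of what such a world cache could look like, under the assumption that pruning means voxel-grid deduplication of accumulated points; the paper’s actual mechanism may differ.

```python
# Sketch of a growing point-cloud cache with voxel-based pruning.
import numpy as np

class WorldCache:
    def __init__(self, voxel_size: float = 0.05):
        self.voxel_size = voxel_size
        self.points = np.empty((0, 3), dtype=np.float32)

    def add(self, new_points: np.ndarray) -> None:
        """Append newly generated points, then prune duplicates."""
        self.points = np.vstack([self.points, new_points.astype(np.float32)])
        self._prune()

    def _prune(self) -> None:
        """Keep at most one point per voxel to bound memory as the scene grows."""
        voxels = np.floor(self.points / self.voxel_size).astype(np.int64)
        _, keep = np.unique(voxels, axis=0, return_index=True)
        self.points = self.points[np.sort(keep)]
```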

To train Hunyuan Voyager, the Tencent Hunyuan team also built a scalable data-construction engine: an automated video-reconstruction pipeline that estimates camera poses and depth from arbitrary input video, producing large-scale, diverse training data without manual annotation.

Based on this pipeline, Hunyuan Voyager combines real-world video with Unreal Engine renders into a large-scale dataset of more than 100,000 video clips; a stub of the per-clip processing step is sketched below.
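
A skeleton of that per-clip step might look as follows; both estimators are stubs standing in for real tools (for example, a structure-from-motion system for poses and a monocular depth network for depth), and the function names are illustrative, not the team’s.

```python
# Sketch of the described annotation-free data-construction loop.
import numpy as np

def estimate_poses(frames: np.ndarray) -> np.ndarray:
    """Stub pose estimator: one 4x4 camera-to-world matrix per frame."""
    return np.tile(np.eye(4, dtype=np.float32), (len(frames), 1, 1))

def estimate_depth(frames: np.ndarray) -> np.ndarray:
    """Stub depth estimator: one depth map per (N, H, W, 3) frame stack."""
    return np.ones(frames.shape[:3], dtype=np.float32)

def build_sample(frames: np.ndarray) -> dict:
    """Assemble one training sample without any manual annotation."""
    return {"rgb": frames,
            "poses": estimate_poses(frames),
            "depth": estimate_depth(frames)}
```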

The initial 3D point cloud generated from the 1.0 model is projected onto the target camera view to guide the diffusion model.

Furthermore, the generated video frames update the cache in real time, forming a closed-loop system that supports any camera trajectory while maintaining geometric consistency. This not only expands the roaming range but also supplements the 1.0 model with new perspective content, enhancing overall generation quality.
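
The projection step in that loop is standard pinhole geometry. A minimal sketch, assuming world-to-camera extrinsics and OpenCV-style intrinsics (conventions the source does not specify):

```python
# Project cached 3D points into a target camera view to guide generation.
import numpy as np

def project_points(points, K, R, t, width, height):
    """Project Nx3 world points; return pixel coordinates of visible points."""
    cam = points @ R.T + t            # world -> camera coordinates
    cam = cam[cam[:, 2] > 1e-6]       # keep points in front of the camera
    pix = cam @ K.T                   # apply intrinsics
    pix = pix[:, :2] / pix[:, 2:3]    # perspective divide
    visible = ((pix[:, 0] >= 0) & (pix[:, 0] < width) &
               (pix[:, 1] >= 0) & (pix[:, 1] < height))
    return pix[visible]
```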

The Hunyuan Voyager model ranks first in overall capability on the WorldScore benchmark released by Stanford University’s Fei-Fei Li team, surpassing existing open-source methods.

This result indicates that, compared to 3D-based methods, Hunyuan Voyager demonstrates superior competitiveness in camera motion control and spatial consistency.

On video generation quality, qualitative and quantitative results both show Hunyuan Voyager producing highly realistic video sequences.

Especially in the last set of qualitative comparisons, only Hunyuan Voyager effectively retained the detailed features of the product in the input images. In contrast, other methods tended to produce noticeable artifacts.

In scene reconstruction, after post-processing with VGGT, Hunyuan Voyager’s reconstructions outperform all baseline models, indicating that its generated videos excel in geometric consistency.

Additionally, if the generated depth information is further used to initialize the point cloud, the reconstruction effect is even better, further proving the effectiveness of the proposed depth generation module for scene reconstruction tasks.

The qualitative results confirm this conclusion. In the last set of examples, Hunyuan Voyager retained the fine details of the chandelier, while other methods struggled to reconstruct even its basic shape.

Moreover, in subjective quality evaluations, Hunyuan Voyager received the highest scores, further validating the exceptional visual realism of the generated videos.

Furthermore, Hunyuan Voyager is fully open-source: the technical report is public, and the source code is freely available on GitHub and Hugging Face.
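
For instance, the weights could be fetched with huggingface_hub as below; the repo id is a guess based on Tencent’s naming scheme, so verify it on Hugging Face before use.

```python
# Download the open-source weights; repo id is an assumption, verify first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="tencent/HunyuanWorld-Voyager")
print(f"Model files downloaded to {local_dir}")
```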

The model’s deployment requirements are listed in the repository.

One More Thing

Tencent Hunyuan has been steadily accelerating its open-source work. Beyond the Hunyuan Voyager series, its releases include the MoE-architecture Hunyuan-Large, the hybrid-inference model Hunyuan-A13B, and several small models for edge scenarios with as few as 0.5B parameters.

Recently, they also open-sourced the translation model Hunyuan-MT-7B and the translation ensemble model Hunyuan-MT-Chimera-7B (Chimera), with the former securing 30 first-place finishes in international machine-translation competitions.

Other major Chinese companies besides Tencent are also open-sourcing at a rapid pace.

Alibaba’s Qwen goes without saying, and recently, Alibaba also open-sourced the video generation model Wan2.2-S2V.

Meituan’s first open-source large model, Longcat-Flash-Chat, was also released recently; I wonder how many people have noticed.


