Tencent Hunyuan Launches a New 3D World Model, Achieving Top Position in WorldScore Rankings

By Advanced AI Editor | September 5, 2025


Tencent’s Hunyuan world model has been updated and now tops the WorldScore rankings for overall capability. HunyuanWorld-Voyager (abbreviated as Hunyuan Voyager) is open source from release, arriving just two weeks after the launch of the HunyuanWorld 1.0 Lite version.

The official introduction describes it as the industry’s first ultra-long-range roaming world model with native 3D reconstruction, capable of generating long-distance, globally consistent roaming scenes and of exporting generated videos directly into 3D formats.

The demo videos range from photorealistic scenes to pixel-style game worlds, and the results are impressive; one might mistake them for real footage or screen recordings.

How does it differ from previous models? Let’s take a look.

One sentence, one image, one scene

Judging from the official introduction, the most visible addition in Hunyuan Voyager is the “roaming scenes” feature.

It offers stronger interactivity than a 360° panoramic image: users can navigate within the scene using the mouse and keyboard, which makes exploring the generated world a far better experience.

A panel on the left side of the viewer allows adjustments to rendering quality and field of view.

The GIF recordings here compress the quality, but the actual experience is quite sharp.

Moreover, such scenes can be generated with just a sentence or an image.

The Hunyuan team has also published prompt-writing guidelines.

The official example results are similarly impressive; the experience is good enough to make one want to try it with a VR headset.

Due to file-size limits, the clips shown here have been compressed several times; the original output is noticeably higher quality.

By the way, images used to generate scenes must meet resolution requirements; inputs that are too large or too small will trigger an error. The specific limits are clearly documented.
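If you want to catch that error before submitting a job, a minimal pre-flight check is easy to write. The bounds below are illustrative placeholders, not the official limits; substitute the values from the project documentation:

```python
# Minimal pre-flight check for a scene-generation input image.
# MIN_SIDE/MAX_SIDE are hypothetical placeholders, not the official limits.
from PIL import Image

MIN_SIDE, MAX_SIDE = 512, 2048

def validate_input_image(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if min(w, h) < MIN_SIDE or max(w, h) > MAX_SIDE:
        raise ValueError(
            f"Image is {w}x{h}; each side must fall within "
            f"[{MIN_SIDE}, {MAX_SIDE}] for this sketch."
        )
    return img
```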

Additionally, Hunyuan Voyager’s 3D-input / 3D-output feature is highly compatible with the previously open-sourced HunyuanWorld 1.0 model: it further extends the 1.0 model’s roaming range, improves generation quality for complex scenes, and enables stylized control and editing of the generated scenes.

At the same time, Hunyuan Voyager supports various applications in 3D understanding and generation, including video scene reconstruction, 3D object texture generation, customized video style generation, and video depth estimation, showcasing the potential of spatial intelligence.

Introducing scene depth prediction into the video generation process

Why can Hunyuan Voyager generate immersive roaming scenes in one click? The answer lies in its model architecture.

Hunyuan Voyager integrates scene depth prediction into the video generation process and, through a combined spatial-and-feature approach, supports native 3D memory and scene reconstruction for the first time, avoiding the latency and accuracy losses of traditional post-processing.

Additionally, 3D conditioning at the input end ensures accurate camera viewpoints, while the output end directly generates 3D point clouds, adapting to a variety of application scenarios. The extra depth information also supports video scene reconstruction, 3D object texture generation, stylized editing, and depth estimation.

In simpler terms, it is video generation plus 3D modeling: building on camera-controllable video generation, the model synthesizes RGB-D video with free viewpoint control and spatial coherence from an initial scene view and a user-specified camera trajectory.
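RGB-D output is what makes the “export directly to 3D” claim work: every pixel with a depth value unprojects to a 3D point through the camera intrinsics. Here is a minimal sketch under a standard pinhole camera model (the intrinsics fx, fy, cx, cy are assumed inputs, not values from the paper):

```python
# Unproject one RGB-D frame into a colored point cloud (pinhole model).
import numpy as np

def rgbd_to_pointcloud(rgb: np.ndarray, depth: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # back-project through the intrinsics
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return points, colors
```

Because the model emits depth alongside RGB, this conversion needs no separate reconstruction pass, which is exactly the post-processing step Voyager avoids.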

Hunyuan Voyager consists of two key components:

(1) World-consistent video diffusion: a unified architecture that generates precisely aligned RGB and depth video sequences conditioned on existing world observations, ensuring global scene consistency.

(2) Long-distance world exploration: an efficient world-caching mechanism that combines point-cloud pruning with autoregressive inference, supporting iterative scene expansion and smooth video sampling through context-aware consistency techniques.

To train Hunyuan Voyager, the Tencent Hunyuan team also built a scalable data-construction engine: an automated video-reconstruction pipeline that estimates camera poses and depth for any input video, enabling large-scale, diverse training data without manual labeling.

Based on this pipeline, the team combined video collected from the real world with clips rendered in Unreal Engine to construct a dataset of more than 100,000 video clips.
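As a rough illustration of that idea, the per-clip step of such a pipeline might look like the sketch below; the pose and depth estimators are toy stand-ins for whatever SfM or monocular-depth models the actual engine uses:

```python
import numpy as np

def estimate_camera_poses(frames):
    # Stand-in: a real pipeline would run structure-from-motion or
    # learned visual odometry. Identity poses for illustration only.
    return [np.eye(4) for _ in frames]

def estimate_depth(frame):
    # Stand-in for a monocular depth estimator.
    return np.ones(frame.shape[:2], dtype=np.float32)

def build_training_sample(frames):
    """Turn a raw clip into an (rgb, pose, depth) record with no manual labels."""
    poses = estimate_camera_poses(frames)
    depths = [estimate_depth(f) for f in frames]
    return {"rgb": frames, "poses": poses, "depth": depths}
```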

The initial 3D point cloud cache generated from the 1.0 model is projected onto the target camera view to guide the diffusion model.

Moreover, the generated video frames will also update the cache in real-time, forming a closed-loop system that supports any camera trajectory while maintaining geometric consistency. This not only expands the roaming range but also supplements the 1.0 model with new perspective content, enhancing overall generation quality.
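Here is a conceptual sketch of that closed loop. The cache operations are toy stand-ins and `model.sample` is a hypothetical call; none of these names come from the actual codebase:

```python
import numpy as np

class WorldCache:
    """Toy stand-in for the point-cloud world cache described above."""
    def __init__(self):
        self.points = np.empty((0, 3))

    def project(self, pose):
        # Real system: render the cached points into the target view to
        # produce a partial guidance image for the diffusion model.
        return self.points  # stub

    def insert(self, depth, pose):
        # Real system: unproject the generated depth map through `pose`
        # (as in rgbd_to_pointcloud above) and append the new 3D points.
        self.points = np.vstack([self.points, np.random.rand(16, 3)])  # stub

    def prune(self, max_points=1_000_000):
        # Point-cloud pruning keeps autoregressive expansion tractable.
        if len(self.points) > max_points:
            keep = np.random.choice(len(self.points), max_points, replace=False)
            self.points = self.points[keep]

def explore(model, trajectory):
    """Each new view is conditioned on the cache, and each generated
    frame is lifted back into it, closing the loop."""
    cache, frames = WorldCache(), []
    for pose in trajectory:
        guidance = cache.project(pose)       # condition on the known world
        rgb, depth = model.sample(guidance)  # hypothetical diffusion call
        cache.insert(depth, pose)            # update the cache in real time
        cache.prune()
        frames.append(rgb)
    return frames
```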

On the WorldScore benchmark released by Stanford University’s Fei-Fei Li team, Hunyuan Voyager ranks first in overall capability, surpassing existing open-source methods.

This result suggests that, compared with 3D-based methods, Hunyuan Voyager is particularly strong in camera motion control and spatial consistency.

On video generation quality, qualitative and quantitative results alike show that Hunyuan Voyager produces highly realistic video sequences.

In the last set of qualitative comparisons in particular, only Hunyuan Voyager effectively retained the detailed features of the products in the input images, while other methods tended to produce obvious artifacts.

On scene reconstruction, when post-processed with VGGT, Hunyuan Voyager’s reconstructions outperform all baseline models, indicating that its generated videos have superior geometric consistency.

Additionally, if the generated depth information is further used to initialize the point cloud, the reconstruction effect improves, further proving the effectiveness of the proposed depth generation module for scene reconstruction tasks.

The qualitative comparisons corroborate this conclusion: in the last set of examples, Hunyuan Voyager effectively preserves the fine details of a chandelier, while other methods struggle to reconstruct even its basic shape.

Moreover, in subjective quality evaluations, Hunyuan Voyager received the highest ratings, further validating that the generated videos possess exceptional visual authenticity.

Furthermore, Hunyuan Voyager is fully open source, with related technical reports publicly available, and the source code is freely accessible on GitHub and Hugging Face.

The hardware requirements for model deployment are listed in the project’s GitHub README.
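As a rough illustration, a pre-flight check before launching inference could look like the following; the memory threshold here is an explicit placeholder, not the official figure:

```python
import torch

REQUIRED_GB = 60  # placeholder; consult the official README for the real figure

def check_gpu(required_gb: float = REQUIRED_GB) -> None:
    if not torch.cuda.is_available():
        raise RuntimeError("Inference requires a CUDA-capable GPU.")
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if total_gb < required_gb:
        raise RuntimeError(
            f"Detected {total_gb:.0f} GB of GPU memory; this sketch "
            f"assumes ~{required_gb} GB."
        )

check_gpu()
```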

One More Thing

Tencent Hunyuan is steadily accelerating its open-source efforts. Beyond Hunyuan Voyager, the Hunyuan model family includes representative releases such as the MoE-architecture Hunyuan Large, the hybrid-reasoning model Hunyuan-A13B, and several small models aimed at edge scenarios, the smallest with only 0.5B parameters.

Recently, the team also open-sourced the translation model Hunyuan-MT-7B and the integrated translation model Hunyuan-MT-Chimera-7B, the latter of which has taken first place 30 times in international machine-translation competitions.

Other Chinese tech giants besides Tencent are open-sourcing rapidly as well.

Alibaba’s Qwen needs no introduction, and Alibaba also recently open-sourced the video generation model Wan2.2-S2V.

Meituan’s first open-source large model, Longcat-Flash-Chat, was also released recently and is worth keeping an eye on.


