Recently, the Qwen and Hunyuan teams have open-sourced standout models in image generation and 3D world generation, pushing the boundaries of where AI technology can be applied.
The Qwen large-model team has open-sourced Qwen-Image, which supports both text-to-image (T2I) generation and image editing (TI2I), demonstrating strong multimodal capability. Across numerous benchmarks, Qwen-Image achieved a higher overall score than competing models, with its generation quality standing out in particular. Behind this result is its technical architecture: Qwen2.5-VL processes the text input, translating natural-language prompts into conditioning information the model can use, while a Variational Autoencoder (VAE) and a Multimodal Diffusion Transformer (MMDiT) handle image generation, producing lifelike images from that information.
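To make the pipeline concrete, here is a minimal text-to-image sketch, assuming the checkpoint is published on Hugging Face as "Qwen/Qwen-Image" and loads through the generic DiffusionPipeline interface of the diffusers library; the argument names follow common diffusers conventions and may differ from the actual release.

```python
# Hedged sketch: Qwen-Image inference via Hugging Face diffusers.
# Model ID and generation arguments are assumptions, not confirmed API details.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Inside the pipeline, the Qwen2.5-VL text encoder turns the prompt into
# conditioning features; the MMDiT denoises in the VAE's latent space.
image = pipe(
    prompt="A red panda reading a book under a maple tree",
    num_inference_steps=50,
).images[0]
image.save("panda.png")
```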
To give the model rich, high-quality knowledge to learn from, the Qwen team invested heavily in data. They collected and annotated billions of image-text pairs, then applied rigorous filtering to build a training set spanning categories such as nature, design, people, and synthetic data, giving the model a diverse pool of samples to draw on. During training, the team combined a progressively increasing resolution schedule with reinforcement learning: starting from basic training, they stepped up the model's capabilities incrementally, refining it across a range of tasks.
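The shape of such a progressive-resolution curriculum is easy to illustrate: train at a low resolution first, then restart the data loader at successively higher ones. The tiny network, random batches, and schedule below are toy stand-ins; the Qwen team's actual recipe, losses, and reinforcement-learning stage are not public here.

```python
# Toy sketch of a progressive-resolution training curriculum.
# Everything below (model, schedule, loss) is a hypothetical stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 3, 3, padding=1))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for resolution in (64, 128, 256):       # hypothetical resolution schedule
    for step in range(100):             # hypothetical stage length
        x = torch.randn(4, 3, resolution, resolution)  # stands in for real batches
        loss = nn.functional.mse_loss(model(x), x)     # stands in for the diffusion loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```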
The Tencent Hunyuan team has made strides in 3D world generation with its open-sourced HunyuanWorld-Voyager model. 3D generation has long faced challenges such as maintaining scene coherence over long-range generation and consistency across viewpoint changes, which have limited the technology's reach. HunyuanWorld-Voyager introduces joint RGB-D video modeling and a spatial caching mechanism, enabling it to generate structurally continuous, depth-consistent point-cloud videos: users can roam freely along set trajectories and feel immersed in a real 3D world.
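The link from joint RGB-D output to a point cloud is direct: each pixel's depth can be unprojected through the camera intrinsics into a 3D point. The sketch below shows that single step for one frame; the intrinsics and random data are hypothetical, and Voyager's own caching and fusion logic is considerably more involved than this.

```python
# Minimal single-frame RGB-D unprojection: depth map -> colored point cloud.
# Intrinsics (fx, fy, cx, cy) and the random frame are illustrative assumptions.
import numpy as np

H, W = 270, 480                            # illustrative frame size
fx = fy = 300.0                            # assumed focal lengths (pixels)
cx, cy = W / 2, H / 2                      # assumed principal point

rgb = np.random.rand(H, W, 3)              # stand-in for a generated RGB frame
depth = np.random.uniform(1, 10, (H, W))   # stand-in for the generated depth map

u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * depth / fx                  # pinhole-camera back-projection
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # N x 3 point cloud
colors = rgb.reshape(-1, 3)
print(points.shape, colors.shape)          # (129600, 3) (129600, 3)
```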
On the WorldScore leaderboard led by Fei-Fei Li's team at Stanford, Voyager took first place by average score, demonstrating its strength in 3D generation. Generating 540p video requires only 60GB of memory on a single GPU, significantly lowering the barrier to entry so that more developers and researchers can put the technology to use. The model also supports a rich set of functions such as stylized editing and image-to-3D conversion, opening up diverse possibilities for 3D tasks. Whether building immersive virtual scenes in game development, presenting 3D effects intuitively in architectural design, or simulating realistic environments for autonomous driving, HunyuanWorld-Voyager can provide robust support with its outstanding performance.