Tencent Hunyuan Releases and Open Sources Image Model 2.1, Supporting Native 2K High-Quality Images

By Advanced AI Editor | September 10, 2025

On the night of September 9, Tencent released and open-sourced its latest image model, Hunyuan Image 2.1. The model offers industry-leading generation capabilities and supports native 2K high-definition images.

After its open-source release, the Hunyuan Image 2.1 model quickly climbed the popularity charts on Hugging Face, becoming the third most popular model globally. Among the top eight models on the list, the Tencent Hunyuan model family occupies three spots.

At the same time, the Tencent Hunyuan team revealed that they will soon release a native multimodal image generation model.

Hunyuan Image 2.1 is a comprehensive upgrade based on the 2.0 architecture, focusing more on balancing generation effects and performance. The new version supports native input in both Chinese and English and can generate high-quality outputs from complex semantics in both languages. Furthermore, there have been significant improvements in the overall aesthetic quality of generated images and the diversity of applicable scenarios.

This means that designers, illustrators, and other visual creators can more efficiently and conveniently translate their ideas into images. Whether generating high-fidelity creative illustrations, producing posters and packaging designs with Chinese and English slogans, or creating complex four-panel comics and graphic novels, Hunyuan Image 2.1 can provide fast, high-quality support for creators.

Hunyuan Image 2.1 is a fully open-source foundational model that not only offers industry-leading generation effects but is also flexible enough to adapt to the diverse derivative needs of the community. Currently, the model weights and code for Hunyuan Image 2.1 have been officially released in open-source communities such as Hugging Face and GitHub, allowing individual and corporate developers to conduct research or develop various derivative models and plugins based on this foundational model.
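
For developers who want to experiment with the open release, the sketch below fetches the weights with the huggingface_hub client. The repository id is an assumption based on Tencent Hunyuan's usual naming and should be checked against the official model card before use.

```python
# Minimal sketch: download the open-source Hunyuan Image 2.1 weights from Hugging Face.
# NOTE: the repo_id below is an assumption; verify the exact identifier on the model card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="tencent/HunyuanImage-2.1",   # assumed repository name
    local_dir="./hunyuan-image-2.1",
)
print(f"Weights downloaded to {local_dir}")
```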

Thanks to a larger-scale text-image alignment dataset, Hunyuan Image 2.1 has made significant advancements in complex semantic understanding and cross-domain generalization. It supports prompts of up to 1,000 tokens, allowing precise generation of scene details, character expressions, and actions, and enabling distinct descriptions and controls for multiple objects. Additionally, Hunyuan Image 2.1 can finely control text within images, ensuring that textual information integrates naturally with the visuals.

Highlight 1 of Hunyuan Image 2.1: Strong capability in understanding complex semantics, supporting distinct descriptions and precise generation of multiple subjects.

Highlight 2 of Hunyuan Image 2.1: More stable control over text and scene details within images.

Tencent’s Hunyuan Image Model 2.1 is at the SOTA level among open-source models.

According to the evaluation results from SSAE (Structured Semantic Alignment Evaluation), Tencent’s Hunyuan Image Model 2.1 has achieved optimal performance in semantic alignment among open-source models and is very close to the performance of closed-source commercial models (GPT-Image).

Meanwhile, the GSB (Good Same Bad) evaluation results indicate that the image generation quality of Hunyuan Image 2.1 is comparable to the closed-source commercial model Seedream 3.0, while slightly outperforming similar open-source models like Qwen-Image.

The Hunyuan Image 2.1 model not only utilizes massive training data but also employs structured, variable-length, and diverse content captions, significantly enhancing its understanding of textual descriptions. The caption model incorporates OCR and IP RAG expert models, effectively improving its handling of complex text recognition and world knowledge.

To greatly reduce computational load and improve training and inference efficiency, the model employs a VAE with a 32-fold ultra-high compression ratio and uses DINOv2 alignment and REPA loss to ease training. As a result, the model can efficiently generate native 2K images.
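
To make the 32-fold figure concrete, the short calculation below shows the latent grid a 2K image reduces to, assuming the ratio refers to per-axis spatial downsampling as in typical latent diffusion VAEs (an assumption, since the article does not spell this out).

```python
# Rough check of what a 32x-compression VAE implies for native 2K generation.
# Assumption: 32x is the per-axis spatial downsampling factor.
image_h = image_w = 2048              # native 2K output resolution
compression = 32                      # stated VAE compression ratio

latent_h, latent_w = image_h // compression, image_w // compression
print(f"Latent grid: {latent_h} x {latent_w} = {latent_h * latent_w} positions")
# 64 x 64 = 4,096 positions for the DiT to process, versus 256 x 256 = 65,536
# at the 8x compression common in earlier latent diffusion models.
```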

In terms of text encoding, Hunyuan Image 2.1 is equipped with dual text encoders: an MLLM module that further strengthens text-image alignment, and a ByT5 model that boosts text-rendering expressiveness. The overall architecture is a 17B-parameter single- and dual-stream DiT model.
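
The sketch below illustrates, in generic terms, how two text encoders can jointly condition a DiT backbone. All names here (mllm_encoder, byt5_encoder, dit) are hypothetical stand-ins, not the released API; the intent is only to show the dual-encoder conditioning described above.

```python
# Conceptual sketch (assumed interfaces, not Tencent's released code):
# combine an MLLM encoder (semantic alignment) with a ByT5 encoder
# (glyph-level text rendering) to condition a DiT denoiser.
import torch

def encode_prompt(prompt, mllm_encoder, byt5_encoder):
    semantic = mllm_encoder(prompt)     # text-image alignment features
    glyph = byt5_encoder(prompt)        # character-level features for in-image text
    return torch.cat([semantic, glyph], dim=1)

def denoise_step(dit, noisy_latent, timestep, prompt_embeds):
    # The DiT predicts the update for the noisy latent, conditioned on both streams.
    return dit(noisy_latent, timestep, encoder_hidden_states=prompt_embeds)
```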

Additionally, Hunyuan Image 2.1 addresses the training stability issues of mean-flow models (MeanFlow) at the 17B-parameter scale, reducing the number of inference steps from 100 to 8 and significantly improving inference speed while preserving the model's original quality.
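
For intuition on why the step reduction matters, here is a generic fixed-step flow sampler. It is not the MeanFlow method itself, and velocity_model is a hypothetical stand-in for the 17B DiT, but it shows the loop whose length the 100-to-8 reduction shortens, and hence where inference cost is cut.

```python
# Illustrative only: a plain Euler sampler over a learned velocity field.
# This is NOT Tencent's MeanFlow implementation; it only shows the step loop
# that the 100 -> 8 reduction shortens.
import torch

def sample(velocity_model, latent_shape, num_steps=8, device="cpu"):
    x = torch.randn(latent_shape, device=device)   # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((latent_shape[0],), i * dt, device=device)
        v = velocity_model(x, t)                   # predicted velocity field
        x = x + dt * v                             # one Euler integration step
    return x                                       # decode with the VAE afterwards
```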

The simultaneously open-sourced Hunyuan text rewriting model (PromptEnhancer) is the industry’s first systematic, industrial-grade Chinese-English rewriting model, capable of structurally optimizing user text instructions and enriching visual expressions, greatly enhancing the semantic representation of images generated from the rewritten text.

Tencent Hunyuan continues to delve into the field of image generation, having previously released the first open-source Chinese native DiT architecture image model—Hunyuan DiT—and the industry’s first commercial-grade real-time image model—Hunyuan Image 2.0. The newly launched native 2K model Hunyuan Image 2.1 strikes a better balance between effect and performance, meeting the diverse needs of users and enterprises in various visual scenarios.

At the same time, Tencent Hunyuan is firmly embracing open source, gradually releasing language models of various sizes, complete multimodal generation capabilities, and toolset plugins for images, videos, and 3D, providing an open-source foundation close to commercial-model performance. The total number of image and video derivative models has reached 3,000, and the Hunyuan 3D series has exceeded 2.3 million community downloads, making it the most popular open-source 3D model family globally.


