The Tencent Hunyuan team recently announced its latest breakthrough in the field of AI painting, significantly enhancing the quality of model-generated images and their alignment with human preferences through an optimized fine-tuning paradigm. The new approach converges after just 10 minutes of training on 32 H20 GPUs and improves human evaluation scores by up to 300%, attracting widespread attention in the industry.
Challenges and Breakthroughs in AI Painting Fine-Tuning
Although diffusion models have made remarkable progress in image generation, they still face two major challenges. First, existing optimization methods can typically only optimize the final steps of the denoising process, which encourages "reward hacking," where the model exploits the reward model to earn high scores while actually producing lower-quality images. Second, achieving a desired aesthetic effect usually requires offline retraining or adjustment of the reward model, which limits flexibility. To address these issues, the Tencent Hunyuan team has proposed two key methods: Direct-Align and Semantic Relative Preference Optimization (SRPO).
Direct-Align: Optimizing Across the Entire Diffusion Trajectory
The core of the Direct-Align method is the pre-injection of a known noise, which allows the original image to be recovered from any time step. This avoids the gradient explosion that traditional methods encounter at early time steps, enabling the model to optimize across the entire diffusion trajectory rather than only its final steps. Experimental results indicate that even at a very early stage of denoising, with only about 5% of the process completed, Direct-Align can already recover a rough structure of the image. This capability greatly reduces the likelihood of reward hacking and enhances the overall quality of the generated images.
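To make the recovery step concrete, here is a minimal sketch, assuming a standard interpolation-style forward process x_t = α_t·x_0 + σ_t·ε. Because the noise ε is injected by the trainer and therefore known, the clean latent can be recovered analytically from any time step. The tensor shapes, coefficients, and function name below are illustrative assumptions, not the team's actual implementation:

```python
import torch

def direct_align_recover(x_t, eps_injected, alpha_t, sigma_t):
    # Assumed forward process: x_t = alpha_t * x_0 + sigma_t * eps.
    # Since eps was pre-injected and is known, x_0 can be recovered
    # analytically from any timestep in a single step.
    return (x_t - sigma_t * eps_injected) / alpha_t

# Toy demonstration with random tensors standing in for image latents.
x_0 = torch.randn(1, 4, 64, 64)         # "clean" latent
eps = torch.randn_like(x_0)             # pre-injected, known noise
alpha_t, sigma_t = 0.30, 0.95           # an early, very noisy timestep
x_t = alpha_t * x_0 + sigma_t * eps     # noised latent

x_0_hat = direct_align_recover(x_t, eps, alpha_t, sigma_t)
print(torch.allclose(x_0, x_0_hat, atol=1e-5))  # True: exact recovery
```

In training, the diffusion model would run only a few denoising steps from such an early timestep before this one-step jump back to the image, so the reward gradient can reach the whole trajectory without unrolling every step.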
SRPO: Making Reward Signals Smarter
SRPO (Semantic Relative Preference Optimization) is the other highlight of this update. Traditional approaches often combine multiple reward models to balance different preferences, but the Hunyuan team found that this does not truly align the optimization direction. SRPO instead redefines the reward as a text-conditioned signal: it applies a positive and a negative prompt to the same image and uses the relative difference between the two scores as the optimization target. This allows the reward to be adjusted online, without additional data, to flexibly match different requirements. For example, adding a control word such as "Realistic photo" improves the realism of generated images by roughly 3.7 times and their aesthetic quality by about 3.1 times. SRPO can also apply various style adjustments, such as brightness control and comic-style rendering, through simple prompts, significantly expanding the model's range of applications.
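A minimal sketch of this relative-reward idea follows, assuming a generic text-conditioned scorer. The function names, the dummy reward model, and the negative control word are placeholders for illustration, not Hunyuan's actual models or vocabulary:

```python
import torch

# Hypothetical stand-in for a text-conditioned reward model (e.g. an
# HPS/CLIP-style scorer); the real setup would plug in an actual model.
def dummy_reward_model(image: torch.Tensor, text: str) -> torch.Tensor:
    return image.mean() * (1.0 + 0.01 * len(text))

def srpo_relative_reward(image, base_prompt, reward_model,
                         positive_word="Realistic photo",
                         negative_word="CG rendering"):
    # Score the SAME image under positively and negatively augmented
    # prompts; the difference keeps only the text-dependent part of the
    # reward, so image-level biases that invite reward hacking cancel out.
    r_pos = reward_model(image, f"{positive_word}. {base_prompt}")
    r_neg = reward_model(image, f"{negative_word}. {base_prompt}")
    return r_pos - r_neg

image = torch.randn(1, 3, 512, 512, requires_grad=True)
loss = -srpo_relative_reward(image, "a portrait of an astronaut",
                             dummy_reward_model)
loss.backward()  # the reward gradient flows back through the generated image
```

Because the adjustment lives entirely in the prompt pair, swapping in a different control word (brightness, a comic style, and so on) changes the optimization target online, with no retraining of the reward model.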
Experimental Results and Future Outlook
In experiments on the FLUX.1-dev model, SRPO achieved the best results across multiple evaluation metrics, covering both automated and human assessments. On the HPDv2 benchmark, SRPO raised the excellence rates for realism and aesthetic quality to 38.9% and 40.5%, respectively, with an overall preference rate of 29.4%. Notably, after just 10 minutes of SRPO training, FLUX.1-dev's performance on HPDv2 already surpassed that of the latest open-source release, FLUX.1 Krea. This result showcases the Tencent Hunyuan team's technical strength in AI painting and offers new directions for the development of AI painting technology.
This breakthrough not only enhances the quality of AI-generated images but also opens broader possibilities for applying AI to artistic creation. Do you think this semantic relative preference optimization approach will become a mainstream trend in future AI painting models?