UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing
Yiheng Li and 4 other authors
Abstract: Human pose plays a crucial role in the digital age. While recent works have achieved impressive progress in understanding and generating human poses, they often support only a single modality of control signals and operate in isolation, limiting their application in real-world scenarios. This paper presents UniPose, a framework employing Large Language Models (LLMs) to comprehend, generate, and edit human poses across various modalities, including images, text, and 3D SMPL poses. Specifically, we apply a pose tokenizer to convert 3D poses into discrete pose tokens, enabling seamless integration into the LLM within a unified vocabulary. To further enhance fine-grained pose perception, we equip UniPose with a mixture of visual encoders, among them a pose-specific visual encoder. Benefiting from a unified learning strategy, UniPose effectively transfers knowledge across different pose-relevant tasks, adapts to unseen tasks, and exhibits extended capabilities. This work serves as the first attempt at building a general-purpose framework for pose comprehension, generation, and editing. Extensive experiments highlight UniPose’s competitive and even superior performance across various pose-relevant tasks.
Submission history
From: YiHeng Li
[v1] Mon, 25 Nov 2024 08:06:30 UTC (4,896 KB)
[v2] Sat, 29 Mar 2025 03:35:20 UTC (5,210 KB)