EmoGene: Audio-Driven Emotional 3D Talking-Head Generation
Wenqing Wang and Yun Fu
Abstract: Audio-driven talking-head generation is a crucial and useful technology for virtual human interaction and filmmaking. While recent advances have focused on improving image fidelity and lip synchronization, generating accurate emotional expressions remains underexplored. In this paper, we introduce EmoGene, a novel framework for synthesizing high-fidelity, audio-driven video portraits with accurate emotional expressions. Our approach employs a variational autoencoder (VAE)-based audio-to-motion module to generate facial landmarks, which are concatenated with an emotion embedding in a motion-to-emotion module to produce emotional landmarks. These landmarks drive a Neural Radiance Fields (NeRF)-based emotion-to-video module to render realistic emotional talking-head videos. Additionally, we propose a pose sampling method to generate natural idle-state (non-speaking) videos for silent audio inputs. Extensive experiments demonstrate that EmoGene outperforms previous methods in generating high-fidelity emotional talking-head videos.
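The abstract describes a three-stage pipeline: a VAE maps audio features to neutral facial landmarks, the landmarks are concatenated with an emotion embedding to obtain emotional landmarks, and a NeRF-based renderer produces the final frames. The sketch below illustrates only the data flow of the first two stages under assumed module names, feature dimensions, and landmark counts; it is not the authors' implementation, and the NeRF renderer is left as a comment.

```python
# Hypothetical sketch of the audio-to-motion and motion-to-emotion stages.
# All module names, dimensions, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class AudioToMotionVAE(nn.Module):
    """VAE-style audio-to-motion module: audio features -> neutral 3D landmarks."""
    def __init__(self, audio_dim=80, latent_dim=64, n_landmarks=68):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_landmarks * 3))

    def forward(self, audio_feat):
        h = self.encoder(audio_feat)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a latent motion code
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z)  # (B, n_landmarks * 3) neutral landmarks

class MotionToEmotion(nn.Module):
    """Concatenates landmarks with an emotion embedding to produce emotional landmarks."""
    def __init__(self, n_landmarks=68, emo_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_landmarks * 3 + emo_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_landmarks * 3))

    def forward(self, landmarks, emo_embedding):
        return self.net(torch.cat([landmarks, emo_embedding], dim=-1))

# Forward pass for a single frame (shapes are assumptions):
audio_feat = torch.randn(1, 80)      # e.g. one mel-spectrogram frame
emo_embedding = torch.randn(1, 16)   # learned embedding for a target emotion
neutral_lm = AudioToMotionVAE()(audio_feat)
emotional_lm = MotionToEmotion()(neutral_lm, emo_embedding)
# emotional_lm would then condition the NeRF-based emotion-to-video module
# to render the corresponding video frame.
```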
Submission history
From: Wenqing Wang
[v1] Mon, 7 Oct 2024 08:23:05 UTC (4,391 KB)
[v2] Thu, 1 May 2025 21:31:16 UTC (3,782 KB)