Audio-driven talking-head synthesis is a popular research topic for virtual-human applications. However, existing methods are inflexible and inefficient: transferring emotions from guidance videos to talking-head predictions requires expensive end-to-end training.
In this work, we propose the Emotional Adaptation for Audio-driven Talking-head (EAT) method, which transforms emotion-agnostic talking-head models into emotion-controllable ones in a cost-effective and efficient manner through parameter-efficient adaptations. Our approach utilizes a pretrained emotion-agnostic talking-head transformer and introduces three lightweight adaptations (the Deep Emotional Prompts, Emotional Deformation Network, and Emotional Adaptation Module) from different perspectives to enable precise and realistic emotion controls.
Our experiments demonstrate that our approach achieves state-of-the-art performance on widely-used benchmarks, including LRW and MEAD. Additionally, our parameter-efficient adaptations exhibit remarkable generalization ability, even in scenarios where emotional training videos are scarce or nonexistent.
Please refer to our paper for more details. An illustrative sketch of the adaptation scheme is shown below.
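To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of parameter-efficient adaptation in this style: the emotion-agnostic backbone stays frozen while small trainable modules (prompt tokens, a deformation head, a residual adaptation MLP) inject emotion. All class, attribute, and shape choices below are illustrative assumptions, not the released EAT code; the backbone is assumed to be any transformer mapping (B, T, dim) features to (B, T, dim) features.

```python
import torch
import torch.nn as nn


class EmotionalAdapter(nn.Module):
    """Illustrative lightweight adaptation on top of a frozen talking-head transformer.

    The three trainable pieces only mirror the paper's adaptations at a high level
    (deep emotional prompts, an emotional deformation head, an adaptation MLP);
    this is NOT the official EAT implementation.
    """

    def __init__(self, backbone: nn.Module, dim: int = 256,
                 num_prompts: int = 4, num_emotions: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():          # freeze the emotion-agnostic backbone
            p.requires_grad = False

        self.emotion_embed = nn.Embedding(num_emotions, dim)             # per-emotion latent
        self.deep_prompts = nn.Parameter(torch.zeros(num_prompts, dim))  # "deep emotional prompts"
        self.deform_head = nn.Sequential(                                # coarse emotional deformation
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))
        self.adapt = nn.Sequential(                                      # emotional adaptation module
            nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, dim))

    def forward(self, audio_feats: torch.Tensor, emotion_id: torch.Tensor):
        # audio_feats: (B, T, dim) emotion-agnostic audio features; emotion_id: (B,)
        emo = self.emotion_embed(emotion_id)                              # (B, dim)
        prompts = self.deep_prompts.unsqueeze(0).expand(audio_feats.size(0), -1, -1)
        tokens = torch.cat([prompts + emo.unsqueeze(1), audio_feats], dim=1)
        feats = self.backbone(tokens)                                     # frozen transformer
        feats = feats + self.adapt(emo).unsqueeze(1)                      # emotion-conditioned residual
        deformation = self.deform_head(feats)                             # per-token 3D offsets
        return feats, deformation
```

Only the emotion embedding, prompts, deformation head, and adaptation MLP receive gradients, which is what keeps the adaptation cost small relative to retraining the whole talking-head model.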
We compare EAT with recent works in emotional talking-head generation.
EAT can generate talking heads with eight kinds of emotions.
EAT can also generate talking heads by interpolating the latent embeddings of different emotions. Use the slider here to linearly interpolate between the left emotion and the right emotion (e.g., from Contempt to Sad); a minimal sketch of the blending follows below.
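Under the hood, this slider amounts to a linear blend of two emotion latents before decoding. A hedged sketch, reusing the hypothetical `EmotionalAdapter` and its `emotion_embed` table from above (not the released EAT API):

```python
import torch


def blend_emotions(adapter, left_id: int, right_id: int, alpha: float) -> torch.Tensor:
    """Linearly interpolate between two emotion latents (alpha=0 -> left, alpha=1 -> right).

    `adapter.emotion_embed` is the illustrative embedding table from the sketch above.
    """
    left = adapter.emotion_embed.weight[left_id]
    right = adapter.emotion_embed.weight[right_id]
    return (1.0 - alpha) * left + alpha * right
```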
Using EAT and a pre-trained CLIP model, you can edit the expression of a talking-head video with a text prompt.
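The actual pipeline compares rendered frames to the text prompt with a CLIP loss; as a self-contained stand-in, the sketch below only shows the text side, embedding a prompt with the real openai CLIP API and scoring it against per-emotion text anchors to weight the emotion latents. The emotion name list and everything outside the CLIP calls are assumptions for illustration, not the paper's method.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

# Assumed emotion labels (MEAD-style); not necessarily the order used by EAT.
EMOTIONS = ["neutral", "angry", "contempt", "disgusted",
            "fear", "happy", "sad", "surprised"]


@torch.no_grad()
def text_to_emotion_weights(prompt: str, device: str = "cpu") -> torch.Tensor:
    """Score each supported emotion against a free-form prompt via CLIP text embeddings."""
    model, _ = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([prompt] + [f"a {e} face" for e in EMOTIONS]).to(device)
    feats = model.encode_text(text).float()
    feats = feats / feats.norm(dim=-1, keepdim=True)   # unit vectors for cosine similarity
    sims = feats[1:] @ feats[0]                         # similarity of each emotion anchor to the prompt
    return torch.softmax(sims * 10.0, dim=0)            # soft weights over the eight emotions


# Example (hypothetical): blend latents from the EmotionalAdapter sketch above.
# weights = text_to_emotion_weights("a talking head that looks delighted")
# emotion_latent = weights @ adapter.emotion_embed.weight
```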
@InProceedings{Gan_2023_ICCV,
author = {Gan, Yuan and Yang, Zongxin and Yue, Xihang and Sun, Lingyun and Yang, Yi},
title = {Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {22634-22645}
}