Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation

(EAT)

Yuan Gan1,2, Zongxin Yang1,2, Xihang Yue1,2, Lingyun Sun2, Yi Yang1,2
1ReLER, CCAI, Zhejiang University
2College of Computer Science and Technology, Zhejiang University

EAT generates emotional talking-head videos from input audio, a head pose sequence, and a source image.

Abstract

Audio-driven talking-head synthesis is a popular research topic for virtual human-related applications. However, existing methods are limited by their inflexibility and inefficiency, since they require expensive end-to-end training to transfer emotions from guidance videos to the talking-head predictions.

In this work, we propose the Emotional Adaptation for Audio-driven Talking-head (EAT) method, which transforms emotion-agnostic talking-head models into emotion-controllable ones in a cost-effective and efficient manner through parameter-efficient adaptations. Our approach utilizes a pretrained emotion-agnostic talking-head transformer and introduces three lightweight adaptations (the Deep Emotional Prompts, Emotional Deformation Network, and Emotional Adaptation Module) from different perspectives to enable precise and realistic emotion controls.
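As a rough illustration of the parameter-efficient idea (not the released implementation), the sketch below freezes a pretrained emotion-agnostic backbone and trains only small emotional modules attached to it. The module and argument names (the backbone call signature, prompt projection, EDN, EAM, and feature sizes) are hypothetical placeholders.

    import torch
    import torch.nn as nn

    class EmotionalAdapters(nn.Module):
        """Sketch: frozen emotion-agnostic backbone plus lightweight emotional modules."""

        def __init__(self, backbone: nn.Module, emo_dim: int = 128,
                     feat_dim: int = 256, num_kp: int = 15):
            super().__init__()
            self.backbone = backbone
            for p in self.backbone.parameters():      # freeze the pretrained weights
                p.requires_grad = False
            self.num_kp = num_kp

            # Deep emotional prompt: a learned token derived from the emotion embedding
            self.prompt_proj = nn.Linear(emo_dim, feat_dim)
            # Emotional Deformation Network (EDN): emotion-specific residual deformation
            self.edn = nn.Sequential(nn.Linear(emo_dim, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, num_kp * 3))
            # Emotional Adaptation Module (EAM): channel-wise feature modulation
            self.eam_scale = nn.Linear(emo_dim, feat_dim)
            self.eam_shift = nn.Linear(emo_dim, feat_dim)

        def forward(self, audio_feat, emo_embed):
            # audio_feat: (B, feat_dim) per-frame audio feature, emo_embed: (B, emo_dim)
            prompt = self.prompt_proj(emo_embed)
            feat = self.backbone(audio_feat, prompt)   # frozen transformer (assumed signature)
            feat = feat * (1 + self.eam_scale(emo_embed)) + self.eam_shift(emo_embed)
            deform = self.edn(emo_embed).view(-1, self.num_kp, 3)  # residual 3D deformation
            return feat, deform

Only the adapter parameters would be handed to the optimizer, e.g. torch.optim.Adam(p for p in model.parameters() if p.requires_grad), which is what keeps the adaptation cheap compared with end-to-end retraining.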

Our experiments demonstrate that our approach achieves state-of-the-art performance on widely-used benchmarks, including LRW and MEAD. Additionally, our parameter-efficient adaptations exhibit remarkable generalization ability, even in scenarios where emotional training videos are scarce or nonexistent.

Overview

Overview of EAT
Overview of our EAT. (a) The Audio-to-Expression Transformer (A2ET) maps the latent source-image representation, source audio, and head-pose sequences to 3D expression deformations. (b) Emotional guidance is injected into the A2ET, the Emotional Deformation Network (EDN), and the Emotional Adaptation Module (EAM) for emotional talking-head generation, as indicated by the dashed lines. (c) The RePos-Net takes the 3D source keypoints and driven keypoints and generates the output frames.

Please refer to our paper for more details.
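For a concrete picture of the data flow in the figure above, here is a minimal, hypothetical sketch; kp_detector, a2et, edn, and repos_net are placeholder callables, and the tensor shapes are assumptions rather than the released interface.

    import torch

    def generate_emotional_talking_head(src_img, audio_feats, head_poses, emo_embed,
                                        kp_detector, a2et, edn, repos_net):
        # 1) Canonical 3D keypoints from the source image.
        src_kp = kp_detector(src_img)                                   # (B, K, 3)

        # 2) A2ET maps audio and head pose (with deep emotional prompts inside)
        #    to per-frame 3D expression deformations.
        expr_deform = a2et(src_kp, audio_feats, head_poses, emo_embed)  # (B, T, K, 3)

        # 3) EDN contributes an emotion-specific deformation residual.
        expr_deform = expr_deform + edn(emo_embed).unsqueeze(1)         # broadcast over time

        # 4) Driven keypoints = source keypoints + predicted deformations.
        driven_kp = src_kp.unsqueeze(1) + expr_deform                   # (B, T, K, 3)

        # 5) RePos-Net renders each frame from the source and driven keypoints.
        frames = [repos_net(src_img, src_kp, driven_kp[:, t])
                  for t in range(driven_kp.shape[1])]
        return torch.stack(frames, dim=1)                               # (B, T, C, H, W)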

Quality Comparison

We compare EAT with recent works in emotional talking-head generation.

Eight Emotional Results

EAT can generate talking-head videos with eight different emotions.

Interpolating States

EAT can also generate talking-head videos by interpolating the latent emotion embedding between different emotions. Use the slider to linearly interpolate between the left and right emotions.
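The interpolation itself is just a linear blend of two latent emotion embeddings; a minimal sketch (the embedding names are hypothetical):

    import torch

    def interpolate_emotion(emo_left: torch.Tensor, emo_right: torch.Tensor,
                            alpha: float) -> torch.Tensor:
        # alpha = 0.0 returns the left emotion, alpha = 1.0 the right emotion.
        return (1.0 - alpha) * emo_left + alpha * emo_right

    # e.g. a half-way blend between the 'contempt' and 'sad' embeddings:
    # emo_mixed = interpolate_emotion(emo_contempt, emo_sad, alpha=0.5)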

Interpolation example: Contempt (left) to Sad (right).


Zero-Shot Editing

Using EAT together with a pre-trained CLIP model, you can edit the expression of a talking-head video with a text description.
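One way to realize such text-driven editing, sketched below under the assumption that the generator is kept frozen and only the latent emotion embedding is optimized, is to minimize a CLIP loss between the rendered frames and the text prompt. The generator call and variable names are placeholders; only the CLIP calls (clip.tokenize, encode_text, encode_image) follow the public OpenAI CLIP API.

    import torch
    import clip  # OpenAI CLIP: https://github.com/openai/CLIP

    def clip_text_loss(frames, text, clip_model, device="cuda"):
        # frames: generated images already resized/normalized for CLIP, (B, 3, 224, 224)
        tokens = clip.tokenize([text]).to(device)
        text_feat = clip_model.encode_text(tokens)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        img_feat = clip_model.encode_image(frames)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

        # 1 - cosine similarity between the frames and the text prompt
        return 1.0 - (img_feat @ text_feat.T).mean()

    # Hypothetical optimization loop (generator stands in for the frozen talking-head model):
    # clip_model, _ = clip.load("ViT-B/32", device="cuda")
    # emo_embed = emo_embed.detach().requires_grad_(True)
    # optimizer = torch.optim.Adam([emo_embed], lr=1e-2)
    # for step in range(200):
    #     frames = generator(src_img, audio_feats, head_poses, emo_embed)
    #     loss = clip_text_loss(frames, "a talking face with a happy expression", clip_model)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()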

BibTeX

@InProceedings{Gan_2023_ICCV,
    author    = {Gan, Yuan and Yang, Zongxin and Yue, Xihang and Sun, Lingyun and Yang, Yi},
    title     = {Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22634-22645}
}