Awesome Face Reenactment

Survey

  • The Creation and Detection of Deepfakes: A Survey (arXiv 2020) [paper]
  • DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection (arXiv 2020) [paper]
  • A Review on Face Reenactment Techniques (I4Tech 2020) [paper]
  • What comprises a good talking-head video generation?: A Survey and Benchmark (arXiv 2020) [paper]
  • Deep Audio-Visual Learning: A Survey (arXiv 2020) [paper]

Papers

2021

  • AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis (ICCV, 2021) [paper]
  • PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering (ICCV, 2021) [paper]
  • Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion (IJCAI, 2021) [paper]
  • Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset (CVPR, 2021) [paper]
  • Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR, 2021) [paper] [code]
  • Audio-Driven Emotional Video Portraits (CVPR, 2021) [paper] [code]
  • Everything's Talkin': Pareidolia Face Reenactment (CVPR, 2021) [paper]
  • LI-Net: Large-Pose Identity-Preserving Face Reenactment Network (ICME, 2021) [paper]
  • One-shot Face Reenactment Using Appearance Adaptive Normalization (AAAI, 2021) [paper]
  • APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment (ICASSP, 2021) [paper] [code]

2020

  • MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation (ECCV, 2020) [paper] [code]
  • One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing (arXiv, 2020) [paper]
  • FACEGAN: Facial Attribute Controllable rEenactment GAN (WACV, 2020) [paper]
  • LandmarkGAN: Synthesizing Faces from Landmarks (arXiv, 2020) [paper]
  • Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars (ECCV, 2020) [paper] [code]
  • A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild (MM, 2020) [paper] [code]
  • Mesh Guided One-shot Face Reenactment using Graph Convolutional Networks (MM, 2020) [paper]
  • Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning (IJCAI, 2020) [paper]
  • APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals (ICASSP, 2020) [paper] [code]
  • MakeItTalk: Speaker-Aware Talking Head Animation (SIGGRAPH Asia, 2020) [paper] [code]
  • Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment (CVPR, 2020) [paper]
  • ReenactNet: Real-time Full Head Reenactment (arXiv, 2020) [paper]
  • FReeNet: Multi-Identity Face Reenactment (CVPR, 2020) [paper] [code]
  • FaR-GAN for One-Shot Face Reenactment (CVPRW, 2020) [paper]
  • One-Shot Identity-Preserving Portrait Reenactment (2020) [paper]
  • Neural Head Reenactment with Latent Pose Descriptors (CVPR, 2020) [paper] [code]
  • ActGAN: Flexible and Efficient One-shot Face Reenactment (IWBF, 2020) [paper]
  • Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose (AAAI, 2020) [paper]
  • First Order Motion Model for Image Animation (NeurIPS, 2019) [paper] [code]
  • Everybody’s Talkin’: Let Me Talk as You Want (arXiv, 2020) [paper]

2019

  • FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis (AAAI, 2019) [paper]
  • MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets (AAAI, 2019) [paper]
  • Talking Face Generation by Adversarially Disentangled Audio-Visual Representation (AAAI, 2019) [paper]
  • Any-to-one Face Reenactment Based on Conditional Generative Adversarial Network (APSIPA, 2019) [paper]
  • Towards Automatic Face-to-Face Translation (MM, 2019) [paper] [code]
  • Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (ICCV, 2019) [paper]
  • Make a Face: Towards Arbitrary High Fidelity Face Manipulation (ICCV, 2019) [paper]
  • One-shot Face Reenactment (BMVC, 2019) [paper] [code]
  • Deferred neural rendering: image synthesis using neural textures (TOG, 2019) [paper]
  • Learning the Face Behind a Voice (CVPR, 2019) [paper] [code]
  • Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss (CVPR, 2019) [paper] [code]
  • Animating Arbitrary Objects via Deep Motion Transfer (CVPR, 2019) [paper] [code]
  • Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks (ICASSP, 2019) [paper] [code]
  • Face Reconstruction from Voice using Generative Adversarial Networks (NeurIPS, 2019) [paper]

2018

  • GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV, 2018) [paper] [code]
  • ReenactGAN: Learning to Reenact Faces via Boundary Transfer (ECCV, 2018) [paper] [code]
  • Lip movements generation at a glance (ECCV, 2018) [paper]
  • Deep Video Portraits (SIGGRAPH, 2018) [paper]
  • X2Face: A Network for Controlling Face Generation Using Images, Audio, and Pose Codes (ECCV, 2018) [paper] [code]

2017

  • Synthesizing Obama: learning lip sync from audio (TOG, 2017) [paper]
  • You said that? (BMVC, 2017) [paper] [code]

2016

  • Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR, 2016) [paper]

Feel free to contact me if you find an interesting paper that is missing.

Acknowledgements

This list uses the template from Awesome Incremental Learning / Lifelong Learning.