EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
- Host: GitHub
- URL: https://github.com/antgroup/echomimic
- Owner: antgroup
- License: apache-2.0
- Created: 2024-07-03T04:06:35.000Z
- Default Branch: main
- Last Pushed: 2024-11-21T06:13:58.000Z
- Last Synced: 2024-11-26T07:02:37.474Z
- Topics: audio-driven-portrait-animations, audio-driven-talking-face, human-animation, talking-face-generation, talking-head
- Language: Python
- Homepage: https://antgroup.github.io/ai/echomimic/
- Size: 21.7 MB
- Stars: 3,050
- Watchers: 46
- Forks: 356
- Open Issues: 94
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ai-game-devtools - EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. [arXiv](https://arxiv.org/abs/2407.08136) (Avatar / Tool (AI LLM))
README
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning
*Equal Contribution.
Terminal Technology Department, Alipay, Ant Group.
## EchoMimic Series
* EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. [GitHub](https://github.com/antgroup/echomimic)
* EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. [GitHub](https://github.com/antgroup/echomimic_v2)

## Updates
* [2024.12.10] EchoMimic is accepted by AAAI 2025.
* [2024.11.21] We release our [EchoMimicV2](https://github.com/antgroup/echomimic_v2) codes and models.
* [2024.08.02] EchoMimic is now available on [huggingface](https://huggingface.co/spaces/BadToBest/EchoMimic) with an A100 GPU. Thanks to Wenmeng Zhou@ModelScope.
* [2024.07.25] Accelerated models and pipelines for **Audio Driven** are released. The inference speed is improved by **10x** (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
* [2024.07.23] EchoMimic gradio demo on [modelscope](https://www.modelscope.cn/studios/BadToBest/BadToBest) is ready.
* [2024.07.23] EchoMimic gradio demo on [huggingface](https://huggingface.co/spaces/fffiloni/EchoMimic) is ready. Thanks to Sylvain Filoni@fffiloni.
* [2024.07.17] Accelerated models and pipelines for **Audio + Selected Landmarks** are released. The inference speed is improved by **10x** (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
* [2024.07.14] [ComfyUI](https://github.com/smthemex/ComfyUI_EchoMimic) is now available. Thanks to @smthemex for the contribution.
* [2024.07.13] Thanks to [NewGenAI](https://www.youtube.com/@StableAIHub) for the [video installation tutorial](https://www.youtube.com/watch?v=8R0lTIY7tfI).
* [2024.07.13] We release our pose & audio driven codes and models.
* [2024.07.12] WebUI and GradioUI versions are released. We thank @greengerong, @Robin021 and @O-O1024 for their contributions.
* [2024.07.12] Our [paper](https://arxiv.org/abs/2407.08136) is now public on arXiv.
* [2024.07.09] We release our audio driven codes and models.

## Gallery
### Audio Driven (Sing)
### Audio Driven (English)
### Audio Driven (Chinese)
### Landmark Driven
### Audio + Selected Landmark Driven
**(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)**
## Installation
### Download the Codes
```bash
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
```

### Python Environment Setup
- Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100(80G) / RTX4090D (24G) / V100(16G)
- Tested Python Version: 3.8 / 3.10 / 3.11

Create conda environment (Recommended):
```bash
conda create -n echomimic python=3.8
conda activate echomimic
```

Install packages with `pip`:
```bash
pip install -r requirements.txt
```
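Before moving on, it can help to confirm the environment actually sees a GPU. This is an unofficial sanity check, not part of the repository's instructions:

```python
# sanity_check.py -- unofficial: verify the torch installed by requirements.txt can see the GPU.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    # EchoMimic was tested on A100 / RTX 4090D / V100 class GPUs (see above).
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```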
### Download ffmpeg-static

Download and decompress [ffmpeg-static](https://www.johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.4-amd64-static.tar.xz), then set:
```bash
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
```
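As a quick, unofficial check that `FFMPEG_PATH` is set correctly (the exact way the repository consumes this variable may differ), you can verify the directory contains a runnable ffmpeg binary:

```python
# check_ffmpeg.py -- unofficial: confirm FFMPEG_PATH points at the decompressed static build.
import os
import subprocess

ffmpeg_dir = os.environ.get("FFMPEG_PATH")
if not ffmpeg_dir:
    raise SystemExit("FFMPEG_PATH is not set; export it as shown above.")

ffmpeg_bin = os.path.join(ffmpeg_dir, "ffmpeg")
if not os.path.isfile(ffmpeg_bin):
    raise SystemExit(f"No ffmpeg binary found at {ffmpeg_bin}")

# Print the version banner; it should report ffmpeg 4.4 for this static build.
subprocess.run([ffmpeg_bin, "-version"], check=True)
```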
### Download pretrained weights

```shell
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
```

The **pretrained_weights** directory is organized as follows:
```
./pretrained_weights/
βββ denoising_unet.pth
βββ reference_unet.pth
βββ motion_module.pth
βββ face_locator.pth
βββ sd-vae-ft-mse
β βββ ...
βββ sd-image-variations-diffusers
β βββ ...
βββ audio_processor
βββ whisper_tiny.pt
```

Here **denoising_unet.pth** / **reference_unet.pth** / **motion_module.pth** / **face_locator.pth** are the main checkpoints of **EchoMimic**. The other models can also be downloaded from their original hubs; thanks to the authors for their brilliant work:
- [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)
- [sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
- [audio_processor(whisper)](https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt)
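Before running inference, a quick check that the download is complete can save a failed run. The sketch below is unofficial and simply mirrors the directory tree shown above:

```python
# check_weights.py -- unofficial: verify the pretrained_weights layout matches the tree above.
from pathlib import Path

ROOT = Path("./pretrained_weights")
EXPECTED_FILES = [
    "denoising_unet.pth",
    "reference_unet.pth",
    "motion_module.pth",
    "face_locator.pth",
    "audio_processor/whisper_tiny.pt",
]
EXPECTED_DIRS = ["sd-vae-ft-mse", "sd-image-variations-diffusers"]

missing = [p for p in EXPECTED_FILES if not (ROOT / p).is_file()]
missing += [d + "/" for d in EXPECTED_DIRS if not (ROOT / d).is_dir()]

if missing:
    raise SystemExit("Missing from pretrained_weights: " + ", ".join(missing))
print("All expected checkpoints are in place.")
```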
### Audio-Driven Algo Inference

Run the python inference scripts:
```bash
python -u infer_audio2vid.py
python -u infer_audio2vid_pose.py
```

### Audio-Driven Algo Inference On Your Own Cases
Edit the inference config file **./configs/prompts/animation.yaml** and add your own case (a programmatic alternative is sketched below):
```yaml
test_cases:
"path/to/your/image":
- "path/to/your/audio"
```

Then run the python inference script:
```bash
python -u infer_audio2vid.py
```
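If you prefer to register cases programmatically instead of editing **animation.yaml** by hand, a sketch like the following works. It assumes PyYAML is available (commonly pulled in by the dependencies, but not guaranteed by this README); note that re-serializing the file drops any YAML comments:

```python
# add_case.py -- unofficial helper: append a test case to animation.yaml.
# Assumes PyYAML; install with `pip install pyyaml` if needed.
import yaml

CONFIG = "./configs/prompts/animation.yaml"

with open(CONFIG) as f:
    cfg = yaml.safe_load(f)

# Map one reference image to a list of driving audio files, as in the YAML above.
cfg.setdefault("test_cases", {})["path/to/your/image"] = ["path/to/your/audio"]

# Caution: this rewrites the whole file and discards comments.
with open(CONFIG, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```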
### Motion Alignment between Ref. Img. and Driven Vid.

(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)
Edit the `driver_video` and `ref_image` paths in demo_motion_sync.py (see the sketch below the command), then run:
```bash
python -u demo_motion_sync.py
```
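For reference, the edit amounts to pointing two variables at your own files; the names below come from the instruction above, though their exact form and location in demo_motion_sync.py may differ between versions:

```python
# In demo_motion_sync.py, set these to your own inputs before running:
driver_video = "path/to/your/driving_video.mp4"   # video whose head motion is transferred
ref_image = "path/to/your/reference_image.png"    # portrait image to be animated
```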
### Audio&Pose-Driven Algo Inference

Edit ./configs/prompts/animation_pose.yaml, then run:
```bash
python -u infer_audio2vid_pose.py
```

### Pose-Driven Algo Inference

Set `draw_mouse=True` in line 135 of infer_audio2vid_pose.py and edit ./configs/prompts/animation_pose.yaml, then run:
```bash
python -u infer_audio2vid_pose.py
```

### Run the Gradio UI
Thanks to the contribution from @Robin021:
```bash
python -u webgui.py --server_port=3000
```
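Once the server is up, the interface should be reachable at http://localhost:3000 in a browser (assuming the default host binding).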
## Release Plans
| Status | Milestone | ETA |
|:------:|:----------|:---:|
| ✅ | The inference source code of the Audio-Driven algo released on GitHub | 9th July, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese released | 9th July, 2024 |
| ✅ | The inference source code of the Pose-Driven algo released on GitHub | 13th July, 2024 |
| ✅ | Pretrained models with better pose control released | 13th July, 2024 |
| ✅ | Accelerated models released | 17th July, 2024 |
| 🚀 | Pretrained models with better singing performance | TBD |
| 🚀 | Large-scale and high-resolution Chinese-based talking head dataset | TBD |

## Disclaimer
This project is intended for academic research, and we explicitly disclaim any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.

## Acknowledgements
We would like to thank the contributors to the [AnimateDiff](https://github.com/guoyww/AnimateDiff), [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) and [MuseTalk](https://github.com/TMElyralab/MuseTalk) repositories, for their open research and exploration.
We are also grateful to [V-Express](https://github.com/tencent-ailab/V-Express) and [hallo](https://github.com/fudan-generative-vision/hallo) for their outstanding work in the area of diffusion-based talking heads.
If we have missed any open-source projects or related articles, we will update the acknowledgements of this work promptly.
## Citation
If you find our work useful for your research, please consider citing the paper:
```bibtex
@misc{chen2024echomimic,
title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
year={2024},
eprint={2407.08136},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```

## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=antgroup/echomimic&type=Date)](https://star-history.com/#antgroup/echomimic&Date)