https://github.com/maum-ai/faceshifter
Unofficial PyTorch Implementation for FaceShifter (https://arxiv.org/abs/1912.13457)
- Host: GitHub
- URL: https://github.com/maum-ai/faceshifter
- Owner: maum-ai
- License: bsd-3-clause
- Created: 2020-10-16T05:30:41.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2021-12-03T07:46:38.000Z (almost 3 years ago)
- Last Synced: 2023-10-19T23:48:39.374Z (about 1 year ago)
- Topics: face-swapping, pytorch, pytorch-lightning
- Language: Python
- Homepage:
- Size: 96.8 MB
- Stars: 556
- Watchers: 19
- Forks: 110
- Open Issues: 12
Metadata Files:
- Readme: README.md
- License: LICENSE
# FaceShifter — Unofficial PyTorch Implementation
![](./assets/teaser_v8.jpg)
![](./assets/deepfake_method_stage1_v8.png)
![issueBadge](https://img.shields.io/github/issues/mindslab-ai/faceshifter) ![starBadge](https://img.shields.io/github/stars/mindslab-ai/faceshifter) ![repoSize](https://img.shields.io/github/repo-size/mindslab-ai/faceshifter) ![lastCommit](https://img.shields.io/github/last-commit/mindslab-ai/faceshifter)

Unofficial Implementation of [FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping](https://arxiv.org/abs/1912.13457) with [Pytorch-Lightning](https://github.com/PyTorchLightning/pytorch-lightning).
The paper describes two networks in the full pipeline: AEI-Net and HEAR-Net. We implement only AEI-Net, the main network for face swapping.

### Take a look at [HifiFace](https://github.com/mindslab-ai/hififace), our implementation of a more recent face-swapping model.
## Datasets
### Preparing Data
You need to download and unzip:
- [FFHQ](https://github.com/NVlabs/ffhq-dataset)
- CelebA-HQ ([Unofficial Download Script](https://github.com/suvojit-0x55aa/celebA-HQ-dataset-download))
- VGGFace ([Unofficial Download Script](https://github.com/ndaidong/vgg-faces-utils))

### Preprocess Data
Preprocessing code is mainly based on [Nvidia's FFHQ preprocessing code](https://github.com/NVlabs/ffhq-dataset/blob/bb67086731d3bd70bc58ebee243880403726197a/download_ffhq.py#L259-L349).
You may extend our [preprocess](./preprocess) scripts with multiprocessing to finish the preprocessing step much faster.
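As an illustration of that parallelization, here is a minimal `multiprocessing` sketch. `align_and_crop` is a hypothetical stand-in for the repo's dlib-based per-image alignment step, and the paths are made up:

```python
from multiprocessing import Pool

def align_and_crop(path):
    # Placeholder for the real per-image work: detect landmarks with dlib,
    # align, crop, and save. Here it only maps the input path to an output path.
    return path.replace("/DATA/", "/RESULT/")

def preprocess_parallel(paths, workers=8):
    # Fan the per-image function out over a pool of worker processes.
    with Pool(workers) as pool:
        return pool.map(align_and_crop, paths)

if __name__ == "__main__":
    outputs = preprocess_parallel(["/DATA/a.png", "/DATA/b.png"], workers=2)
    print(outputs)  # ['/RESULT/a.png', '/RESULT/b.png']
```

`Pool.map` preserves input order, so outputs line up with the input file list.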
```bash
# build docker image from Dockerfile
docker build -t dlib:0.0 ./preprocess
# run docker container from image
docker run -itd --ipc host -v /PATH_TO_THIS_FOLDER/preprocess:/workspace -v /PATH_TO_THE_DATA:/DATA -v /PATH_TO_SAVE_DATASET:/RESULT --name dlib dlib:0.0
# attach
docker attach dlib
# preprocess with dlib
python preprocess.py --root /DATA --output_dir /RESULT
```

## Training
### Configuration
There are `yaml` files in the [`config`](./config) folder.
They **must** be edited to match your training requirements (dataset, metadata, etc.).

- [`config/train.yaml`](./config/train.yaml): Configs for training AEI-Net.
  - Fill in the blanks: `dataset_dir`, `valset_dir`
  - You may want to change `batch_size` for GPUs other than a 32GB V100, or `chkpt_dir` to save checkpoints on another disk.
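For orientation, a hypothetical fragment showing how these fields might look (values are made up; defer to the key names and structure in the shipped `config/train.yaml`):

```yaml
# Illustration only -- match the keys in the shipped config/train.yaml.
dataset_dir: /DATA/train   # preprocessed training images
valset_dir: /DATA/val      # preprocessed validation images
batch_size: 16             # reduce for GPUs smaller than a 32GB V100
chkpt_dir: chkpt           # where checkpoints are saved
log:
  log_dir: log             # Tensorboard log directory
```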
### Using Docker
We provide a Dockerfile for easier training environment setup.
```bash
docker build -t faceshifter:0.0 .
docker run -itd --ipc host --gpus all -v /PATH_TO_THIS_FOLDER:/workspace -v /PATH_TO_DATASET:/DATA --name FS faceshifter:0.0
docker attach FS
```

### Pre-trained Arcface
During training, a pre-trained [Arcface](https://openaccess.thecvf.com/content_CVPR_2019/html/Deng_ArcFace_Additive_Angular_Margin_Loss_for_Deep_Face_Recognition_CVPR_2019_paper.html)
model is required. We provide our pre-trained Arcface model; you can download it at [this link](https://drive.google.com/file/d/1TAb6WNfusbL2Iv3tfRCpMXimZE9tnSUn/view?usp=sharing).

### Command
To train the AEI-Net, run this command:

```bash
python aei_trainer.py -c <config_path> -g <gpu_ids> -n <run_name>
# example command that might help you understand the arguments:
# train from scratch with name "my_runname"
python aei_trainer.py -c config/train.yaml -g 0 -n my_runname
```

Optionally, you can resume training from a previously saved checkpoint by adding the `-p` argument.
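The pre-trained Arcface model above is used during training for the identity loss, which pushes the swapped face's identity embedding toward the source face's. A minimal sketch of that cosine-similarity loss, with made-up embedding vectors (the repo computes real embeddings with the Arcface network in PyTorch):

```python
import math

def identity_loss(z_src, z_swap):
    """1 - cosine similarity between two identity embeddings (lower is better)."""
    dot = sum(a * b for a, b in zip(z_src, z_swap))
    norm = math.sqrt(sum(a * a for a in z_src)) * math.sqrt(sum(b * b for b in z_swap))
    return 1.0 - dot / norm

z_src = [0.6, 0.8]      # made-up source-face embedding
z_same = [1.2, 1.6]     # same direction as z_src -> loss ~ 0
z_other = [0.8, -0.6]   # orthogonal to z_src -> loss ~ 1
print(identity_loss(z_src, z_same))   # ~0.0
print(identity_loss(z_src, z_other))  # ~1.0
```

Because cosine similarity is scale-invariant, only the direction of the embedding matters, which is why a frozen pre-trained recognizer works as a loss network.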
### Monitoring via Tensorboard
The progress of training with loss values and validation output can be monitored with Tensorboard.
By default, the logs will be stored at `log`, which can be modified by editing the `log.log_dir` parameter in the config yaml file.

```bash
tensorboard --logdir log --bind_all # Scalars, Images, Hparams, and Projector will be shown.
```

## Inference
To run inference with the AEI-Net, run this command:
```bash
python aei_inference.py --checkpoint_path <checkpoint_path> --target_image <target_image> --source_image <source_image> --output_path <output_path> --gpu_num <gpu_num>
# example command that might help you understand the arguments:
# run inference with a checkpoint from the run "my_runname"
python aei_inference.py --checkpoint_path chkpt/my_runname/epoch=0.ckpt --target_image target.png --source_image source.png --output_path output.png --gpu_num 0
```

We provide a [Colab example](https://colab.research.google.com/drive/1M99jX_nhZ74j_jdYIDtTvDKEE-XVoQnn?usp=sharing). You can use it with your own trained weights.
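To swap many image pairs, the inference script can be driven from Python. A hypothetical sketch (checkpoint path, file names, and run name are made up; the flags are those shown above):

```python
import subprocess

def build_cmd(ckpt, target, source, output, gpu=0):
    """Assemble one aei_inference.py invocation as an argv list."""
    return [
        "python", "aei_inference.py",
        "--checkpoint_path", ckpt,
        "--target_image", target,
        "--source_image", source,
        "--output_path", output,
        "--gpu_num", str(gpu),
    ]

pairs = [("target1.png", "source1.png"), ("target2.png", "source2.png")]
for i, (tgt, src) in enumerate(pairs):
    cmd = build_cmd("chkpt/my_runname/epoch=0.ckpt", tgt, src, f"output{i}.png")
    # subprocess.run(cmd, check=True)  # uncomment once the repo and weights are set up
    print(" ".join(cmd))
```

Building the command as a list (rather than a shell string) avoids quoting issues with paths.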
## Results
![](assets/jobs2cook.gif)
### Comparison with results from original paper
#### Figure in the original paper
![](assets/wild_v6.jpg)
#### Our Results
![](assets/our_results.png)

Note that we implement __only the AEI-Net__; the results in the original paper were generated by both AEI-Net and HEAR-Net.
We will soon release FaceShifter in our cloud API service, [maum.ai](https://maum.ai/?lang=en).
## License
[BSD 3-Clause License](https://opensource.org/licenses/BSD-3-Clause).
## Implementation Author
Changho Choi @ MINDs Lab, Inc. ([email protected])
## Paper Information
```bibtex
@article{li2019faceshifter,
  title={Faceshifter: Towards high fidelity and occlusion aware face swapping},
  author={Li, Lingzhi and Bao, Jianmin and Yang, Hao and Chen, Dong and Wen, Fang},
  journal={arXiv preprint arXiv:1912.13457},
  year={2019}
}
```