https://github.com/t04glovern/icface
Repository cleanup of https://github.com/Blade6570/icface while I play around with it.
- Host: GitHub
- URL: https://github.com/t04glovern/icface
- Owner: t04glovern
- Created: 2019-04-13T16:48:47.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-04-14T13:38:49.000Z (over 6 years ago)
- Last Synced: 2025-06-27T09:43:29.845Z (4 months ago)
- Topics: expression, faceswap, facial, gan, pytorch
- Language: Python
- Homepage: https://devopstar.com/2019/04/14/exploring-interpretable-and-controllable-face-reenactment-icface/
- Size: 2.66 MB
- Stars: 6
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
# ICface
Repository cleanup of [Blade6570/icface](https://github.com/Blade6570/icface) while I play around with it.
## Setup
The general requirements are as follows:
- Python 3.7
- PyTorch 0.4.1.post2
- Visdom and dominate
- Natsort

### Pretrained Model

The following command uses the `aws s3` CLI to pull the pretrained model into `src/checkpoints/gpubatch_resnet/`:
```bash
./data_get.sh get
```
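If you'd rather see roughly what that step amounts to, it boils down to an `aws s3` copy into the checkpoints directory. The bucket and prefix below are placeholders, not the script's real values; `data_get.sh` itself is the source of truth.

```bash
# Rough equivalent of "./data_get.sh get" -- the bucket and prefix here are
# placeholders, check the script for the real source location.
mkdir -p src/checkpoints/gpubatch_resnet
aws s3 cp s3://<your-bucket>/icface/gpubatch_resnet/ \
    src/checkpoints/gpubatch_resnet/ --recursive
```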
### Conda

I've ported all this into a nice little conda environment for you to use:
```bash
conda env create -f environment.yml
conda env create -f environment-dlib.yml
cd src
```
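Optionally, a quick sanity check that both environments resolved and that the pinned PyTorch build is importable:

```bash
# Both environments should show up here
conda env list

# PyTorch should report 0.4.1.post2 inside the icface env
conda activate icface
python -c "import torch; print(torch.__version__)"
conda deactivate
```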
## Generate Image

```bash
# Activate dlib conda
conda activate icface-dlib

# Generate ./crop/1.png
python image_crop.py \
--image ./crop/test/trump.jpeg \
--id 1
```
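The crop step isn't limited to the bundled Trump photo. As far as I can tell the `--id` flag just names the output under `./crop/`, so pointing the script at your own image (placeholder filename below) should behave the same way; double-check `image_crop.py` if it doesn't.

```bash
# Assumption: --id only controls the output filename, so this should write
# ./crop/2.png. "my_face.jpg" is a placeholder for your own portrait image.
python image_crop.py \
--image ./crop/test/my_face.jpg \
--id 2
```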
## Generate Video

```bash
# Activate icface conda
conda activate icface

# Generate a sample video
python test.py \
--dataroot ./ \
--model pix2pix \
--which_model_netG resnet_6blocks \
--which_direction AtoB \
--dataset_mode aligned \
--norm batch \
--display_id 0 \
--batchSize 1 \
--loadSize 128 \
--fineSize 128 \
--no_flip \
--name gpubatch_resnet \
--how_many 1 \
--ndf 256 \
--ngf 128 \
--which_ref ./crop/.png \
--gpu_ids 0 \
--csv_path ./crop/videos/.csv \
--results_dir results_video

# Splice the audio
ffmpeg -i ./crop/out.mp4 -i ./crop/videos/.mp4 -c copy -map 0:0 -map 1:1 -shortest ./crop/out_audio.mp4
```
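For reference, the `-map` flags are what do the splicing: ICface's output video is silent, so the video stream is taken from the first input and the audio from the driving clip. Here's a slightly more explicit spelling of the same command, with a placeholder name for the driving video:

```bash
# 0:v:0 = video stream of the silent ICface render, 1:a:0 = audio stream of
# the driving video. "driving.mp4" is a placeholder for your actual clip.
ffmpeg -i ./crop/out.mp4 -i ./crop/videos/driving.mp4 \
-map 0:v:0 -map 1:a:0 -c copy -shortest ./crop/out_audio.mp4
```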
## Generating Action Points

```bash
# Outside Container
docker run -it --rm algebr/openface:latest

# Outside Container (new terminal window)
docker cp src/crop/videos/.mp4 :/home/openface-build

# Within container (/home/openface-build)
./build/bin/FeatureExtraction -f .mp4

# Outside Container
docker cp :/home/openface-build/processed/.csv src/crop/videos
```
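The CSV that OpenFace writes is a plain per-frame table (timestamp, head pose, gaze and action-unit columns), so it's worth a quick sanity check before handing it to `test.py`. The filename below is a placeholder for whatever video you processed:

```bash
# Confirm the action-unit columns (AU01_r, AU02_r, ...) are present and count
# how many frames were extracted. Replace the filename with your own CSV.
head -n 1 src/crop/videos/my_video.csv | tr ',' '\n' | grep -i AU
wc -l src/crop/videos/my_video.csv
```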
## Attribution

- [https://github.com/Blade6570/icface](https://github.com/Blade6570/icface)
```bibtex
@article{tripathy+kannala+rahtu,
title={ICface: Interpretable and Controllable Face Reenactment Using GANs},
author={Tripathy, Soumya and Kannala, Juho and Rahtu, Esa},
journal={arXiv preprint arXiv:1904.01909},
year={2019}
}
```
- [https://github.com/TadasBaltrusaitis/OpenFace](https://github.com/TadasBaltrusaitis/OpenFace)
- [Command line arguments](https://github.com/TadasBaltrusaitis/OpenFace/wiki/Command-line-arguments)
- [Docker setup](https://github.com/TadasBaltrusaitis/OpenFace/wiki#quickstart-usage-of-openface-with-docker-thanks-edgar-aroutiounian)