Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nii-yamagishilab/ClassNSeg
Implementation and demonstration of the paper: Multi-task Learning for Detecting and Segmenting Manipulated Facial Images and Videos
JSON representation
- Host: GitHub
- URL: https://github.com/nii-yamagishilab/ClassNSeg
- Owner: nii-yamagishilab
- License: bsd-3-clause
- Created: 2019-06-17T08:05:52.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-06-18T09:51:29.000Z (over 5 years ago)
- Last Synced: 2024-08-01T01:27:51.404Z (4 months ago)
- Language: Python
- Size: 15.2 MB
- Stars: 78
- Watchers: 4
- Forks: 12
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-deepfakes
README
# ClassNSeg
Implementation of the paper: Multi-task Learning for Detecting and Segmenting Manipulated Facial Images and Videos (BTAS 2019).
You can clone this repository into your favorite directory:
```
$ git clone https://github.com/nii-yamagishilab/ClassNSeg
```
## Requirements
- PyTorch 1.0
- TorchVision
- scikit-learn
- NumPy
- tqdm
- PIL

## Project organization
- Datasets folder, where you can place your training, evaluation, and test sets: `./datasets`
- Checkpoint folder, where the training outputs are stored: `./checkpoints`
- Test image folder, containing images used for segmentation demonstrations during training: `./test_img`
Pre-trained models with the settings described in our paper are provided in the checkpoints folder.
## Dataset
Each dataset has two parts:
- Original images: `./datasets/<...>/<...>/original`
- Altered images: `./datasets/<...>/<...>/altered`

All datasets need to be pre-processed to crop the facial areas and add segmentation maps. This can be done using these scripts:
```
./create_dataset_Face2Face.py
./create_dataset_Deepfakes.py
./create_dataset_FaceSwap.py
```
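The pre-processing step crops the face region and produces a per-pixel manipulation mask. A minimal NumPy sketch of one plausible recipe is shown below; the margin, the difference threshold, and deriving the mask from the original/altered pixel difference are all assumptions for illustration, not necessarily the exact logic of the scripts above.

```python
import numpy as np

def crop_face_and_mask(original, altered, bbox, margin=0.3, threshold=25):
    """Crop an enlarged face region and derive a binary manipulation mask.

    `bbox` is (x, y, w, h) from any face detector. The mask marks pixels
    where the altered image differs noticeably from the original -- one
    plausible way to obtain ground-truth segmentation maps (an assumption,
    not necessarily the repository's exact recipe).
    """
    x, y, w, h = bbox
    # Enlarge the box by `margin` on each side, clipped to the image bounds.
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(x - mx, 0), max(y - my, 0)
    x1 = min(x + w + mx, original.shape[1])
    y1 = min(y + h + my, original.shape[0])
    face = altered[y0:y1, x0:x1]
    # Per-pixel difference between original and altered crops, max over channels.
    diff = np.abs(original[y0:y1, x0:x1].astype(int) - face.astype(int))
    mask = (diff.max(axis=-1) > threshold).astype(np.uint8)
    return face, mask
```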
**Note**: Parameters with detailed explanations can be found in the corresponding source code.

## Training
```
$ python train.py --dataset datasets/face2face/source-to-target --train_set train --val_set validation --outf checkpoints/full --batchSize 64 --niter 100
```
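Training is multi-task: an image-level classification loss and a pixel-level segmentation loss are optimized together. A minimal NumPy sketch of how such a weighted sum could look is below; the `bce` helper, the equal weights, and the choice of terms are illustrative assumptions — the actual objective in `train.py` may include additional terms.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy, averaged over all elements."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def multitask_loss(cls_prob, cls_label, seg_prob, seg_mask, w_cls=1.0, w_seg=1.0):
    """Weighted sum of the image-level classification loss and the
    pixel-level segmentation loss. The weights here are illustrative,
    not the values used in the paper."""
    return w_cls * bce(cls_prob, cls_label) + w_seg * bce(seg_prob, seg_mask)
```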
## Finetuning
Before finetuning, copy the best encoder_x.pt and decoder_x.pt checkpoints to checkpoints/finetune, where x is the checkpoint number, and rename them to encoder_0.pt and decoder_0.pt.
```
$ python finetune.py --dataset datasets/finetune --train_set train --val_set validation --outf checkpoints/finetune --batchSize 64 --niter 50
```
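The checkpoint copying step described above can be scripted. A small sketch, assuming the `encoder_x.pt` / `decoder_x.pt` naming convention:

```python
import shutil
from pathlib import Path

def prepare_finetune(src_dir, dst_dir, best_id):
    """Copy encoder_<best_id>.pt and decoder_<best_id>.pt into the
    finetune folder, renamed to checkpoint number 0 as expected."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for part in ("encoder", "decoder"):
        shutil.copy(src / f"{part}_{best_id}.pt", dst / f"{part}_0.pt")
```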
## Evaluating
**Classification:**
```
$ python test_cls.py --dataset --test_set test --outf checkpoints --id
```
**Segmentation:**
```
$ python test_seg.py --dataset --test_set test --outf checkpoints --id
```
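For the classification results, the equal error rate (EER) is a commonly reported metric in this area. A sketch computing it with scikit-learn (already a requirement above), assuming you have per-image labels and fake-probability scores — how `test_cls.py` itself reports results may differ:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point on the ROC curve where the false-positive
    rate equals the false-negative rate (1 - TPR)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = int(np.nanargmin(np.abs(fnr - fpr)))
    return float((fpr[idx] + fnr[idx]) / 2)
```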
Besides testing on still images, the proposed method can be applied to videos. One recommendation is to use OpenCV 3.4 with the Caffe framework for face detection (see the OpenCV documentation for more information). Another option is Dlib.
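When applying the detector to video, per-frame probabilities have to be fused into a video-level decision. A minimal sketch using simple averaging — an assumption for illustration; the repository may aggregate frames differently:

```python
def video_decision(frame_probs, threshold=0.5):
    """Average per-frame fake probabilities and compare to a threshold.
    Returns (mean probability, boolean 'manipulated' decision)."""
    if not frame_probs:
        raise ValueError("no frames")
    mean_p = sum(frame_probs) / len(frame_probs)
    return mean_p, mean_p >= threshold
```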
## Authors
- Huy H. Nguyen (https://researchmap.jp/nhhuy/?lang=english)
- Fuming Fang (https://researchmap.jp/fang/?lang=english)
- Junichi Yamagishi (https://researchmap.jp/read0205283/?lang=english)
- Isao Echizen (https://researchmap.jp/echizenisao/?lang=english)

## Acknowledgement
This research was supported by JSPS KAKENHI Grant Numbers JP16H06302 and JP18H04120, and by JST CREST Grant Number JPMJCR18A6, Japan.

## Reference
H. H. Nguyen, F. Fang, J. Yamagishi, and I. Echizen, “Multi-task Learning for Detecting and Segmenting Manipulated Facial Images and Videos,” Proc. of the 10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 8 pages, (September 2019)