Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vita-group/deepps
[ECCV 2020] "Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches"
eccv2020 face-editing face-synthesis sketch
- Host: GitHub
- URL: https://github.com/vita-group/deepps
- Owner: VITA-Group
- License: mit
- Created: 2020-07-12T02:25:06.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2023-06-06T02:26:25.000Z (over 1 year ago)
- Last Synced: 2023-10-20T23:41:57.641Z (about 1 year ago)
- Topics: eccv2020, face-editing, face-synthesis, sketch
- Language: Python
- Homepage:
- Size: 5.01 MB
- Stars: 75
- Watchers: 10
- Forks: 10
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Deep Plastic Surgery
Figure: (a) controllable face synthesis; (b) controllable face editing; (c) adjusting the refinement level l.

Our framework allows users to (a) synthesize and (b) edit photos based on hand-drawn sketches. (c) Our model works robustly on various sketches by setting the refinement level l adaptively to the quality of the input sketch, i.e., higher l for poorer sketches, thus tolerating drawing errors and achieving controllability over sketch faithfulness. Note that our model requires no real sketches for training.
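One way to read the refinement level: l controls how strongly the input sketch is dilated before refinement, so a higher l discards more fine (possibly erroneous) stroke detail. A toy illustration of that mapping (our own sketch, not code from this repo, using the `--max_dilate` convention from the training options):

```python
def dilation_diameter(l, max_dilate=21):
    """Toy illustration (hypothetical helper): map refinement level
    l in [0, 1] to a sketch-dilation diameter. l = 0 keeps the sketch
    as drawn (diameter 1); l = 1 applies the maximum dilation."""
    assert 0.0 <= l <= 1.0
    return 1 + round(l * (max_dilate - 1))
```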
This is a PyTorch implementation of the paper:
Shuai Yang, Zhangyang Wang, Jiaying Liu and Zongming Guo.
Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches,
accepted by European Conference on Computer Vision (ECCV), 2020.[[Project]](https://williamyang1991.github.io/projects/ECCV2020) | [[Paper]](https://arxiv.org/abs/2001.02890) | [[Human-Drawn Facial Sketches]](https://williamyang1991.github.io/projects/ECCV2020/DPS/files/human-drawn_facial_sketches.zip)
Please consider citing our paper if you find the software useful for your work.
## Usage
#### Prerequisites
- Python 2.7
- PyTorch 1.2.0
- matplotlib
- scipy
- Pillow
- [torchsample](https://github.com/ncullen93/torchsample/tree/master)
- opencv

#### Install
- Clone this repo:
```
git clone https://github.com/TAMU-VITA/DeepPS.git
cd DeepPS/src
```

## Testing Example
- Download pre-trained models from [[Google Drive]](https://drive.google.com/file/d/1Lv2j_CShUahPrRlGZGavpsFou3oF-yS1/view?usp=sharing) | [[Baidu Cloud]](https://pan.baidu.com/s/1QjOWk8Gw4UNN6ajHF8bMjQ)(code:oieu) to `../save/`
- Sketch-to-photo translation
- setting l to 1.0 tests with refinement level 1.0
- setting l to -1 (default) tests with multiple levels in \[0,1\] with a step of l_step (default l_step = 0.25)
- Results can be found in `../output/`
```
python test.py --l 1.0
```
```
python test.py
```
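Concretely, the default sweep enumerates the refinement levels like this (a sketch of the behaviour described above; `sweep_levels` is a hypothetical name, not a function in `test.py`):

```python
def sweep_levels(l=-1.0, l_step=0.25):
    """Sketch of the default testing sweep: l >= 0 tests a single
    level, while the default l = -1 sweeps [0, 1] in steps of l_step."""
    if l >= 0:
        return [l]
    n = int(round(1.0 / l_step))
    return [round(i * l_step, 4) for i in range(n + 1)]

# sweep_levels() -> [0.0, 0.25, 0.5, 0.75, 1.0]
```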
- Face editing with refinement levels 0.0, 0.25, 0.5, 0.75 and 1.0
- model_task to specify task. SYN for synthesis and EDT for editing
- specify the task, input image filename, model filename for F and G, respectively
- Results can be found in `../output/`
```
python test.py --model_task EDT --input_name ../data/EDT/4.png \
--load_F_name ../save/ECCV-EDT-celebaHQ-F256.ckpt --model_name ECCV-EDT-celebaHQ
```
- Use `--help` to view more testing options
```
python test.py --help
```
## Training Examples

- Download pre-trained model F from [[Google Drive]](https://drive.google.com/file/d/1Lv2j_CShUahPrRlGZGavpsFou3oF-yS1/view?usp=sharing) | [[Baidu Cloud]](https://pan.baidu.com/s/1QjOWk8Gw4UNN6ajHF8bMjQ)(code:oieu) to `../save/`
- Prepare your data in `../data/dataset/train/` in form of (I,S):
- Please refer to [pix2pix](https://github.com/phillipi/pix2pix#datasets) for more details

### Training on image synthesis task
- Train G with default parameters on 256\*256 images
- Progressively train G64, G128 and G256 on 64\*64, 128\*128 and 256\*256 images like pix2pixHD.
- step 1: for each resolution, G is first trained with a fixed l = 1 to learn the greatest refinement level for 30 epochs (--epoch_pre)
- step 2: we then use l ∈ {i/K}, i = 0,...,K, where K = 20 (i.e. --max_dilate 21) for 200 epochs (--epoch)
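The two-step schedule above can be sketched as follows (a hypothetical helper, not code from `train.py`, assuming l is resampled each iteration in step 2):

```python
import random

def refinement_level(epoch, epoch_pre=30, max_dilate=21):
    """Sketch of the two-step training schedule: step 1 fixes l = 1
    for the first epoch_pre epochs; step 2 samples l uniformly from
    {i/K}, i = 0..K, with K = max_dilate - 1 = 20."""
    if epoch < epoch_pre:
        return 1.0
    K = max_dilate - 1
    return random.randint(0, K) / K
```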
```
python train.py --save_model_name PSGAN-SYN
```
Saved model can be found at `../save/`
- Train G with default parameters on 64\*64 images
- Prepare your dataset in `../data/dataset64/train/` (for example, provided by [ContextualGAN](https://github.com/elliottwu/sText2Image))
- Prepare your network F pretrained on 64\*64 images and save it as `../save/ECCV-SYN-celeba-F64.ckpt`
- max_level = 1 to indicate only training on level 1 (level 1, 2, 3 --> image resolution 64\*64, 128\*128, 256\*256)
- use_F_level = 1 to indicate network F is used on level 1
- Specify the max dilation diameter, training level, F model image size
- AtoB means images are prepared in form of (S,I)
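The (I,S) / (S,I) layouts mentioned above follow the pix2pix side-by-side convention; splitting a pair could look like this (our own sketch, with `split_pair` a hypothetical name, operating on an image as a NumPy array):

```python
import numpy as np

def split_pair(pair, a_to_b=False):
    """Split a pix2pix-style side-by-side training image (H x 2W) into
    its two halves. By default the photo I is on the left and the
    sketch S on the right; with a_to_b=True the pair is stored as
    (S, I) and the halves are swapped so the return order stays (I, S)."""
    w = pair.shape[1] // 2
    left, right = pair[:, :w], pair[:, w:]
    return (right, left) if a_to_b else (left, right)
```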
```
python train.py --train_path ../data/dataset64/ \
--max_dilate 9 --max_level 1 --use_F_level 1 \
--load_F_name ../save/ECCV-SYN-celeba-F64.ckpt --img_size 64 \
--save_model_name PSGAN-SYN-64 --AtoB
```

### Training on image editing task
- Train G with default parameters on 256\*256 images
- Progressively train G64, G128 and G256 on 64\*64, 128\*128 and 256\*256 images like pix2pixHD.
- step 1: for each resolution, G is first trained with a fixed l = 1 to learn the greatest refinement level for 30 epochs (--epoch_pre)
- step 2: we then use l ∈ {i/K}, i = 0,...,K, where K = 20 (i.e. --max_dilate 21) for 200 epochs (--epoch)
```
python train.py --model_task EDT \
--load_F_name ../save/ECCV-EDT-celebaHQ-F256.ckpt --save_model_name PSGAN-EDT
```
Saved model can be found at `../save/`
- Use `--help` to view more training options
```
python train.py --help
```
### Contact

Shuai Yang