https://github.com/jgkwak95/SURF-GAN
Official PyTorch implementation of "Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis", ECCV 2022
- Host: GitHub
- URL: https://github.com/jgkwak95/SURF-GAN
- Owner: jgkwak95
- License: mit
- Created: 2022-07-21T17:24:38.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-04T07:37:30.000Z (almost 2 years ago)
- Last Synced: 2023-02-28T16:24:45.356Z (over 1 year ago)
- Topics: 3d, 3d-aware-image-synthesis, eccv2022, generative-adversarial-network, image-to-image-translation, neural-rendering, stylegan2, toonify
- Language: Python
- Homepage:
- Size: 498 MB
- Stars: 93
- Watchers: 3
- Forks: 12
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-NeRF - [Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis](https://arxiv.org/pdf/2207.10257.pdf) (Torch) (Papers / NeRF Related Tasks)
README
# Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis
## [Project page](https://jgkwak95.github.io/surfgan/) | [Paper](http://arxiv.org/abs/2207.10257)
>**"Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis"**
>[Jeong-gi Kwak](https://jgkwak95.github.io/), Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han, Hanseok Ko
>**ECCV 2022**
This repository includes the official PyTorch implementation of SURF-GAN.
# SURF-GAN
SURF-GAN, a NeRF-based 3D-aware GAN, discovers disentangled semantic attributes in an unsupervised manner.
(Trained on 64x64 CelebA and rendered at 256x256.)

## Get started
- #### Clone the repo.
```
git clone https://github.com/jgkwak95/SURF-GAN.git
cd SURF-GAN
```
- #### Create virtual environment
```
conda create -n surfgan python=3.7.1
conda activate surfgan
conda install -c pytorch-lts pytorch torchvision
pip install --no-cache-dir -r requirements.txt
```
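After installation, a quick sanity check (a minimal sketch, not part of the repo) that the LTS PyTorch build sees your GPU:
```
# Minimal environment sanity check (not part of the repo).
import torch

print(torch.__version__)           # the pytorch-lts channel should give a 1.8.x build
print(torch.cuda.is_available())   # should be True for GPU training
```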
## Train SURF-GAN
First, look at curriculum.py and specify the dataset and training options (a hypothetical curriculum sketch follows the command below).
```
# CelebA
python train_surf.py --output_dir your-exp-name \
--curriculum CelebA_single
```
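For orientation, curriculums in the pi-GAN codebase that SURF-GAN builds on are plain Python dicts keyed by training step; a hypothetical sketch of what a `CelebA_single`-style entry could contain (field names follow pi-GAN conventions, values are placeholders, not the repo's actual settings):
```
# Hypothetical curriculum sketch in the pi-GAN style that SURF-GAN builds on;
# names and values are illustrative placeholders, not the repo's settings.
CelebA_single = {
    0: {'batch_size': 64, 'num_steps': 12, 'img_size': 64,
        'batch_split': 2, 'gen_lr': 6e-5, 'disc_lr': 2e-4},
    'dataset_path': '/path/to/CelebA/*.jpg',  # point this at your images
    'fov': 12,                             # camera field of view (degrees)
    'ray_start': 0.88, 'ray_end': 1.12,    # near/far bounds for ray sampling
    'h_stddev': 0.3, 'v_stddev': 0.155,    # stddev of sampled yaw/pitch
}
```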
### Pretrained model
Alternatively, you can use the [pretrained model](https://drive.google.com/file/d/19ufd6VGQ4pR2AOEMBDFwvS9seZ-C6-8U/view?usp=sharing).
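A minimal loading sketch, assuming the checkpoint was written with `torch.save(generator, ...)` as in the pi-GAN training loop (if it holds a state dict instead, instantiate the model first and use `load_state_dict`):
```
# Minimal sketch: load the downloaded generator checkpoint.
# Assumes the whole module was saved with torch.save, as in pi-GAN.
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
generator = torch.load('generator.pth', map_location=device)
generator.eval()
```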
## Semantic attribute discovery
Let's traverse each dimension of the discovered semantics:
```
python discover_semantics.py --experiment your-exp-name \
--image_size 256 \
--ray_step_multiplier 2 \
--num_id 9 \
--traverse_range 3.0 \
--intermediate_points 9 \
--curriculum CelebA_single
```
By default, the traversal uses the latest checkpoint (generator.pth).
If you want to use a specific ckpt, add this to your command line, for example:
```
--specific_ckpt 140000_64_generator.pth
```
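Conceptually, the traversal fixes one identity and sweeps a single discovered latent dimension across the given range; a rough sketch (the dimensions and the generator call are placeholders, not the repo's API):
```
# Rough sketch of a semantic traversal; `latent_dim`, `target_dim`, and the
# generator call are placeholders, not the repo's actual API.
import torch

latent_dim, target_dim = 256, 3
z = torch.randn(1, latent_dim)                     # one fixed identity
for value in torch.linspace(-3.0, 3.0, steps=9):   # --traverse_range 3.0, --intermediate_points 9
    z_edit = z.clone()
    z_edit[0, target_dim] = value                  # sweep one semantic axis
    # img = generator(z_edit)                      # render at a fixed camera pose
```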
## Control pose
In addition, you can control only the camera parameters:
```
python control_pose.py --experiment your-exp-name \
--image_size 128 \
--ray_step_multiplier 2 \
--num_id 9 \
--intermediate_points 9 \
--mode yaw \
--curriculum CelebA_single
```
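Under the hood, pi-GAN-style NeRF generators place the camera on a sphere around the head, so yaw and pitch fully determine the pose; a rough sketch of that mapping (illustrative, not the repo's exact code):
```
# Rough sketch: yaw/pitch (radians) -> camera origin on a sphere of radius r,
# looking at the origin, in the spirit of pi-GAN's camera sampling.
# Illustrative only, not the repo's exact implementation.
import math
import torch

def camera_origin(yaw, pitch, r=1.0):
    x = r * torch.sin(pitch) * torch.cos(yaw)
    y = r * torch.cos(pitch)
    z = r * torch.sin(pitch) * torch.sin(yaw)
    return torch.stack([x, y, z], dim=-1)

# nine poses sweeping yaw at a level pitch (pi/2)
origins = camera_origin(torch.linspace(-0.5, 0.5, 9), torch.full((9,), math.pi / 2))
```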
## Render video
- #### Moving camera
Set the mode: yaw, pitch, fov, etc.
You can also define your own trajectory (see the sketch after the commands below).
```
python render_video.py --experiment your-exp-name \
--image_size 128 \
--ray_step_multiplier 2 \
--num_frames 100 \
--curriculum CelebA_single \
--mode yaw
```
- #### Moving camera with a specific semantic
Choose an attribute that you want to control, specified by its layer index L_i and dimension D_j (the `--L` and `--D` flags below).
```
python render_video_semantic.py --experiment your-exp-name \
--image_size 128 \
--ray_step_multiplier 2 \
--num_frames 100 \
--traverse_range 3.0 \
--intermediate_points 9 \
--curriculum CelebA_single \
--mode circle \
--L 2 \
--D 4
```
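As for custom trajectories, conceptually all a trajectory needs is one camera pose per frame; a hypothetical sketch (names are illustrative; feed the angles to the renderer's camera arguments):
```
# Hypothetical custom trajectory: one full yaw circle with a gentle pitch
# oscillation, one (yaw, pitch) pair per frame. Illustrative only.
import math

num_frames = 100
trajectory = []
for t in range(num_frames):
    s = t / num_frames
    yaw = 2 * math.pi * s                                   # one full turn
    pitch = math.pi / 2 + 0.2 * math.sin(2 * math.pi * s)   # slight nod around level
    trajectory.append((yaw, pitch))
```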
# 3D-Controllable StyleGAN
Injecting the prior of SURF-GAN into StyleGAN for controllable generation.
Also, it is compatible with many StyleGAN-based methods.
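One loose way to picture the idea (not the paper's exact pipeline): render one SURF-GAN identity at two poses, invert both with a pSp-style encoder, and use the latent difference as a pose direction in StyleGAN's W+ space. Every name in the sketch below is a placeholder:
```
# Loose conceptual sketch, not the paper's exact pipeline; every name below
# is a placeholder for a repo-/paper-specific component.
#
# img_a = surf_gan.render(z, yaw=-0.3)       # same identity, pose A
# img_b = surf_gan.render(z, yaw=+0.3)       # same identity, pose B
# w_a, w_b = encoder(img_a), encoder(img_b)  # pSp-style inversion to W+
# pose_direction = w_b - w_a                 # latent direction for yaw
# w_edit = encoder(real_image) + 0.5 * pose_direction
# edited = stylegan.synthesis(w_edit)        # pose-edited output
```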
### Video
(Videos: pose control, and pose control combined with style via [Toonify](https://github.com/justinpinkney/toonify).)
It is also capable of editing real images directly (with [HyperStyle](https://github.com/yuval-alaluf/hyperstyle)).
(Videos: pose and illumination editing using SURF-GAN samples; hair color editing using SURF-GAN samples; smile editing using [InterFaceGAN](https://github.com/genforce/interfacegan).)
## Citation
```
@inproceedings{kwak2022injecting,
title={Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable Portrait Image Synthesis},
author={Kwak, Jeong-gi and Li, Yuanming and Yoon, Dongsik and Kim, Donghyeon and Han, David and Ko, Hanseok},
booktitle={European Conference on Computer Vision},
pages={236--253},
year={2022},
organization={Springer}
}
```
## Acknowledgments
- SURF-GAN is built upon the [pi-GAN](https://github.com/marcoamonteiro/pi-GAN) implementation and inspired by [EigenGAN](https://github.com/LynnHo/EigenGAN-Tensorflow) ([EigenGAN-pytorch](https://github.com/bryandlee/eigengan-pytorch)). Thanks to the authors for their excellent work!
- We used [pSp encoder](https://github.com/eladrich/pixel2style2pixel) and [StyleGAN2-pytorch](https://github.com/rosinality/stylegan2-pytorch) to build 3D-controllable StyleGAN. For editing in-the-wild real images, we exploited [e4e](https://github.com/omertov/encoder4editing) and [HyperStyle](https://github.com/yuval-alaluf/hyperstyle) with our 3D-controllable StyleGAN.