Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
- Host: GitHub
- URL: https://github.com/nashory/pggan-pytorch
- Owner: nashory
- License: mit
- Created: 2017-11-13T05:01:40.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2022-11-22T02:05:22.000Z (almost 2 years ago)
- Last Synced: 2024-08-01T15:31:55.650Z (3 months ago)
- Topics: celeba-hq-dataset, gan, generative-adversarial-network, progressive-gan, progressively-growing-gan, pytorch, tensorboard
- Language: Python
- Homepage:
- Size: 98.6 KB
- Stars: 814
- Watchers: 19
- Forks: 134
- Open Issues: 37
Metadata Files:
- Readme: README.md
- License: LICENSE
## PyTorch Implementation of "Progressive Growing of GANs (PGGAN)"
PyTorch implementation of [PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION](http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of//karras2017gan-paper.pdf)
__YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)__

![image](https://puu.sh/ydG0E/e0f32b0d92.png)
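The core idea of progressive growing can be summarized as a training schedule that doubles the resolution step by step, alternating a fade-in (transition) phase with a stabilization phase. A pure-Python sketch (the function name and phase labels are illustrative, not taken from this repo):

```python
# Illustrative sketch of the progressive-growing schedule: resolution doubles
# from 4x4 up to the target, and each new block is faded in by ramping a
# blending coefficient alpha from 0 to 1 before a stabilization phase.
def growth_schedule(max_resolution=1024):
    phases = []
    resolution = 4
    while resolution < max_resolution:
        resolution *= 2
        phases.append(("transition", resolution))  # fade new block in (alpha: 0 -> 1)
        phases.append(("stabilize", resolution))   # train with alpha fixed at 1
    return phases

print(growth_schedule(32))
```

The "What's different" section below describes how this implementation reorders the transition and stabilization phases of the generator and discriminator relative to the paper.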
## What's different from the official paper?
+ original: transition(G) --> transition(D) --> stabilize / this code: transition(G) --> stabilize --> transition(D) --> stabilize
+ no use of a NIN layer; unnecessary layers (such as low-resolution blocks) are automatically flushed out as the network grows.
+ used `torch.nn.utils.weight_norm` for the `to_rgb_layer` of the generator.
+ no need to implement CelebA data loading; just bring your own dataset :)

## How to use?
__[step 1.] Prepare dataset__
The author of Progressive GAN released the CelebA-HQ dataset, which Nash is working on in the branch this repo was forked from. For this version, just make sure that all images are direct children of the folder you declare in `config.py`. Also note that if you use multiple classes, they should be visually similar, or you may end up with unpleasant results.

~~~
---------------------------------------------
The training data folder should look like:
|--Your Folder
    |--image 1
    |--image 2
    |--image 3 ...
---------------------------------------------
~~~

__[step 2.] Prepare environment using virtualenv__
+ you can easily set up the PyTorch (v0.3) and TensorFlow environment using virtualenv.
+ CAUTION: if you have trouble installing PyTorch, install it manually using pip. [[PyTorch Install]](http://pytorch.org/)
+ take your time with the installation, and make sure all dependencies of PyTorch, as well as TensorFlow, are installed.
~~~
$ virtualenv --python=python2.7 venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ conda install pytorch torchvision -c pytorch
~~~

__[step 3.] Run training__
+ edit `config.py` to change parameters. (don't forget to change the path to your training images)
+ specify which GPU devices to use, and change the "n_gpu" option in `config.py` to enable multi-GPU training.
+ run and enjoy!

~~~~
(example)
If using Single-GPU (device_id = 0):
$ vim config.py --> change "n_gpu=1"
$ CUDA_VISIBLE_DEVICES=0 python trainer.py
If using Multi-GPUs (device id = 1,3,7):
$ vim config.py --> change "n_gpu=3"
$ CUDA_VISIBLE_DEVICES=1,3,7 python trainer.py
~~~~
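The interaction between `CUDA_VISIBLE_DEVICES` and "n_gpu" above comes down to wrapping the model in `nn.DataParallel`, which splits each batch across the GPUs that the environment variable makes visible. A minimal sketch (the tiny `Sequential` model is a stand-in, not the actual generator from this repo):

```python
# Sketch of how the "n_gpu" option maps to multi-GPU training in PyTorch:
# the model is wrapped in nn.DataParallel, and CUDA_VISIBLE_DEVICES controls
# which physical GPUs are visible to the process.
import torch
import torch.nn as nn

n_gpu = torch.cuda.device_count()  # should match "n_gpu" in config.py

model = nn.Sequential(nn.Linear(512, 512), nn.LeakyReLU(0.2))  # stand-in network
if n_gpu > 1:
    # splits each input batch across the visible GPUs and gathers the outputs
    model = nn.DataParallel(model, device_ids=list(range(n_gpu)))

x = torch.randn(4, 512)
out = model(x)  # shape stays (4, 512) regardless of how many GPUs are used
```

Note that `nn.DataParallel` falls back to a plain forward pass when no GPUs are visible, so the same code runs unchanged on CPU.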
__[step 4.] Display on tensorboard__ (you can skip this part for now)
+ you can check the results on tensorboard.
~~~
$ tensorboard --logdir repo/tensorboard --port 8888
# then open localhost:8888 in your browser.
~~~
__[step 5.] Generate fake images using linear interpolation__
~~~
CUDA_VISIBLE_DEVICES=0 python generate_interpolated.py
~~~
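At its core, linear interpolation between two latent vectors just walks a straight line in latent space; decoding each point with the generator produces a smooth morph between the two corresponding images. A minimal NumPy sketch (the function name and the 512-dimensional latent size are illustrative, not taken from `generate_interpolated.py`):

```python
import numpy as np

def interpolate_latents(z0, z1, steps=8):
    """Linearly interpolate between two latent vectors z0 and z1.

    Returns an array of shape (steps, latent_dim); row 0 equals z0 and the
    last row equals z1, with evenly spaced blends in between.
    """
    ts = np.linspace(0.0, 1.0, steps).reshape(-1, 1)
    return (1.0 - ts) * z0 + ts * z1

# Example: two random 512-dim latents, as used by PGGAN-style generators.
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(512), rng.standard_normal(512)
zs = interpolate_latents(z0, z1, steps=8)
```

Each row of `zs` would then be fed to the trained generator to render one frame of the interpolation.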
## Experimental results
Results at higher resolutions (larger than 256x256) will be updated soon.

__Generated Images__
__Loss Curve__
![image](https://puu.sh/yuhi4/a49686b220.png)
## To-Do List (will be implemented soon)
- [ ] Support WGAN-GP loss
- [ ] training resuming functionality.
- [ ] loading CelebA-HQ dataset (for 512x512 and 1024x1024 training)

## Compatibility
+ CUDA v8.0 (if you don't have it, don't worry)
+ Tesla P40 (you may need more than 12GB of memory. If not, please adjust the batch_table in `dataloader.py`)

## Acknowledgement
+ [tkarras/progressive_growing_of_gans](https://github.com/tkarras/progressive_growing_of_gans)
+ [nashory/progressive-growing-torch](https://github.com/nashory/progressive-growing-torch)
+ [TuXiaokang/DCGAN.PyTorch](https://github.com/TuXiaokang/DCGAN.PyTorch)
## Author
MinchulShin, [@nashory](https://github.com/nashory)
## Contributors
DeMarcus Edwards, [@Djmcflush](https://github.com/Djmcflush)
MakeDirtyCode, [@MakeDirtyCode](https://github.com/MakeDirtyCode)
Yuan Zhao, [@yuanzhaoYZ](https://github.com/yuanzhaoYZ)
zhanpengpan, [@szupzp](https://github.com/szupzp)