Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
PyTorch implementation of Glow
- Host: GitHub
- URL: https://github.com/rosinality/glow-pytorch
- Owner: rosinality
- License: MIT
- Created: 2018-07-12T06:34:34.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2021-11-20T23:21:48.000Z (almost 3 years ago)
- Last Synced: 2024-04-20T12:39:17.530Z (7 months ago)
- Language: Python
- Size: 11.7 MB
- Stars: 494
- Watchers: 9
- Forks: 97
- Open Issues: 30
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# glow-pytorch
PyTorch implementation of Glow: Generative Flow with Invertible 1x1 Convolutions (https://arxiv.org/abs/1807.03039)

## Usage
> python train.py PATH
As the trainer uses torchvision's ImageFolder, the input directory should be structured as follows, even when there is only one class. (Currently this implementation does not incorporate a class-classification loss.)
> PATH/class1
> PATH/class2
> ...

## Notes
![Sample](sample.png)
I have trained the model on the vanilla CelebA dataset, and it seems to work well. I found that the learning rate (I used 1e-4 without scheduling), a learned prior, the number of bits (5 in this case), and using a sigmoid function instead of an exponential function at the affine coupling layer are all beneficial to training the model.
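The sigmoid-based coupling can be sketched as below. This is an illustrative simplification, not the repository's exact layer: the network architecture and the +2 bias inside the sigmoid (which pushes the scale toward 1 at initialization) are assumptions for this sketch.

```python
# Sketch: affine coupling where the scale comes from a sigmoid rather
# than exp, keeping it bounded in (0, 1) for more stable training.
import torch
from torch import nn


class AffineCoupling(nn.Module):
    def __init__(self, channels, hidden=16):
        super().__init__()
        # Small stand-in network; the real implementation is deeper.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        log_s, t = self.net(a).chunk(2, dim=1)
        # sigmoid(log_s + 2) stays in (0, 1); the +2 biases it toward 1.
        s = torch.sigmoid(log_s + 2)
        out_b = (b + t) * s
        logdet = torch.log(s).view(x.shape[0], -1).sum(dim=1)
        return torch.cat([a, out_b], dim=1), logdet

    def reverse(self, y):
        a, out_b = y.chunk(2, dim=1)
        log_s, t = self.net(a).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2)
        return torch.cat([a, out_b / s - t], dim=1)


layer = AffineCoupling(4)
x = torch.randn(2, 4, 8, 8)
y, logdet = layer(x)
x_rec = layer.reverse(y)  # round-trips back to x up to float error
```

Because the untouched half `a` is passed through unchanged, `reverse` can recompute the same `s` and `t` and invert the transform exactly.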
In my experiments, the LU-decomposed invertible 1x1 convolution was much faster than the plain version, so I made the LU-decomposed version the default.
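The speedup comes from the determinant term: with the weight parameterized as W = P L (U + diag(s)), the log-determinant reduces to a sum over |s| instead of a dense determinant at every step. A small numerical sketch (not the repository's code; in training, L, U, and s would be the learned parameters):

```python
# Sketch: LU-decomposed 1x1 convolution weight, W = P @ L @ (U + diag(s)),
# where log|det W| = sum(log|s|) is O(C) versus O(C^3) for a dense det.
import torch

torch.manual_seed(0)
c = 8
w = torch.randn(c, c)  # stand-in for a 1x1 convolution weight matrix

# One-time decomposition of the initial weight.
p, l, u = torch.linalg.lu(w)
s = torch.diagonal(u).clone()
u = torch.triu(u, diagonal=1)  # strictly upper part; the diagonal lives in s

w_rebuilt = p @ l @ (u + torch.diag(s))

# P is a permutation and L has unit diagonal, so only s contributes.
logdet_fast = torch.sum(torch.log(torch.abs(s)))
logdet_dense = torch.linalg.slogdet(w_rebuilt).logabsdet
```

The two log-determinants agree up to floating-point error, while the LU form avoids recomputing a full determinant on every forward pass.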
![Progression of samples](progression.gif)
Progression of samples during training, sampled once every 100 iterations.