Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rosinality/vq-vae-2-pytorch
Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch
- Host: GitHub
- URL: https://github.com/rosinality/vq-vae-2-pytorch
- Owner: rosinality
- License: other
- Created: 2019-06-10T00:59:16.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-02-15T00:52:00.000Z (over 1 year ago)
- Last Synced: 2024-10-01T20:24:45.589Z (about 1 month ago)
- Topics: vq-vae, vq-vae-2
- Language: Python
- Size: 6.72 MB
- Stars: 1,605
- Watchers: 20
- Forks: 270
- Open Issues: 46
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# vq-vae-2-pytorch
Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch

## Update
* 2020-06-01
train_vqvae.py and vqvae.py now support distributed training. You can pass the --n_gpu [NUM_GPUS] argument to train_vqvae.py to train on [NUM_GPUS] GPUs.
## Requirements
* Python >= 3.6
* PyTorch >= 1.1
* lmdb (for storing extracted codes)

[Checkpoint of VQ-VAE pretrained on FFHQ](vqvae_560.pt)
## Usage
Currently supports 256px images (top/bottom hierarchical prior).
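For orientation, the two latent maps are downsampled versions of the input. The 4x (bottom) and 8x (top) factors below are an assumption based on the VQ-VAE-2 paper's 256px setup, not read from this repository:

```python
# Sketch: spatial size of the bottom and top latent code maps for a square input,
# assuming 4x downsampling for the bottom level and 8x for the top level.
def latent_sizes(image_size, bottom_factor=4, top_factor=8):
    return image_size // bottom_factor, image_size // top_factor
```

Under these assumed factors, a 256px input yields a 64x64 bottom code map and a 32x32 top code map.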
1. Stage 1 (VQ-VAE)
> python train_vqvae.py [DATASET PATH]
If you use FFHQ, I highly recommend preprocessing the images (resize and convert to JPEG).
2. Extract codes for stage 2 training
> python extract_code.py --ckpt checkpoint/[VQ-VAE CHECKPOINT] --name [LMDB NAME] [DATASET PATH]
3. Stage 2 (PixelSNAIL)
> python train_pixelsnail.py [LMDB NAME]
It may be better to use a larger PixelSNAIL model; the current model size is reduced due to GPU memory constraints.
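To clarify what the "codes" extracted in step 2 are, here is a minimal pure-Python sketch of the vector-quantization lookup at the core of VQ-VAE (illustrative only; the actual model does this with PyTorch tensors and a learned codebook):

```python
# Each encoder output vector is replaced by the index of its nearest codebook
# entry (by squared Euclidean distance). These indices are the discrete codes
# that the stage 2 PixelSNAIL prior is trained to model.
def quantize(vectors, codebook):
    """Return the nearest-codebook-entry index for each input vector."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [
        min(range(len(codebook)), key=lambda k: sq_dist(v, codebook[k]))
        for v in vectors
    ]
```

For example, with a two-entry codebook, vectors close to entry 0 map to index 0 and vectors close to entry 1 map to index 1.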
## Sample
### Stage 1
Note: This is a training sample
![Sample from Stage 1 (VQ-VAE)](stage1_sample.png)