Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ermongroup/markov-chain-gan
Code for "Generative Adversarial Training for Markov Chains" (ICLR 2017 Workshop)
- Host: GitHub
- URL: https://github.com/ermongroup/markov-chain-gan
- Owner: ermongroup
- License: mit
- Created: 2017-06-03T22:44:50.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2017-10-25T13:33:48.000Z (about 7 years ago)
- Last Synced: 2024-07-04T02:13:35.005Z (4 months ago)
- Topics: generative-adversarial-network, generative-model, iclr2017, markov-chain, markov-chain-generator, tensorflow
- Language: Python
- Size: 1.96 MB
- Stars: 79
- Watchers: 10
- Forks: 20
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Markov Chain GAN (MGAN)
TensorFlow code for [Generative Adversarial Training for Markov Chains](https://openreview.net/pdf?id=S1L-hCNtl) (ICLR 2017 Workshop Track).

Work by [Jiaming Song](http://tsong.me), [Shengjia Zhao](http://szhao.me), and [Stefano Ermon](http://cs.stanford.edu/~ermon).
## Preprocessing
Running the code requires some preprocessing: we convert the data to TFRecords files to maximize input speed (as [suggested by TensorFlow](https://www.tensorflow.org/performance/performance_guide)).

### MNIST
The data used for training is [here](https://drive.google.com/open?id=0B0LzoDno7qkJdDluZW5DSnpyWTg).
Download and place the directory at `~/data/mnist_tfrecords`. (This is easily done with a symlink, or you can change the path in `models/mnist/__init__.py`.)
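The symlink approach mentioned above can be sketched as follows (assuming the downloaded `mnist_tfrecords` directory sits in your current working directory):

```
# Create the expected data root and point it at the downloaded TFRecords directory.
mkdir -p ~/data
ln -s "$(pwd)/mnist_tfrecords" ~/data/mnist_tfrecords
```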
### CelebA
The data used for training is [here](https://drive.google.com/open?id=0B0LzoDno7qkJX3p2YS1DODNrM3c).
Download and place the directory in `~/data/celeba_tfrecords`.
## Running Experiments
```
python mgan.py [data] [model] -b [B] -m [M] -d [critic iterations] --gpus [gpus]
```
where `B` is the number of steps from noise to data, `M` is the number of steps from data to data, `[critic iterations]` is the number of discriminator (critic) iterations, and `[gpus]` sets the `CUDA_VISIBLE_DEVICES` environment variable.

### MNIST
```
python mgan.py mnist mlp -b 4 -m 3 -d 7 --gpus [gpus]
```

### CelebA
Without shortcut connections:
```
python mgan.py celeba conv -b 4 -m 3 -d 7 --gpus [gpus]
```

With shortcut connections (a much slower transition will be observed):
```
python mgan.py celeba conv_res -b 4 -m 3 -d 7 --gpus [gpus]
```

### Custom Experiments
It is easy to define your own problem and run experiments.
- Create a folder `data` under the `models` directory, and define `data_sampler` and `noise_sampler` in `__init__.py`.
- Create a file `model.py` under the `models/data` directory, and define the following:
- `class TransitionFunction(TransitionBase)` (Generator)
- `class Discriminator(DiscriminatorBase)` (Discriminator)
- `def visualizer(model, name)` (If you need to generate figures)
- `epoch_size` and `logging_freq`
- That's it!
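As a concrete illustration of the first step, a hypothetical `models/toy/__init__.py` for a 2-D toy problem might look like the sketch below. The sampler names come from the README; the exact signatures `mgan.py` expects may differ, so compare against `models/mnist/__init__.py` in the repo.

```python
# Hypothetical models/toy/__init__.py: samplers for a 2-D Gaussian-mixture toy problem.
import numpy as np

def noise_sampler(batch_size, z_dim=4):
    # Noise fed into the first transition step (noise -> data).
    return np.random.normal(size=(batch_size, z_dim)).astype(np.float32)

def data_sampler(batch_size):
    # Real data: samples from a mixture of two 2-D Gaussians.
    centers = np.array([[-1.0, -1.0], [1.0, 1.0]], dtype=np.float32)
    idx = np.random.randint(0, 2, size=batch_size)
    return centers[idx] + 0.1 * np.random.randn(batch_size, 2).astype(np.float32)
```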
## Figures
Each row is from a single chain, where we sample for 50 time steps.

### MNIST

![MNIST MLP](figs/mnist_mlp.png)

### CelebA
Without shortcut connections:
![CelebA 1-layer conv](figs/celeba_conv.png)

With shortcut connections:

![CelebA 1-layer conv with shortcuts](figs/celeba_conv_res.png)

## Related Projects
[a-nice-mc](https://github.com/jiamings/a-nice-mc): adversarial training for efficient MCMC kernels, based on this project.

## Citation
If you use this code for your research, please cite our paper:

```
@article{song2017generative,
title={Generative Adversarial Training for Markov Chains},
author={Song, Jiaming and Zhao, Shengjia and Ermon, Stefano},
journal={ICLR 2017 (Workshop Track)},
year={2017}
}
```

## Contact
[[email protected]](mailto:[email protected])

Code for the pairwise discriminator is not available at the moment; I will add it when I have time.