https://github.com/arogozhnikov/deepmmd-gan
Yet another (very simple) approach for adversarial training.
- Host: GitHub
- URL: https://github.com/arogozhnikov/deepmmd-gan
- Owner: arogozhnikov
- License: apache-2.0
- Created: 2017-09-30T11:49:30.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2017-10-02T14:03:23.000Z (over 7 years ago)
- Last Synced: 2025-03-31T04:41:23.832Z (3 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 9.46 MB
- Stars: 17
- Watchers: 5
- Forks: 9
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
# DeepMMD-GAN
Yet another (very simple) approach for adversarial training.
- GANs are [adversarial networks](https://en.wikipedia.org/wiki/Generative_adversarial_network), no need to introduce those.
- MMD is maximum mean discrepancy (see e.g. [this presentation](http://alex.smola.org/teaching/iconip2006/iconip_3.pdf) by A. Smola); it is based on a very simple idea for detecting the difference between two distributions. A decade ago MMD was very popular; a notable property of MMD (compared to other two-sample tests) is that it works well with kernels.
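As a quick illustration of the idea (not code from this repository), here is a minimal sketch of the squared-MMD estimate with an RBF kernel; the bandwidth `sigma` and the sample shapes are arbitrary choices for the example:

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian kernel on pairwise squared distances between rows of x and y
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # biased estimate of the squared maximum mean discrepancy:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())
```

Samples drawn from the same distribution give an MMD close to zero, while shifted samples give a clearly larger value, which is exactly what makes it usable as a two-sample test.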
MMD was already combined with GANs, see e.g.
- [Generative Moment Matching Networks](https://arxiv.org/abs/1502.02761)
- [Generative models and model criticism via optimized MMD](https://arxiv.org/pdf/1611.04488.pdf)

In my experiments something closer to traditional GANs was considered, because the "discriminator" tries to find an appropriate mapping that maximizes MMD.

## Technical details
The implementation uses [pytorch](https://pytorch.org) and builds upon DCGAN (the official implementation from the pytorch repo).
The discriminator also contains BatchNormalization at the last layer to fix the scale of its output; this is quite critical, as otherwise the discriminator could simply scale up its output to get an enormous MMD.

Probably, this is among the shortest implementations of GANs.
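The training scheme described above can be sketched as follows. This is a toy illustration under my own assumptions, not the repository's code: the real DCGAN convolutional networks are replaced by tiny hypothetical MLPs on 2-D data, and the MMD estimate uses an RBF kernel with an arbitrary bandwidth.

```python
import torch
import torch.nn as nn

def mmd2(x, y, sigma=1.0):
    # biased estimate of squared MMD with an RBF kernel
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# hypothetical toy generator and discriminator (stand-ins for DCGAN nets);
# BatchNorm on the discriminator's output fixes its scale, so MMD cannot
# be inflated by simply scaling the mapping
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 16),
                  nn.BatchNorm1d(16))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

real = torch.randn(64, 2) + torch.tensor([2.0, 0.0])  # toy "real" data
for step in range(100):
    fake = G(torch.randn(64, 8))
    # discriminator ascends on MMD between mapped real and fake samples
    loss_d = -mmd2(D(real), D(fake.detach()))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator descends on the same quantity
    loss_g = mmd2(D(real), D(fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The key design choice is that the two players optimize the same scalar in opposite directions: the discriminator searches for a mapping under which the two distributions look maximally different, and the generator tries to close that gap.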
## Results and observations
I haven't done much tuning, but here is what I found:
- I observed no divergences when experimenting; things are rather stable
(at the same time, at 64*64 clearly buggy fake pictures do appear)
- the quality of produced images isn't awesome, though it is comparable with other approaches
- results for different sizes of projections (2 ** 8 to 2 ** 15) aren't very different, while the latter
are quite slow and slightly worse
(this was removed from the final version)