https://github.com/jsflo/generative-adversarial-nets
Different GAN (Generative Adversarial Network) architectures in TensorFlow
- Host: GitHub
- URL: https://github.com/jsflo/generative-adversarial-nets
- Owner: JsFlo
- Created: 2017-09-30T22:30:58.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2018-06-28T01:15:49.000Z (over 7 years ago)
- Last Synced: 2025-01-08T02:16:03.576Z (12 months ago)
- Topics: digimon, generative-adversarial-network, pokemon, tensorflow
- Language: Python
- Size: 13.2 MB
- Stars: 1
- Watchers: 3
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
# Generative-Adversarial-Nets
Different GAN ([Generative Adversarial Network](http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf)) architectures in TensorFlow
## W-GAN (*./w_gan/*)
### Trained on Digimon images:


### Trained on the original 150 Pokemon:

### Wasserstein GAN
https://arxiv.org/abs/1701.07875
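The core idea of the Wasserstein GAN paper linked above is that the critic is trained to maximize the score gap between real and fake samples, with weight clipping to enforce a Lipschitz constraint. A minimal NumPy sketch of those two pieces (function names such as `wgan_critic_loss` are illustrative, not from this repo):

```python
import numpy as np

def wgan_critic_loss(real_scores, fake_scores):
    # The critic maximizes E[D(real)] - E[D(fake)];
    # written as a loss to *minimize*, the sign flips.
    return np.mean(fake_scores) - np.mean(real_scores)

def clip_weights(weights, c=0.01):
    # Original-WGAN weight clipping: keep every parameter in [-c, c]
    # to (crudely) enforce the Lipschitz constraint on the critic.
    return [np.clip(w, -c, c) for w in weights]
```

Note the critic outputs unbounded scores (no sigmoid), which is what distinguishes it from a vanilla GAN discriminator.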
### Generator
* `tf.layers.conv2d_transpose`
* `tf.contrib.layers.batch_norm`
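With `padding='same'`, `tf.layers.conv2d_transpose` multiplies each spatial dimension by the stride, which is how the generator upsamples noise into an image. A quick sketch of that shape arithmetic (the starting size and layer count are illustrative, not taken from this repo):

```python
def transpose_conv_out(size, stride):
    # 'same' padding: output spatial size = input size * stride
    return size * stride

# e.g. three stride-2 transposed-conv layers: 4x4 -> 8x8 -> 16x16 -> 32x32
sizes = [4]
for _ in range(3):
    sizes.append(transpose_conv_out(sizes[-1], 2))
```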
### Discriminator
* `tf.layers.conv2d`
* `tf.contrib.layers.batch_norm`
```python
def leaky_relu(input, name, leak=0.2):
    return tf.maximum(input, leak * input, name=name)
```
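The `tf.maximum(x, leak * x)` trick works because for positive `x` the identity branch wins, while for negative `x` the scaled branch wins. The same formula in plain NumPy, to show the numbers:

```python
import numpy as np

def leaky_relu_np(x, leak=0.2):
    # Same formula as the TF version above: max(x, leak * x).
    # Positives pass through unchanged; negatives are scaled by `leak`.
    return np.maximum(x, leak * x)
```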
## GAN (*./vaniall_gan/*)
A Generative Adversarial Net implemented with **TensorFlow** using the
**MNIST** data set.
#### Generator:
* Input: **100**
* Output: **784**
* Purpose: Will learn to **output images** from **random input** that **look** like **real** MNIST images.
#### Discriminator:
* Input: **784**
* Output: **1**
* Purpose: Will learn to tell a **real** image (784 values, one that "looks like it could be in the MNIST dataset") from a **fake** one.
#### Notes and Outputs
A problem with the way I built this is that I used the **same architecture**
for **both** the **generator** and the **discriminator**. Although I thought this would save me, the developer, a lot of time, it actually
caused a lot of problems when trying to pigeonhole that architecture to work with a smaller input **(Discriminator: 28x28 vs. Generator: 10x10)**.
##### Architecture
* conv1 -> relu -> pool ->
* conv2 -> relu -> pool ->
* conv3 -> relu -> pool ->
* fullyConnected1 -> relu ->
* fullyConnected2 -> relu ->
* fullyConnected3 ->
`100 random numbers -> Generator -> ImageOutput -> Discriminator -> (Real|Fake)`
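The pipeline above can be sketched end-to-end with toy two-layer MLPs standing in for the repo's conv nets (layer widths and weight scales here are illustrative assumptions, not the repo's values): 100 noise values go in, the generator emits 784 values (a 28x28 image), and the discriminator maps those 784 values to one real/fake probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w1, w2):
    h = np.maximum(z @ w1, 0.0)              # relu hidden layer
    return np.tanh(h @ w2)                   # 784 outputs in [-1, 1]

def discriminator(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)              # relu hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))   # probability "real"

z = rng.normal(size=(1, 100))                # 100 random numbers
g_w1 = rng.normal(size=(100, 128)) * 0.01    # toy generator weights
g_w2 = rng.normal(size=(128, 784)) * 0.01
d_w1 = rng.normal(size=(784, 128)) * 0.01    # toy discriminator weights
d_w2 = rng.normal(size=(128, 1)) * 0.01

fake = generator(z, g_w1, g_w2)              # (1, 784) fake "image"
p_real = discriminator(fake, d_w1, d_w2)     # (1, 1) score in (0, 1)
```

Training would then push `p_real` toward 1 when updating the generator and toward 0 (for fakes) when updating the discriminator.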
## ColorGan