Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/prathamesh-88/micro-gan
A simple implementation of Generative Adversarial Network (GAN) to generate previously unseen images based on training images.
- Host: GitHub
- URL: https://github.com/prathamesh-88/micro-gan
- Owner: prathamesh-88
- License: mit
- Created: 2023-09-05T17:36:48.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-09-05T18:52:24.000Z (about 1 year ago)
- Last Synced: 2024-04-24T10:06:52.286Z (7 months ago)
- Language: Python
- Size: 625 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Micro GAN (Generative Adversarial Network)
## Generative Adversarial Networks
A Generative Adversarial Network (GAN) is a deep-learning training architecture. It consists of two models (neural networks) that act as each other's adversaries, each optimizing against the other during training. This architecture is behind many popular text-to-image and image-generation models.
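The adversarial training idea can be illustrated end to end on a toy problem. The sketch below (not code from this repo) trains a linear generator against a logistic discriminator on 1-D data so the gradients can be written by hand; all parameter names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data ~ N(4, 1). Generator: G(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c). Toy setup, not this repo's models.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    real = rng.normal(4.0, 1.0, size=batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(size=10000) + b
print(round(float(samples.mean()), 1))  # drifts from 0 toward the real mean of 4
```

Each iteration alternates the two updates: the discriminator learns to separate real from fake samples, and the generator follows the discriminator's gradient to make its samples harder to reject, which is the same loop a full image GAN runs with convolutional networks.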
## Micro GAN
Models like Midjourney, Stable Diffusion, and Imagen are highly sophisticated: they are trained on millions of images and have billions of parameters. This repo showcases a very minimal, simple example of a GAN that generates images based on the images it is trained on.
For the sake of simplicity, it generates 64x64 pixel images. This can be changed by altering the models to match the size of the image you wish to generate.
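The output size is determined by the generator's upsampling layers. Assuming a conventional stride-2 transposed-convolution stack (the standard way GAN generators reach a target resolution; the exact layers in this repo are defined by its diagrams), the size of each layer's output follows the usual formula, so reaching a different resolution means adding or removing layers:

```python
def tconv_out(size, kernel=4, stride=2, pad=1):
    """Output size of one transposed convolution: (size-1)*stride - 2*pad + kernel."""
    return (size - 1) * stride - 2 * pad + kernel

# Example: start from a 4x4 feature map and apply four stride-2
# transposed convolutions, doubling the resolution each time.
size = 4
for _ in range(4):
    size = tconv_out(size)
print(size)  # 4 -> 8 -> 16 -> 32 -> 64
```

One more such layer would yield 128x128; one fewer, 32x32.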
## Components
A generic GAN consists of 2 parts.
1. A **Generator** that generates the images.
2. A **Discriminator** that differentiates between real images and generated images.

### Generator
The architecture of the Generator model looks something like this:
![Architecture for generator](./diagrams/generator.png "Generator Model")
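The diagram above defines the actual architecture; as a rough sketch of what a generator producing 64x64 RGB images typically looks like, here is a DCGAN-style stack in PyTorch. The latent dimension (100) and channel counts are assumptions for illustration, not values from this repo.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed latent vector size

# Hypothetical DCGAN-style generator: upsamples a latent vector to 64x64.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),         # 4x4 -> 8x8
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # 8x8 -> 16x16
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 16x16 -> 32x32
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),            # 32x32 -> 64x64
    nn.Tanh(),                                                  # pixels in [-1, 1]
)

z = torch.randn(2, latent_dim, 1, 1)
print(generator(z).shape)  # torch.Size([2, 3, 64, 64])
```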
### Discriminator
The architecture of the Discriminator model looks something like this:
![Architecture for discriminator](./diagrams/discriminator.png "Discriminator Model")
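Again, the diagram is authoritative; a typical discriminator mirrors the generator, downsampling the image to a single real/fake score. The channel counts below are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical DCGAN-style discriminator: 64x64 image -> real/fake probability.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1, bias=False),     # 64x64 -> 32x32
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),   # 32x32 -> 16x16
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),  # 16x16 -> 8x8
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 512, 4, 2, 1, bias=False),  # 8x8 -> 4x4
    nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(512, 1, 4, 1, 0, bias=False),    # 4x4 -> 1x1 score
    nn.Sigmoid(),
)

images = torch.randn(2, 3, 64, 64)
print(discriminator(images).shape)  # torch.Size([2, 1, 1, 1])
```

During training, the discriminator's output on real batches is pushed toward 1 and on generated batches toward 0, while the generator is updated to push its scores toward 1.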
### Dataset
For training this model, I used the Abstract Art Gallery dataset from Kaggle.
Link to dataset: [https://www.kaggle.com/datasets/bryanb/abstract-art-gallery](https://www.kaggle.com/datasets/bryanb/abstract-art-gallery)