Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/soheil-mp/gameassetlab
AI-powered game asset generation studio - transform prompts into professional game-ready assets using Stable Diffusion
- Host: GitHub
- URL: https://github.com/soheil-mp/gameassetlab
- Owner: soheil-mp
- License: mit
- Created: 2024-01-28T07:44:52.000Z (12 months ago)
- Default Branch: master
- Last Pushed: 2024-02-12T14:15:39.000Z (11 months ago)
- Last Synced: 2024-12-09T09:58:00.172Z (about 1 month ago)
- Topics: game-characters, generative-art, stable-diffusion
- Language: Jupyter Notebook
- Homepage:
- Size: 49.6 MB
- Stars: 6
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# Fine-Tuning Stable Diffusion on Gaming Characters
## Introduction
Welcome to this repository, dedicated to fine-tuning Stable Diffusion, an open-source neural network for generating high-quality images. This project enhances Stable Diffusion's output for game art by fine-tuning it on gaming-asset datasets.
Stable Diffusion is a deep learning, text-to-image model released in 2022 based on diffusion techniques. It is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt (Wikipedia contributors, 2024).
## Getting Started
### Prerequisites
- Python 3.9+
- The dependencies listed in `requirements.txt`:
```bash
pip install -r requirements.txt
```
## Data Preparation
To prepare your data for fine-tuning the Stable Diffusion model, follow these steps:
### Step 1: Organize Your Image Dataset
Place all the images you wish to use for fine-tuning inside the `./data/images/` folder.
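As a sanity check before training, you may want to confirm that the dataset folder actually contains usable images. The helper below is an illustrative sketch (not part of this repository); the supported extensions are an assumption.

```python
from pathlib import Path

# Assumed set of extensions; extend it to match your dataset.
SUPPORTED_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def collect_images(image_dir: str = "./data/images/") -> list[Path]:
    """Return a sorted list of supported image files in `image_dir`."""
    root = Path(image_dir)
    if not root.is_dir():
        raise FileNotFoundError(f"Dataset folder not found: {root}")
    images = sorted(
        p for p in root.iterdir()
        if p.suffix.lower() in SUPPORTED_EXTS
    )
    if not images:
        raise ValueError(f"No images found in {root}")
    return images
```

Running this before fine-tuning surfaces empty or mislabeled folders early instead of mid-training.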
### Step 2: Write Prompts for the Images
- Single Prompt (for DreamBooth):
In DreamBooth, a single well-defined prompt is used for fine-tuning all the images.
- Multiple Prompts (for Keras-CV):
The Keras-CV implementation of Stable Diffusion requires a separate prompt for each image in the dataset. Use the BLIP model to automatically generate a descriptive caption for each image (via the `./preparation/auto prompting using BLIP.ipynb` notebook).
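Once the per-image captions exist, they have to be stored in a form the training code can consume. The sketch below pairs each image file name with its caption in a JSON-lines file; the exact layout is an assumption here, so adapt it to whatever format your Keras-CV training code expects.

```python
import json
from pathlib import Path

def write_caption_file(captions: dict[str, str], out_path: str) -> int:
    """Write one {"file_name": ..., "text": ...} JSON line per image.

    `captions` maps an image file name to its (e.g. BLIP-generated)
    prompt. Returns the number of records written.
    """
    lines = [
        json.dumps({"file_name": name, "text": text})
        for name, text in sorted(captions.items())
    ]
    Path(out_path).write_text("\n".join(lines) + "\n")
    return len(lines)
```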
## Fine-Tuning Process
### Using Dreambooth
In this approach, the Stable Diffusion model is fine-tuned using DreamBooth. The pretrained model is loaded from Hugging Face and fine-tuned on the given gaming characters; the fine-tuned model is then used to generate new artwork related to gaming characters.
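DreamBooth works by binding a rare identifier token to the new subject while a class prompt preserves the model's prior for the broader class. The identifier token and class name below are illustrative assumptions, not values taken from this repository.

```python
# Build the two prompts DreamBooth-style fine-tuning typically uses:
# an instance prompt containing a rare identifier token, and a class
# prompt used for prior-preservation images.

def dreambooth_prompts(identifier: str = "sks",
                       class_name: str = "game character") -> tuple[str, str]:
    instance_prompt = f"a photo of {identifier} {class_name}"
    class_prompt = f"a photo of {class_name}"
    return instance_prompt, class_prompt
```

Using a rare token like `sks` keeps the new concept from colliding with words the model already understands.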
### Using Keras-CV
In this approach, the Stable Diffusion model is fine-tuned using TensorFlow's Keras framework. A custom class handles the training process (including loss calculation and gradient updates) and updates only part of the diffusion model's weights.
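The core idea of that custom training step, updating only a chosen subset of weights while the rest stay frozen, can be sketched with a toy stand-in. Real training would use `tf.GradientTape` over the diffusion model; the scalar "weights" and names below are purely illustrative.

```python
# Toy stand-in for a partial-weight-update training step: only the
# names listed in `trainable` receive a gradient-descent update.

def train_step(weights: dict[str, float],
               grads: dict[str, float],
               trainable: set[str],
               lr: float = 0.1) -> dict[str, float]:
    """Apply `w -= lr * grad` to trainable weights; freeze the rest."""
    return {
        name: (w - lr * grads[name]) if name in trainable else w
        for name, w in weights.items()
    }
```

Freezing most of the network keeps fine-tuning cheap and reduces the risk of catastrophic forgetting.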
## Evaluation
The fine-tuned Stable Diffusion model generated a set of character images for evaluation. The images were checked against the following criteria:
- [x] Presence of Requested Subject: Each image clearly displays the intended fantasy character.
- [x] Variation in Design: There is evident variation among characters, with differences in shape and color.
- [x] Background Simplicity: Characters are set against solid, non-complex backgrounds for clarity.
- [x] Separation from Background: The subjects are well differentiated from the backdrop, ensuring easy discernibility.
### Character Variation
A demon
Two Elves
A wizard
### Colour Variation of Same Characters
Blue Knights
Red Knights
Green Knights