- Host: GitHub
- URL: https://github.com/masaishi/parediffusers
- Owner: masaishi
- Created: 2024-01-02T00:01:30.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-05T22:12:00.000Z (about 1 year ago)
- Last Synced: 2025-03-29T21:11:25.985Z (3 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 64.2 MB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# PareDiffusers
[](https://github.com/masaishi/parediffusers/blob/main/src/parediffusers/pipeline.py)
[](https://pypi.org/project/parediffusers)
This library pares down the features of `diffusers`, implementing only the minimal functions needed to generate images without using [huggingface/diffusers](https://github.com/huggingface/diffusers/tree/main) code, in order to understand the inner workings of the library.
## Why PareDiffusers?
PareDiffusers was born out of curiosity and a desire to demystify how diffusion models generate images and how the diffusers library works. I write blog-style [notebooks](./notebooks) that explain how it works using a top-down approach: first, generate images using diffusers to understand the overall flow, then gradually replace library calls with plain PyTorch code. In the end, we arrive at the [PareDiffusers code](./src/parediffusers), which does not depend on diffusers at all.
I hope that it helps others who share a similar interest in the inner workings of image generation.
## Versions
- v0.0.0: After Ch0.0.0, implement StableDiffusionPipeline.
- v0.1.2: After Ch0.1.0, implement DDIMScheduler.
- v0.2.0: After Ch0.2.0, implement UNet2DConditionModel.
- v0.3.1: After Ch0.3.0, implement AutoencoderKL.

## Table of Contents
### [Ch0.0.0 PareDiffusersPipeline](./notebooks/ch0.0.0_ParedDiffusionPipeline.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.0.0_ParedDiffusionPipeline.ipynb)
version: v0.0.0
- [x] Generate images using diffusers
- [x] Implement StableDiffusionPipeline
- [ ] Implement DDIMScheduler
- [ ] Implement UNet2DConditionModel
- [ ] Implement AutoencoderKL
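One core piece of a Stable Diffusion pipeline's denoising loop is classifier-free guidance, which blends the unconditional and text-conditioned noise predictions. A minimal scalar sketch of that combination (the function name and scalar form are illustrative, not parediffusers' API; real predictions are tensors):

```python
def classifier_free_guidance(eps_uncond: float, eps_text: float, guidance_scale: float) -> float:
    """Steer the noise prediction toward the text-conditioned direction."""
    return eps_uncond + guidance_scale * (eps_text - eps_uncond)

# guidance_scale = 1.0 reproduces the conditional prediction;
# larger values push further along (eps_text - eps_uncond).
print(classifier_free_guidance(1.0, 2.0, 7.5))  # 8.5
```

In the actual pipeline this runs once per denoising step, after the UNet has produced both noise estimates from a batched forward pass.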
### [Ch0.0.1 Test parediffusers](./notebooks/ch0.0.1_Test_parediffusers.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.0.1_Test_parediffusers.ipynb)
- Test PareDiffusersPipeline by installing it via `pip install parediffusers`.
### [Ch0.0.2 Play prompt_embeds](./notebooks/ch0.0.2_Play_prompt_embeds.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.0.2_Play_prompt_embeds.ipynb)
- Play with prompt_embeds: make gradation images by interpolating between two prompts.
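Making gradation images from two prompts amounts to linearly interpolating between their embedding vectors. A hedged sketch on plain Python lists (in practice the embeddings are PyTorch tensors of shape `[tokens, dim]`, and `lerp_embeds` is an illustrative name, not the notebook's API):

```python
def lerp_embeds(a, b, t):
    """Linear interpolation between two embedding vectors: (1 - t) * a + t * b."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

# Sweep t from 0 (first prompt) to 1 (second prompt) to build the gradation.
print(lerp_embeds([0.0, 2.0], [2.0, 0.0], 0.5))  # [1.0, 1.0]
```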
### [Ch0.1.0: PareDDIMScheduler](./notebooks/ch0.1.0_PareDDIMScheduler.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.1.0_PareDDIMScheduler.ipynb)
version: v0.1.3
- [x] Generate images using diffusers
- [x] Implement StableDiffusionPipeline
- [x] Implement DDIMScheduler
- [ ] Implement UNet2DConditionModel
- [ ] Implement AutoencoderKL
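The deterministic DDIM update (eta = 0) that this chapter re-implements can be sketched on a single scalar. This is the standard DDIM formula, shown here in simplified form; the function name and scalar simplification are illustrative, not the library's API:

```python
import math

def ddim_step(x_t, eps, alpha_t, alpha_prev):
    """One deterministic DDIM update (eta = 0) on a scalar value.

    alpha_t and alpha_prev are the cumulative alpha products at the
    current and previous timesteps.
    """
    # Predicted clean sample x_0 from the current noisy sample and noise estimate.
    pred_x0 = (x_t - math.sqrt(1.0 - alpha_t) * eps) / math.sqrt(alpha_t)
    # Direction pointing back toward x_t.
    dir_xt = math.sqrt(1.0 - alpha_prev) * eps
    return math.sqrt(alpha_prev) * pred_x0 + dir_xt
```

With `eps = 0` the step simply rescales the predicted clean sample; with `alpha_prev == alpha_t` it leaves the sample unchanged, which is a handy sanity check when re-implementing the scheduler.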
### [Ch0.1.1: Test parediffusers](./notebooks/ch0.1.1_Test_parediffusers.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.1.1_Test_parediffusers.ipynb)
- Test PareDiffusersPipeline by installing it via `pip install parediffusers`.
### [Ch0.2.0: PareUNet2DConditionModel](./notebooks/ch0.2.0_PareUNet2DConditionModel.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.2.0_PareUNet2DConditionModel.ipynb)
version: v0.2.0
- [x] Generate images using diffusers
- [x] Implement StableDiffusionPipeline
- [x] Implement DDIMScheduler
- [x] Implement UNet2DConditionModel
- [ ] Implement AutoencoderKL
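The UNet conditions on the diffusion timestep through a sinusoidal embedding, which is then projected by an MLP inside the model. A simplified sketch of the sinusoidal part (diffusers' actual implementation differs in details such as frequency scaling, ordering, and odd-dimension handling):

```python
import math

def timestep_embedding(t, dim, max_period=10000.0):
    """Sinusoidal timestep embedding: sin/cos at geometrically spaced frequencies."""
    half = dim // 2
    freqs = [math.exp(-math.log(max_period) * i / half) for i in range(half)]
    return [math.sin(t * f) for f in freqs] + [math.cos(t * f) for f in freqs]

print(timestep_embedding(0, 4))  # [0.0, 0.0, 1.0, 1.0]
```

Because nearby timesteps get nearby embeddings at every frequency, the model can smoothly distinguish noise levels across the whole schedule.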
### [Ch0.2.1: Test parediffusers](./notebooks/ch0.2.1_Test_PareDiffusersPipeline.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.2.1_Test_PareDiffusersPipeline.ipynb)
- Test PareDiffusersPipeline by installing it via `pip install parediffusers`.
### [Ch0.3.0: PareAutoencoderKL](./notebooks/ch0.3.0_PareAutoencoderKL.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.3.0_PareAutoencoderKL.ipynb)
version: v0.3.1
- [x] Generate images using diffusers
- [x] Implement StableDiffusionPipeline
- [x] Implement DDIMScheduler
- [x] Implement UNet2DConditionModel
- [x] Implement AutoencoderKL
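When decoding with the VAE, Stable Diffusion divides the latents by the AutoencoderKL scaling factor (0.18215) before the decoder, and the decoder output in [-1, 1] is mapped to image range. A hedged scalar sketch of that post-processing (function names are illustrative; real values are tensors and the final step multiplies by 255):

```python
SD_VAE_SCALING = 0.18215  # Stable Diffusion's AutoencoderKL scaling factor

def unscale_latent(z):
    """Undo the latent scaling applied before decoding."""
    return z / SD_VAE_SCALING

def decoded_to_unit(x):
    """Map a decoder output value from [-1, 1] to [0, 1], clamping out-of-range values."""
    return min(max((x + 1.0) / 2.0, 0.0), 1.0)

print(decoded_to_unit(1.0))   # 1.0
print(decoded_to_unit(-1.0))  # 0.0
```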
### [Ch0.3.1: Test parediffusers](./notebooks/ch0.3.1_Test_PareDiffusersPipeline.ipynb) [](https://colab.research.google.com/github/masaishi/parediffusers/blob/main/notebooks/ch0.3.1_Test_PareDiffusersPipeline.ipynb)
- Test PareDiffusersPipeline by installing it via `pip install parediffusers`.

## Usage
```python
import torch
from parediffusers import PareDiffusionPipeline

device = torch.device("cuda")
dtype = torch.float16
model_name = "stabilityai/stable-diffusion-2"

pipe = PareDiffusionPipeline.from_pretrained(model_name, device=device, dtype=dtype)
prompt = "painting depicting the sea, sunrise, ship, artstation, 4k, concept art"
image = pipe(prompt)
display(image)
```

## Contribution
I started this project to help myself understand the code so that I can contribute to the diffusers OSS project. There may be mistakes in my explanations, so if you find any, please feel free to correct them via an issue or pull request.