https://github.com/Mikubill/sketch2img
Sketch-to-Image Generation without retraining diffusion models
- Host: GitHub
- URL: https://github.com/Mikubill/sketch2img
- Owner: Mikubill
- Created: 2023-01-20T10:35:31.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-01T04:11:30.000Z (over 2 years ago)
- Last Synced: 2024-08-01T18:38:53.709Z (11 months ago)
- Language: Python
- Homepage:
- Size: 424 KB
- Stars: 46
- Watchers: 5
- Forks: 6
- Open Issues: 4
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# sketch2img
Sketch-to-image generation without retraining diffusion models. (Work in progress)

Currently supported methods:
* Sketch-Guided Text-to-Image Diffusion (Google)
* Injection of additional pretrained self-attention or cross-attention layers (GLIGEN-style)

Note: Paint-With-Words has moved to https://github.com/Mikubill/sd-paint-with-words
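The attention-injection idea can be sketched as a gated layer added inside a frozen block, in the style of GLIGEN. The class and parameter names below are illustrative assumptions, not this repository's actual API; the key property is that the gate is zero-initialized, so the pretrained model's behavior is unchanged until the injected layer is trained.

```python
import torch
import torch.nn as nn

class GatedInjectionAttention(nn.Module):
    """GLIGEN-style gated attention inserted into a frozen transformer block.
    tanh(gate) = 0 at initialization, so the layer starts as the identity."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # learned scalar gate

    def forward(self, hidden: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # hidden: (B, N, dim) activations from the frozen model
        # context: (B, M, dim) conditioning tokens (e.g. sketch features)
        attended, _ = self.attn(hidden, context, context)
        return hidden + torch.tanh(self.gate) * attended
```

Because only the new layer (and its gate) receives gradients, the base diffusion model stays frozen, matching the repository's goal of avoiding retraining.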
## Sketch-Guided Text-to-Image Diffusion
[Paper](https://sketch-guided-diffusion.github.io/files/sketch-guided-preprint.pdf) | Demo

Sketch-Guided Text-to-Image Diffusion is a method proposed by researchers at Google Research for guiding the inference process of a pretrained text-to-image diffusion model. An edge predictor operates on the internal activations of the diffusion model's core network and encourages the edges of the synthesized image to follow a reference sketch, with no retraining of the diffusion model itself.
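The mechanism above can be sketched as a small per-pixel predictor over concatenated U-Net activations, plus a guidance step that nudges the latent so the predicted edge map matches the sketch. This is a minimal illustration under assumed names and shapes, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

class LatentEdgePredictor(nn.Module):
    """Per-pixel MLP (1x1 convs) mapping stacked diffusion activations
    to a single-channel edge-probability map."""

    def __init__(self, in_channels: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(feats))

def sketch_guidance_grad(latent, feats_fn, predictor, target_edges, scale=1.0):
    """One guidance step: move the latent down the gradient of the
    edge-matching loss so synthesized edges follow the reference sketch.
    feats_fn stands in for extracting internal U-Net activations."""
    latent = latent.detach().requires_grad_(True)
    feats = feats_fn(latent)                     # internal activations
    pred = predictor(feats)                      # predicted edge map
    loss = nn.functional.mse_loss(pred, target_edges)
    (grad,) = torch.autograd.grad(loss, latent)
    return latent - scale * grad                 # guided latent update
```

At each denoising step, this per-step correction is applied on top of the usual sampler update, which is why the base model needs no retraining.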
Pretrained LGP Weights: https://huggingface.co/nyanko7/sketch2img-edge-predictor-train/blob/main/edge_predictor.pt