Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
GLIDE: a diffusion-based text-conditional image synthesis model
https://github.com/openai/glide-text2im
- Host: GitHub
- URL: https://github.com/openai/glide-text2im
- Owner: openai
- License: mit
- Created: 2021-12-10T19:20:23.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-03-08T08:44:28.000Z (11 months ago)
- Last Synced: 2025-01-10T05:06:16.014Z (9 days ago)
- Language: Python
- Homepage:
- Size: 1.95 MB
- Stars: 3,569
- Watchers: 162
- Forks: 507
- Open Issues: 28
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-diffusion-categorized
- awesome - openai/glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model (Python)
- my-awesome - openai/glide-text2im - 03 star:3.6k fork:0.5k GLIDE: a diffusion-based text-conditional image synthesis model (Python)
- StarryDivineSky - openai/glide-text2im
README
# GLIDE
This is the official codebase for running the small, filtered-data GLIDE model from [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741).
For details on the pre-trained models in this repository, see the [Model Card](model-card.md).
# Usage
To install this package, clone this repository and then run:
```
pip install -e .
```

For detailed usage examples, see the [notebooks](notebooks) directory.
* The [text2im](notebooks/text2im.ipynb) [![][colab]][colab-text2im] notebook shows how to use GLIDE (filtered) with classifier-free guidance to produce images conditioned on text prompts.
* The [inpaint](notebooks/inpaint.ipynb) [![][colab]][colab-inpaint] notebook shows how to use GLIDE (filtered) to fill in a masked region of an image, conditioned on a text prompt.
* The [clip_guided](notebooks/clip_guided.ipynb) [![][colab]][colab-guided] notebook shows how to use GLIDE (filtered) + a filtered noise-aware CLIP model to produce images conditioned on text prompts.

[colab]:
[colab-text2im]:
[colab-inpaint]:
[colab-guided]:
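
As a quick orientation (not a substitute for the notebooks), the sketch below condenses the sampling flow from the text2im notebook: load the filtered base model, encode a text prompt, and sample with classifier-free guidance. The module and function names (`create_model_and_diffusion`, `model_and_diffusion_defaults`, `load_checkpoint`, the 64x64 base `image_size`, and the example prompt) are taken from that notebook; verify any detail against the notebook itself before relying on it.

```
import torch as th

from glide_text2im.download import load_checkpoint
from glide_text2im.model_creation import (
    create_model_and_diffusion,
    model_and_diffusion_defaults,
)

has_cuda = th.cuda.is_available()
device = th.device("cuda" if has_cuda else "cpu")

# Create the small, filtered-data base model (64x64 outputs).
options = model_and_diffusion_defaults()
options["use_fp16"] = has_cuda
options["timestep_respacing"] = "100"  # fewer diffusion steps for faster sampling
model, diffusion = create_model_and_diffusion(**options)
model.eval()
if has_cuda:
    model.convert_to_fp16()
model.to(device)
model.load_state_dict(load_checkpoint("base", device))

# Encode the prompt, plus an empty (unconditional) prompt for
# classifier-free guidance.
prompt = "an oil painting of a corgi"  # example prompt; substitute your own
batch_size = 1
guidance_scale = 3.0

tokens = model.tokenizer.encode(prompt)
tokens, mask = model.tokenizer.padded_tokens_and_mask(tokens, options["text_ctx"])
uncond_tokens, uncond_mask = model.tokenizer.padded_tokens_and_mask([], options["text_ctx"])

model_kwargs = dict(
    tokens=th.tensor(
        [tokens] * batch_size + [uncond_tokens] * batch_size, device=device
    ),
    mask=th.tensor(
        [mask] * batch_size + [uncond_mask] * batch_size,
        dtype=th.bool,
        device=device,
    ),
)

def model_fn(x_t, ts, **kwargs):
    # Run the conditional and unconditional halves in one batch, then mix the
    # predicted noise: eps = uncond_eps + scale * (cond_eps - uncond_eps).
    half = x_t[: len(x_t) // 2]
    combined = th.cat([half, half], dim=0)
    model_out = model(combined, ts, **kwargs)
    eps, rest = model_out[:, :3], model_out[:, 3:]
    cond_eps, uncond_eps = th.split(eps, len(eps) // 2, dim=0)
    half_eps = uncond_eps + guidance_scale * (cond_eps - uncond_eps)
    eps = th.cat([half_eps, half_eps], dim=0)
    return th.cat([eps, rest], dim=1)

# Sample 64x64 images; values are in [-1, 1] and can be rescaled for display.
samples = diffusion.p_sample_loop(
    model_fn,
    (batch_size * 2, 3, options["image_size"], options["image_size"]),
    device=device,
    clip_denoised=True,
    progress=True,
    model_kwargs=model_kwargs,
    cond_fn=None,
)[:batch_size]
```

The text2im notebook additionally loads the upsampler checkpoint to raise these 64x64 samples to 256x256, and the inpaint and clip_guided notebooks follow the same load-encode-sample pattern with their own conditioning; see the notebooks for the complete, runnable versions.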