Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dvirsamuel/SeedSelect
Code for our papers : "Generating images of rare concepts using pre-trained diffusion models" (AAAI 24) and "Norm-guided latent space exploration for text-to-image generation" (Neurips 23)
- Host: GitHub
- URL: https://github.com/dvirsamuel/SeedSelect
- Owner: dvirsamuel
- Created: 2023-04-28T06:38:55.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-12-27T08:14:09.000Z (11 months ago)
- Last Synced: 2024-08-01T18:37:31.055Z (3 months ago)
- Language: Python
- Homepage:
- Size: 8.5 MB
- Stars: 65
- Watchers: 6
- Forks: 2
- Open Issues: 2
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# Code for *Seed Exploration in Text-to-Image Diffusion Models*
## Generating images of rare concepts using pre-trained diffusion models (AAAI 24)
> Dvir Samuel, Rami Ben-Ari, Simon Raviv, Nir Darshan, Gal Chechik
> Bar-Ilan University, OriginAI, NVIDIA Research
>
> Text-to-image diffusion models can synthesize high-quality images, but they have various limitations. Here we highlight a common failure mode of these models, namely, generating uncommon concepts and structured concepts like hand palms. We show that their limitation is partly due to the long-tail nature of their training data: web-crawled data sets are strongly unbalanced, causing models to under-represent concepts from the tail of the distribution. We characterize the effect of unbalanced training data on text-to-image models and offer a remedy. We show that rare concepts can be correctly generated by carefully selecting suitable generation seeds in the noise space, using a small reference set of images, a technique that we call SeedSelect. SeedSelect does not require retraining or finetuning the diffusion model. We assess the faithfulness, quality and diversity of SeedSelect in creating rare objects and generating complex formations like hand images, and find it consistently achieves superior performance. We further show the advantage of SeedSelect in semantic data augmentation. Generating semantically appropriate images can successfully improve performance in few-shot recognition benchmarks, for classes from the head and from the tail of the training data of diffusion models.
Text-to-image diffusion models fail to generate images of rare concepts.
Our novel method, SeedSelect, improves generation of uncommon and ill-formed concepts. It operates by learning a generation seed from just a few training samples.

### Setup
#### Hugging Face Diffusers Library
Our code relies on Hugging Face's [diffusers](https://github.com/huggingface/diffusers) library for downloading the Stable Diffusion v2.1-base model.
See the [Stable Diffusion v2-1-base Model Card](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) for more details.

### Usage
Example generations produced by Stable Diffusion vs. SeedSelect.

To generate an image, run the `main.py` script:

```
python main.py <method> <prompt> <dir>
```

- **method:** _StableDiffusion_ or _SeedSelect_
- **prompt:** the name of the rare concept.
- **dir:** path to the directory of reference images.

For example, run the following command to optimize the seed for the rare concept "tiger cat":
```
python main.py SeedSelect tiger_cat imgs/tiger_cat
```

Note that SeedSelect might be sensitive to the provided images and hyperparameters. Furthermore, it might take more than 30 iterations (~5 minutes) to converge.
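Conceptually, the seed optimization can be pictured with the toy sketch below. This is *not* the repo's implementation: the real method backpropagates through the frozen diffusion model to match a few reference images, while here a stand-in linear generator keeps the sketch self-contained and runnable.

```
import torch

torch.manual_seed(0)
d = 16
generator = torch.nn.Linear(d, d)        # stand-in for the frozen diffusion model
for p in generator.parameters():
    p.requires_grad_(False)              # the model is never retrained or finetuned

reference = torch.randn(d)               # stand-in for reference-image features
z = torch.randn(d, requires_grad=True)   # the generation seed being optimized
opt = torch.optim.Adam([z], lr=0.1)

losses = []
for _ in range(30):                      # roughly the iteration budget noted above
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(generator(z), reference)
    loss.backward()
    opt.step()
    losses.append(loss.item())

# The loss shrinks: only the seed moved, the generator stayed frozen.
print(losses[0], losses[-1])
```

The key point the sketch preserves is that gradients flow only into the seed `z`, never into the model weights.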
For faster convergence, use Norm-Aware Optimization for SeedSelect, described below.

### Distribution of concepts in LAION2B
Pre-computed concept frequencies of LAION2B-en can be found [here](https://biu365-my.sharepoint.com/:f:/g/personal/samueld_biu_ac_il/EpDy6A-591xFuqxkSQyyK4oBzQRRtAeOOFeQudyhgb62fw?e=PQPiZ3).

## Norm-guided latent space exploration for text-to-image generation (NeurIPS 23)
> Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik
> Bar-Ilan University, OriginAI, NVIDIA Research
>
> Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios. However, their latent seed space is still not well understood and has been shown to have an impact in generating new and rare concepts. Specifically, simple operations like interpolation and centroid finding work poorly with the standard Euclidean and spherical metrics in the latent space. This paper makes the observation that current training procedures make diffusion models biased toward inputs with a narrow range of norm values. This has strong implications for methods that rely on seed manipulation for image generation that can be further applied to few-shot and long-tail learning tasks. To address this issue, we propose a novel method for interpolating between two seeds and demonstrate that it defines a new non-Euclidean metric that takes into account a norm-based prior on seeds. We describe a simple yet efficient algorithm for approximating this metric and use it to further define centroids in the latent seed space. We show that our new interpolation and centroid evaluation techniques significantly enhance the generation of rare concept images. This further leads to state-of-the-art performance on few-shot and long-tail benchmarks, improving prior approach in terms of generation speed, image quality, and semantic content.
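The norm bias described in the abstract is easy to verify numerically: i.i.d. Gaussian seeds concentrate tightly on a sphere of radius √d, while the plain Euclidean (LERP) midpoint of two independent seeds collapses toward norm √(d/2), outside the narrow range the model was trained on. A quick sketch (illustrative only, not the paper's code; the 4×64×64 shape is just an example latent size):

```
import numpy as np

rng = np.random.default_rng(0)
d = 4 * 64 * 64                          # e.g. a 4x64x64 latent seed

z0 = rng.standard_normal(d)
z1 = rng.standard_normal(d)

# Gaussian seeds concentrate tightly around norm sqrt(d) ...
print(np.linalg.norm(z0) / np.sqrt(d))   # ~1.0

# ... but their Euclidean midpoint collapses toward norm sqrt(d/2)
mid = 0.5 * (z0 + z1)
print(np.linalg.norm(mid) / np.sqrt(d))  # ~0.707
```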
We propose a novel method, called **NAO (Norm-Aware Optimization)** for interpolating between two seeds. In contrast to Linear
Interpolation (LERP) or Spherical Linear Interpolation (SLERP), we formulate interpolation
as finding a likelihood-maximizing path in seed space according to the aforementioned prior.

### Usage
NAO gives a better initialization point to SeedSelect, yielding significantly faster convergence without sacrificing accuracy or image quality.
Rare concepts can be generated in less than 5 optimization steps (~10 sec on a single A100 GPU).

Run the following command to optimize the seed for the rare concept "tiger cat":
```
python main.py NAO_SeedSelect tiger_cat imgs/tiger_cat
```

### Citation
If you use this code for your research, please cite the following work:
```
@inproceedings{Samuel2023SeedSelect,
title={Generating images of rare concepts using pre-trained diffusion models},
author={Dvir Samuel and Rami Ben-Ari and Simon Raviv and Nir Darshan and Gal Chechik},
year={2024},
booktitle={AAAI}
}

@inproceedings{Samuel2023NAO,
title={Norm-guided latent space exploration for text-to-image generation},
author={Dvir Samuel and Rami Ben-Ari and Nir Darshan and Haggai Maron and Gal Chechik},
year={2023},
booktitle={NeurIPS}
}
```