Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
https://github.com/adobe-research/custom-diffusion
- Host: GitHub
- URL: https://github.com/adobe-research/custom-diffusion
- Owner: adobe-research
- License: other
- Created: 2022-12-08T19:18:41.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-12-20T12:00:48.000Z (about 1 year ago)
- Last Synced: 2024-12-14T10:02:44.522Z (7 days ago)
- Topics: computer-vision, customization, diffusion-models, few-shot, fine-tuning, pytorch, text-to-image-generation
- Language: Python
- Homepage: https://www.cs.cmu.edu/~custom-diffusion
- Size: 60.5 MB
- Stars: 1,876
- Watchers: 33
- Forks: 139
- Open Issues: 52
Metadata Files:
- Readme: README.md
- License: LICENSE.md
Awesome Lists containing this project
- awesome-diffusion-categorized
- StarryDivineSky - adobe-research/custom-diffusion - Fine-tune text-to-image diffusion models such as Stable Diffusion. Our method is fast (~6 minutes on 2 A100 GPUs) because it fine-tunes only a subset of model parameters in the cross-attention layers, namely the key and value projection matrices. This also reduces the extra storage per additional concept to 75 MB. Our method further allows combining multiple concepts, such as new object + new artistic style, multiple new objects, and new object + new category. (Other_Machine Vision / Web Services_Other)
README
# Custom Diffusion
### [website](https://www.cs.cmu.edu/~custom-diffusion/) | [paper](http://arxiv.org/abs/2212.04488)
**[NEW!]** Custom Diffusion is now supported in diffusers. Please refer [here](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion) for training and inference details.
**[NEW!]** CustomConcept101 dataset. We release a new dataset of 101 concepts along with their evaluation prompts. For more details please refer [here](customconcept101/README.md).
**[NEW!]** Custom Diffusion with SDXL. The diffusers training code is now updated to diffusers==0.21.4.
[Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion) allows you to fine-tune text-to-image diffusion models, such as [Stable Diffusion](https://github.com/CompVis/stable-diffusion), given a few images of a new concept (~4-20). Our method is fast (~6 minutes on 2 A100 GPUs) as it fine-tunes only a subset of model parameters, namely key and value projection matrices, in the cross-attention layers. This also reduces the extra storage for each additional concept to 75MB.
Our method further allows you to use a combination of multiple concepts such as new object + new artistic style, multiple new objects, and new object + new category. See [multi-concept results](#multi-concept-results) for more visual results.
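To make the parameter selection concrete, the sketch below freezes a Stable Diffusion UNet and re-enables gradients only for the key/value projections of its cross-attention layers. It assumes a diffusers-style `UNet2DConditionModel` (where cross-attention modules are named `attn2` with `to_k`/`to_v` projections) and is illustrative only, not the repository's training code.
```
# Illustrative only: train just the cross-attention key/value projections.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)

# Freeze everything, then unfreeze only the K/V projections of the
# cross-attention ("attn2") layers, the subset Custom Diffusion updates.
unet.requires_grad_(False)
trainable_params = []
for name, module in unet.named_modules():
    if "attn2" in name and name.endswith(("to_k", "to_v")):
        module.requires_grad_(True)
        trainable_params += list(module.parameters())

optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)
print(f"trainable parameters: {sum(p.numel() for p in trainable_params):,}")
```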
***Multi-Concept Customization of Text-to-Image Diffusion***
[Nupur Kumari](https://nupurkmr9.github.io/), [Bingliang Zhang](https://zhangbingliang2019.github.io), [Richard Zhang](https://richzhang.github.io/), [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/), [Jun-Yan Zhu](https://www.cs.cmu.edu/~junyanz/)
In CVPR 2023

## Results
All our results are based on fine-tuning the [stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) model.
We show results on various categories of images, including scene, pet, personal toy, and style, and with a varying number of training samples.
For more generations and comparisons with concurrent methods, please refer to our [webpage](https://www.cs.cmu.edu/~custom-diffusion/) and [gallery](https://www.cs.cmu.edu/~custom-diffusion/results.html).

### Single-Concept Results
### Multi-Concept Results
## Method Details
Given the few user-provided images of a concept, our method augments a pre-trained text-to-image diffusion model, enabling new generations of the concept in unseen contexts.
We fine-tune a small subset of model weights, namely the key and value mapping from text to latent features in the cross-attention layers of the diffusion model.
Our method also uses a small set of regularization images (200) to prevent overfitting. For personal categories, we add a new modifier token V* in front of the category name, e.g., V* dog. For multiple concepts, we jointly train on the dataset for the two concepts. Our method also enables the merging of two fine-tuned models using optimization. For more details, please refer to our [paper](https://arxiv.org/abs/2212.04488).

## Getting Started
```
git clone https://github.com/adobe-research/custom-diffusion.git
cd custom-diffusion
git clone https://github.com/CompVis/stable-diffusion.git
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
pip install clip-retrieval tqdm
```
Our code was developed on the following commit `#21f890f9da3cfbeaba8e2ac3c425ee9e998d5229` of [stable-diffusion](https://github.com/CompVis/stable-diffusion).
Download the stable-diffusion model checkpoint
`wget https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt`
For more details, please refer [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).

**Dataset:** We release some of the datasets used in the paper [here](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip).
Images taken from Unsplash are under the [Unsplash license](https://unsplash.com/license). The Moongate dataset can be downloaded from [here](https://github.com/odegeasslbc/FastGAN-pytorch).

**Models:** All our models can be downloaded from [here](https://www.cs.cmu.edu/~custom-diffusion/assets/models/).
### Single-Concept Fine-tuning
**Real images as regularization**
```
## download dataset
wget https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip
unzip data.zip

## run training (30 GB on 2 GPUs)
bash scripts/finetune_real.sh "cat" data/cat real_reg/samples_cat cat finetune_addtoken.yaml

## save updated model weights
python src/get_deltas.py --path logs/<folder-name> --newtoken 1

## sample
python sample.py --prompt "<new1> cat playing with a ball" --delta_ckpt logs/<folder-name>/checkpoints/delta_epoch\=000004.ckpt --ckpt <pretrained-model-path>
```
The `<pretrained-model-path>` is the path to the pretrained `sd-v1-4.ckpt` model. Our results in the paper are not based on [clip-retrieval](https://github.com/rom1504/clip-retrieval) for retrieving the real regularization images, but using it leads to similar results.
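For context, the generated-regularization variant in the next subsection builds the ~200-image regularization set by simply sampling the broad category from the unmodified base model. A rough, hypothetical diffusers sketch of that step (the repository's scripts do this through the CompVis codebase instead; paths and counts here are illustrative):
```
# Hypothetical sketch: create ~200 regularization images of the broad
# category ("cat") with the unmodified base model.
import os
import torch
from diffusers import StableDiffusionPipeline

os.makedirs("gen_reg/samples_cat", exist_ok=True)
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

for i in range(200):
    image = pipe("photo of a cat", num_inference_steps=50).images[0]
    image.save(f"gen_reg/samples_cat/{i:03d}.jpg")
```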
**Generated images as regularization**
```
bash scripts/finetune_gen.sh "cat" data/cat gen_reg/samples_cat cat finetune_addtoken.yaml
```

### Multi-Concept Fine-tuning
**Joint training**
```
## run training (30 GB on 2 GPUs)
bash scripts/finetune_joint.sh "wooden pot" data/wooden_pot real_reg/samples_wooden_pot \
"cat" data/cat real_reg/samples_cat \
wooden_pot+cat finetune_joint.yaml

## save updated model weights
python src/get_deltas.py --path logs/<folder-name> --newtoken 2

## sample
python sample.py --prompt "the <new2> cat sculpture in the style of a <new1> wooden pot" --delta_ckpt logs/<folder-name>/checkpoints/delta_epoch\=000004.ckpt --ckpt <pretrained-model-path>
```

**Optimization based weights merging**
Given two fine-tuned model weights `delta_ckpt1` and `delta_ckpt2` for any two categories, the weights can be merged to create a single model as shown below.
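Conceptually, the merge looks for a single set of key/value weights that reproduces each concept's fine-tuned behaviour on that concept's own text features (the actual `src/composenW.py` objective additionally keeps the merged weights close to the pretrained model on regularization captions). A toy sketch of that idea, with hypothetical tensor names:
```
# Toy illustration only (not src/composenW.py): merge two fine-tuned
# projection matrices so the result maps each concept's text features
# the way its own fine-tuned model does.
import torch

def merge_kv(W1, W2, C1, C2):
    """W1, W2: (out, in) fine-tuned K or V weights for concepts 1 and 2.
    C1, C2: (n1, in), (n2, in) text features of each concept's prompts."""
    C = torch.cat([C1, C2], dim=0)                # stacked constraints
    V = torch.cat([C1 @ W1.T, C2 @ W2.T], dim=0)  # target outputs per concept
    # Least-squares solve of C @ W_merged.T ≈ V.
    W_merged = torch.linalg.lstsq(C, V).solution.T
    return W_merged
```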
```
python src/composenW.py --paths <delta_ckpt1>+<delta_ckpt2> --categories "wooden pot+cat" --ckpt <pretrained-model-path>

## sample
python sample.py --prompt "the <new2> cat sculpture in the style of a <new1> wooden pot" --delta_ckpt optimized_logs/<folder-name>/checkpoints/delta_epoch\=000000.ckpt --ckpt <pretrained-model-path>
```

### Training using Diffusers library
**[NEW!]** Custom Diffusion is also supported in diffusers. Please refer [here](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion) for training and inference details.
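If you train with the Hugging Face diffusers example linked above (rather than this repo's `src/diffusers_training.py`), inference loads the saved attention processors and the learned token into a pipeline roughly as below. File and weight names follow that example's documented defaults; treat this as a sketch.
```
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# Load the Custom Diffusion cross-attention weights and the learned <new1> token.
pipe.unet.load_attn_procs("path-to-saved-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("path-to-saved-model", weight_name="<new1>.bin")

image = pipe("<new1> cat playing with a ball", num_inference_steps=100, guidance_scale=6.0).images[0]
image.save("cat.png")
```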
```
## install requirements
pip install "accelerate>=0.24.1"
pip install modelcards
pip install "transformers>=4.31.0"
pip install deepspeed
pip install diffusers==0.21.4
accelerate config
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
```

**Single-Concept fine-tuning**
```
## launch training script (2 GPUs recommended, increase --max_train_steps to 500 if 1 GPU)
accelerate launch src/diffusers_training.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=./data/cat \
--class_data_dir=./real_reg/samples_cat/ \
--output_dir=./logs/cat \
--with_prior_preservation --real_prior --prior_loss_weight=1.0 \
--instance_prompt="photo of a <new1> cat" \
--class_prompt="cat" \
--resolution=512 \
--train_batch_size=2 \
--learning_rate=1e-5 \
--lr_warmup_steps=0 \
--max_train_steps=250 \
--num_class_images=200 \
--scale_lr --hflip \
--modifier_token ""## sample
python src/diffusers_sample.py --delta_ckpt logs/cat/delta.bin --ckpt "CompVis/stable-diffusion-v1-4" --prompt " cat playing with a ball"
```
You can also use `--enable_xformers_memory_efficient_attention` and enable `fp16` during `accelerate config` for faster training with lower VRAM requirements. To train with SDXL, use `diffusers_training_sdxl.py` with `MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"`.
**Multi-Concept fine-tuning**
Provide a [JSON](assets/concept_list.json) file with the information about each concept, similar to the concepts list used in [this DreamBooth training script](https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/train_dreambooth.py).
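For reference, the entries are expected to look roughly like the following. The field names here are assumptions based on the DreamBooth-style concepts list; check `assets/concept_list.json` in the repo for the authoritative format.
```
# Hypothetical sketch of assets/concept_list.json written from Python;
# field names are assumptions based on the DreamBooth-style concepts list.
import json

concepts_list = [
    {
        "instance_prompt": "photo of a <new1> cat",
        "class_prompt": "cat",
        "instance_data_dir": "./data/cat",
        "class_data_dir": "./real_reg/samples_cat/",
    },
    {
        "instance_prompt": "photo of a <new2> wooden pot",
        "class_prompt": "wooden pot",
        "instance_data_dir": "./data/wooden_pot",
        "class_data_dir": "./real_reg/samples_wooden_pot/",
    },
]
with open("assets/concept_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```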
```
## launch training script (2 GPUs recommended, increase --max_train_steps to 1000 if 1 GPU)
accelerate launch src/diffusers_training.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--output_dir=./logs/cat_wooden_pot \
--concepts_list=./assets/concept_list.json \
--with_prior_preservation --real_prior --prior_loss_weight=1.0 \
--resolution=512 \
--train_batch_size=2 \
--learning_rate=1e-5 \
--lr_warmup_steps=0 \
--max_train_steps=500 \
--num_class_images=200 \
--scale_lr --hflip \
--modifier_token "+"## sample
python src/diffusers_sample.py --delta_ckpt logs/cat_wooden_pot/delta.bin --ckpt "CompVis/stable-diffusion-v1-4" --prompt " cat sitting inside a wooden pot and looking up"
```

**Optimization based weights merging for Multi-Concept**
Given two fine-tuned model weights `delta1.bin` and `delta2.bin` for any two categories, the weights can be merged to create a single model as shown below.
```
python src/diffusers_composenW.py --paths <delta1.bin-path>+<delta2.bin-path> --categories "wooden pot+cat" --ckpt "CompVis/stable-diffusion-v1-4"

## sample
python src/diffusers_sample.py --delta_ckpt optimized_logs/<folder-name>/delta.bin --ckpt "CompVis/stable-diffusion-v1-4" --prompt "<new1> cat sitting inside a <new2> wooden pot and looking up"
```
The diffusers training code is modified from the [DreamBooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) and [Textual Inversion](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py) training scripts. For more details on how to set up accelerate, please refer [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth).
### Fine-tuning on human faces
For fine-tuning on human faces, we recommend `learning_rate=5e-6` and `max_train_steps=750` in the above diffusers training script, or using the `finetune_face.yaml` config with the stable-diffusion training script.
We observe better results with a lower learning rate, longer training, and more images for human faces compared to other categories shown in our paper. With fewer images, fine-tuning all parameters in the cross-attention is slightly better, which can be enabled with `--freeze_model "crossattn"`.
Example results on fine-tuning with 14 close-up photos of [Richard Zhang](https://richzhang.github.io/) with the diffusers training script.
### Model compression
```
python src/compress.py --delta_ckpt <finetuned-delta-path> --ckpt <pretrained-model-path>

## sample
python sample.py --prompt "<new1> cat playing with a ball" --delta_ckpt logs/<folder-name>/checkpoints/compressed_delta_epoch\=000004.ckpt --ckpt <pretrained-model-path> --compress
```
Sample generations with different levels of compression. By default, our code saves the low-rank approximation with the top 60% of singular values, resulting in ~15 MB models.
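The idea behind the compression is a low-rank factorization of each fine-tuned weight difference. A toy sketch follows (not `src/compress.py`; interpreting the 60% threshold as a cumulative singular-value cutoff is an assumption):
```
# Toy sketch only: low-rank compression of a fine-tuned weight delta via SVD.
import torch

def compress_delta(delta, keep=0.6):
    """delta: (out, in) difference between fine-tuned and pretrained weights."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    cum = torch.cumsum(S, dim=0) / S.sum()
    k = int((cum < keep).sum().item()) + 1   # smallest rank covering `keep`
    A = U[:, :k] * S[:k]                     # (out, k)
    B = Vh[:k]                               # (k, in)
    return A, B                              # store two thin factors

# At load time the delta is approximately reconstructed as A @ B.
```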
### Checkpoint conversions for stable-diffusion-v1-4
* From diffusers `delta.bin` to CompVis `delta_model.ckpt`.
```
python src/convert.py --delta_ckpt <path-to-folder>/delta.bin --ckpt <path-to-model-v1-4.ckpt> --mode diffuser-to-compvis

# sample
python sample.py --delta_ckpt <path-to-folder>/delta_model.ckpt --ckpt <path-to-model-v1-4.ckpt> --prompt <text-prompt> --config configs/custom-diffusion/finetune_addtoken.yaml
```
* From diffusers `delta.bin` to a [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) checkpoint.
```
python src/convert.py --delta_ckpt <path-to-folder>/delta.bin --ckpt <path-to-model-v1-4.ckpt> --mode diffuser-to-webui

# launch UI in stable-diffusion-webui directory
bash webui.sh --embeddings-dir <path-to-folder>/webui/embeddings --ckpt <path-to-folder>/webui/model.ckpt
```
* From CompVis `delta_model.ckpt` to diffusers `delta.bin`.
```
python src/convert.py --delta_ckpt <path-to-folder>/delta_model.ckpt --ckpt <path-to-model-v1-4.ckpt> --mode compvis-to-diffuser

# sample
python src/diffusers_sample.py --delta_ckpt <path-to-folder>/delta.bin --ckpt "CompVis/stable-diffusion-v1-4" --prompt <text-prompt>
```
* From CompVis `delta_model.ckpt` to a [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) checkpoint.
```
python src/convert.py --delta_ckpt <path-to-folder>/delta_model.ckpt --ckpt <path-to-model-v1-4.ckpt> --mode compvis-to-webui

# launch UI in stable-diffusion-webui directory
bash webui.sh --embeddings-dir <path-to-folder>/webui/embeddings --ckpt <path-to-folder>/webui/model.ckpt
```
Converted checkpoints are saved in the `<path-to-folder>` of the original checkpoints.

## References
```
@inproceedings{kumari2022customdiffusion,
title={Multi-Concept Customization of Text-to-Image Diffusion},
author={Kumari, Nupur and Zhang, Bingliang and Zhang, Richard and Shechtman, Eli and Zhu, Jun-Yan},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023}
}
```

## Acknowledgments
We are grateful to Nick Kolkin, David Bau, Sheng-Yu Wang, Gaurav Parmar, John Nack, and Sylvain Paris for their helpful comments and discussion, and to Allie Chang, Chen Wu, Sumith Kulal, Minguk Kang, Yotam Nitzan, and Taesung Park for proofreading the draft. We also thank Mia Tang and Aaron Hertzmann for sharing their artwork. Some of the datasets are downloaded from Unsplash. This work was partly done by Nupur Kumari during the Adobe internship. The work is partly supported by Adobe Inc.