# DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations (CVPR 2024)
_**[Tianhao Qi*](https://github.com/Tianhao-Qi/), [Shancheng Fang](https://tothebeginning.github.io/), [Yanze Wu✝](https://tothebeginning.github.io/), [Hongtao Xie✉](https://imcc.ustc.edu.cn/_upload/tpl/0d/13/3347/template3347/xiehongtao.html), [Jiawei Liu](https://scholar.google.com/citations?user=X21Fz-EAAAAJ&hl=en&authuser=1),
[Lang Chen](https://scholar.google.com/citations?user=h5xex20AAAAJ&hl=zh-CN), [Qian He](https://scholar.google.com/citations?view_op=list_works&hl=zh-CN&authuser=1&user=9rWWCgUAAAAJ), [Yongdong Zhang](https://scholar.google.com.hk/citations?user=hxGs4ukAAAAJ&hl=zh-CN)**_
(*Work done during an internship at ByteDance, ✝Project Lead, ✉Corresponding author)
From University of Science and Technology of China and ByteDance.
## 🔆 Introduction
**TL;DR:** We propose DEADiff, a generic method for synthesizing novel images that embody the style of a given reference image while adhering to text prompts.
### ⭐⭐ Stylized Text-to-Image Generation.
Stylized text-to-image results. Resolution: 512 x 512. (Compressed)
## 📝 Changelog
- __[2024.4.3]__: 🔥🔥 Release the inference code and pretrained checkpoint.
- __[2024.3.5]__: 🔥🔥 Release the project page.
## ⏳ TODO
- [x] Release the inference code.
- [ ] Release training data.
## ⚙️ Setup
```bash
conda create -n deadiff python=3.9.2
conda activate deadiff
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install git+https://github.com/salesforce/LAVIS.git@20230801-blip-diffusion-edit
pip install -r requirements.txt
pip install -e .
```
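After installation, it can help to sanity-check that the key packages are importable from the new environment before moving on. The snippet below is a minimal sketch (not part of the official repo); the package names `torch`, `torchvision`, and `lavis` correspond to the installs above, and the helper name `check_env` is our own:

```python
# Quick environment sanity check (a sketch; run inside the activated
# `deadiff` conda environment). Uses importlib so nothing heavy is imported.
import importlib.util
import sys


def check_env(required=("torch", "torchvision", "lavis")):
    """Return (found, missing) sets of the required package names."""
    found = {name for name in required if importlib.util.find_spec(name) is not None}
    missing = set(required) - found
    return found, missing


if __name__ == "__main__":
    found, missing = check_env()
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
    if missing:
        print("Missing packages:", ", ".join(sorted(missing)))
    else:
        print("All required packages are importable.")
```

If anything is reported missing, re-run the corresponding `pip install` / `conda install` step above.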
## 💫 Inference
1) Download the pretrained model from [Hugging Face](https://huggingface.co/qth/DEADiff/tree/main) and place it under `./pretrained/`.
2) Run the following command in a terminal.
```bash
python3 scripts/app.py
```
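Before launching the app, you can verify that a checkpoint is actually present under `./pretrained/`. The sketch below is illustrative only (the helper name `find_checkpoints` and the set of file extensions are our assumptions, not part of the repo):

```python
# Pre-flight check: confirm a checkpoint file exists under ./pretrained/
# before running `python3 scripts/app.py` (a minimal sketch).
from pathlib import Path


def find_checkpoints(pretrained_dir="./pretrained"):
    """Return checkpoint-like files found directly under pretrained_dir."""
    root = Path(pretrained_dir)
    if not root.is_dir():
        return []
    exts = {".ckpt", ".pt", ".pth", ".safetensors"}
    return sorted(p for p in root.iterdir() if p.suffix in exts)


if __name__ == "__main__":
    ckpts = find_checkpoints()
    if not ckpts:
        print("No checkpoint found under ./pretrained/ -- download it first.")
    else:
        print("Ready:", ", ".join(p.name for p in ckpts))
```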
The Gradio app lets you transfer the style of a reference image onto images generated from your own prompts; try it out to explore the available options.
Example prompts (result images are shown in the repository):
- "A curly-haired boy"
- "A robot"
- "A motorcycle"
## 📢 Disclaimer
This repository is developed for RESEARCH purposes only; it may be used solely for personal, research, and other non-commercial purposes.
## ✈️ Citation
```bibtex
@article{qi2024deadiff,
title={DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations},
author={Qi, Tianhao and Fang, Shancheng and Wu, Yanze and Xie, Hongtao and Liu, Jiawei and Chen, Lang and He, Qian and Zhang, Yongdong},
journal={arXiv preprint arXiv:2403.06951},
year={2024}
}
```
## 📭 Contact
If you have any comments or questions, feel free to contact [qth@mail.ustc.edu.cn](mailto:qth@mail.ustc.edu.cn).