[CVPR2024, Highlight] Official code for DragDiffusion
- Host: GitHub
- URL: https://github.com/Yujun-Shi/DragDiffusion
- Owner: Yujun-Shi
- License: apache-2.0
- Created: 2023-07-08T06:11:13.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-29T13:25:43.000Z (10 months ago)
- Last Synced: 2024-10-15T21:42:18.790Z (28 days ago)
- Topics: artificial-intelligence, cvpr2024, diffusion-models, dragdiffusion, draggan, image-editing
- Language: Python
- Homepage: https://yujun-shi.github.io/projects/dragdiffusion.html
- Size: 25.3 MB
- Stars: 1,147
- Watchers: 26
- Forks: 85
- Open Issues: 33
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing
Yujun Shi, Chuhui Xue, Jun Hao Liew, Jiachun Pan, Hanshu Yan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai
## Disclaimer
This is a research project, NOT a commercial product. Users are free to create images with this tool, but they are expected to comply with local laws and to use it responsibly. The developers do not assume any responsibility for potential misuse by users.

## News and Update
* [Jan 29th] Updated to support diffusers==0.24.0!
* [Oct 23rd] Code and data of DragBench are released! Please check the README under "drag_bench_evaluation" for details.
* [Oct 16th] Integrated [FreeU](https://chenyangsi.top/FreeU/) when dragging generated images.
* [Oct 3rd] Sped up LoRA training when editing real images. (**Now only around 20s on an A100!**)
* [Sept 3rd] v0.1.0 Release.
  * Enabled **dragging diffusion-generated images.**
  * Introduced a new guidance mechanism that **greatly improves the quality of dragging results.** (Inspired by [MasaCtrl](https://ljzycmd.github.io/projects/MasaCtrl/))
  * Enabled dragging images with arbitrary aspect ratios.
  * Added support for DPM-Solver++ (generated images).
* [July 18th] v0.0.1 Release.
  * Integrated LoRA training into the user interface. **No need to use a training script; everything can be conveniently done in the UI!**
  * Optimized the user interface layout.
  * Enabled using a better VAE for eyes and faces (see [this](https://stable-diffusion-art.com/how-to-use-vae/)).
* [July 8th] v0.0.0 Release.
  * Implemented the basic functions of DragDiffusion.

## Installation
It is recommended to run our code on an NVIDIA GPU under Linux; we have not yet tested other configurations. Currently, our method requires around 14 GB of GPU memory to run. We will continue to optimize memory efficiency.
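A quick way to verify this requirement is a short PyTorch check (a minimal sketch, assuming `torch` is available, e.g., from the environment installed below):

```python
# Minimal pre-flight check: report the total memory of the first CUDA device.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB total GPU memory")
    if total_gb < 14:
        print("Warning: DragDiffusion needs around 14 GB of GPU memory.")
else:
    print("No CUDA device found; an NVIDIA GPU on Linux is recommended.")
```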
To install the required libraries, simply run the following commands:
```
conda env create -f environment.yaml
conda activate dragdiff
```

## Run DragDiffusion
To start, run the following in the command line to launch the Gradio user interface:
```
python3 drag_ui.py
```
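Under the hood this is a standard Gradio app. A hypothetical minimal skeleton (not the actual `drag_ui.py`, which builds a much richer interface) looks like:

```python
# Hypothetical minimal Gradio skeleton, only to illustrate how such a UI
# is served; the real drag_ui.py defines the full DragDiffusion interface.
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("DragDiffusion UI placeholder")

demo.launch()  # serves at http://127.0.0.1:7860 by default
```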
You may check our [GIF above](https://github.com/Yujun-Shi/DragDiffusion/blob/main/release-doc/asset/github_video.gif), which demonstrates the usage of the UI step by step. Basically, it consists of the following steps:
### Case 1: Dragging Input Real Images
#### 1) train a LoRA
* Drop your input image into the left-most box.
* Input a prompt describing the image in the "prompt" field.
* Click the "Train LoRA" button to train a LoRA given the input image (see the sketch below for what this step does conceptually).
#### 2) do "drag" editing
* Draw a mask in the left-most box to specify the editable areas.
* Click handle and target points in the middle box. Also, you may reset all points by clicking "Undo point".
* Click the "Run" button to run our algorithm. Edited results will be displayed in the right-most box.### Case 2: Dragging Diffusion-Generated Images
### Case 2: Dragging Diffusion-Generated Images
#### 1) generate an image
* Fill in the generation parameters (e.g., positive/negative prompt, parameters under Generation Config & FreeU Parameters).
* Click "Generate Image" (a sketch of this step is shown after this list).
#### 2) do "drag" on the generated image
* Draw a mask in the left-most box to specify the editable areas.
* Click handle points and target points in the middle box.
* Click the "Run" button to run our algorithm. Edited results will be displayed in the right-most box.## License
Code related to the DragDiffusion algorithm is under the Apache 2.0 license.

## BibTeX
If you find our repo helpful, please consider leaving a star or citing our paper :)
```bibtex
@article{shi2023dragdiffusion,
title={DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing},
author={Shi, Yujun and Xue, Chuhui and Pan, Jiachun and Zhang, Wenqing and Tan, Vincent YF and Bai, Song},
journal={arXiv preprint arXiv:2306.14435},
year={2023}
}
```

## Contact
For any questions on this project, please contact [Yujun](https://yujun-shi.github.io/) ([email protected]).

## Acknowledgement
This work is inspired by the amazing [DragGAN](https://vcai.mpi-inf.mpg.de/projects/DragGAN/). The LoRA training code is modified from an [example](https://github.com/huggingface/diffusers/blob/v0.17.1/examples/dreambooth/train_dreambooth_lora.py) in diffusers. Image samples are collected from [Unsplash](https://unsplash.com/), [Pexels](https://www.pexels.com/zh-cn/), and [Pixabay](https://pixabay.com/). Finally, a huge shout-out to all the amazing open source diffusion models and libraries.

## Related Links
* [Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold](https://vcai.mpi-inf.mpg.de/projects/DragGAN/)
* [MasaCtrl: Tuning-free Mutual Self-Attention Control for Consistent Image Synthesis and Editing](https://ljzycmd.github.io/projects/MasaCtrl/)
* [Emergent Correspondence from Image Diffusion](https://diffusionfeatures.github.io/)
* [DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models](https://mc-e.github.io/project/DragonDiffusion/)
* [FreeDrag: Point Tracking is Not You Need for Interactive Point-based Image Editing](https://lin-chen.site/projects/freedrag/)

## Common Issues and Solutions
1) For users struggling to load models from Hugging Face due to internet constraints, please: 1) follow this [link](https://zhuanlan.zhihu.com/p/475260268) and download the model into the directory "local_pretrained_models"; 2) run "drag_ui.py" and select the directory of your pretrained model in "Algorithm Parameters -> Base Model Config -> Diffusion Model Path".
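As a sanity check that the download worked, diffusers can load such a local directory directly (a minimal sketch; the sub-directory name below is illustrative):

```python
# Load a locally downloaded Stable Diffusion checkpoint without network access.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./local_pretrained_models/stable-diffusion-v1-5"  # illustrative path
)
```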