https://github.com/smthemex/ComfyUI_PhotoDoodle
PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data, usable in ComfyUI.
- Host: GitHub
- URL: https://github.com/smthemex/ComfyUI_PhotoDoodle
- Owner: smthemex
- License: MIT
- Created: 2025-02-27T05:45:15.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2025-02-27T05:50:02.000Z (about 2 months ago)
- Last Synced: 2025-02-27T06:48:37.366Z (about 2 months ago)
- Size: 273 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# PhotoDoodle
[PhotoDoodle](https://github.com/showlab/PhotoDoodle) is a method for "Learning Artistic Image Editing from Few-Shot Pairwise Data"; you can use it in ComfyUI.

# 1. Installation
-----
In the ./ComfyUI/custom_nodes directory, run the following:
```
git clone https://github.com/smthemex/ComfyUI_PhotoDoodle
```
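ComfyUI should pick up the node pack on the next restart. If you want to sanity-check that the clone landed where ComfyUI looks for custom nodes, a minimal sketch (the path is an assumption relative to your ComfyUI root; adjust if yours differs):

```python
import os

# Expected location of the node pack after the git clone above; this path
# is an assumption relative to the ComfyUI root, not verified by the repo.
node_dir = os.path.join("ComfyUI", "custom_nodes", "ComfyUI_PhotoDoodle")

if os.path.isdir(node_dir):
    print(f"found node pack at {node_dir}")
else:
    print(f"node pack not found at {node_dir}; rerun the git clone step")
```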
# 2. Requirements
----
```
pip install -r requirements.txt
```
* If VRAM <= 24 GB, run `pip install mmgp`. This method was tuned for 40 GB of VRAM, so on an RTX 4090 or lower-end card you need mmgp installed to run at a reasonable speed. In the model-loading menu on a 4090 you can choose profile 0, 1, or 2.

# 3. Checkpoints
* 3.1 Three model options: a flux dev single-file checkpoint (fp8 or fp16), the repo, or flux unet + ae + ComfyUI's dual CLIP (T5).
```
├── ComfyUI/models/diffusion_models/
| ├── flux1-kj-dev-fp8.safetensors # fp8 unet, 11 GB; the unet+vae+clip route is not recommended, it runs out of VRAM more easily
| ├── flux1-dev-fp8.safetensors # fp8 single-file, 16 GB; the official ComfyUI or flux single-file model, recommended; with mmgp enabled it will not run out of VRAM, though it is slower
```
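Which of the two files you place there determines how the workflow must be wired (see the Example section below). A small helper sketching that check; the function is illustrative, not part of the node pack:

```python
import os

# Filenames from the tree above; which one is present determines the
# loading mode described in the Example section.
DIFF_DIR = os.path.join("ComfyUI", "models", "diffusion_models")
CANDIDATES = {
    "flux1-dev-fp8.safetensors": "single-file: set vae to 'none'",
    "flux1-kj-dev-fp8.safetensors": "unet only: wire dual CLIP and a VAE",
}

def detect_mode(model_dir: str = DIFF_DIR) -> str:
    for name, mode in CANDIDATES.items():
        if os.path.exists(os.path.join(model_dir, name)):
            return mode
    return "no known checkpoint found; download one of the files above"
```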
* 3.2 more lora download from [here](https://huggingface.co/nicolaus-huang/PhotoDoodle/tree/main)
```
├── ComfyUI/models/loras/
| ├── pretrain.safetensors # required
| ├── skscloudsketch.safetensors # pick whichever LoRA you like
```

# 4. Example
* If using a single-file checkpoint, the vae must be set to "none" # the 16 GB flux1-dev-fp8.safetensors single-file model has CLIP and VAE built in, so select "none" for the vae

* If using the unet # the 11 GB unet-only model has no built-in CLIP or VAE, so you must wire up the dual CLIP and select a VAE

* If using a repo: 'black-forest-labs/FLUX.1-dev' or C:/yourpath/black-forest-labs/FLUX.1-dev # the repo can be auto-downloaded or loaded from a local copy
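The repo option accepts either a Hub id or a local directory; loaders typically distinguish the two by checking whether the string is an existing path. A minimal illustration (the function name is hypothetical, not the node pack's actual loader):

```python
import os

def resolve_flux_source(spec: str) -> str:
    """Treat an existing directory as a local copy of the repo; anything
    else is assumed to be a Hugging Face repo id to auto-download.
    Illustrative only, not the node pack's actual loader."""
    return "local path" if os.path.isdir(spec) else "hub repo id"
```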
# 5. Acknowledgments
1. Thanks to **[Yuxuan Zhang](https://xiaojiu-z.github.io/YuxuanZhang.github.io/)** and **[Hailong Guo](mailto:[email protected])** for providing the code base.
2. Thanks to **[Diffusers](https://github.com/huggingface/diffusers)** for the open-source project.

# Citation
```
@misc{huang2025photodoodlelearningartisticimage,
title={PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data},
author={Shijie Huang and Yiren Song and Yuxuan Zhang and Hailong Guo and Xueyin Wang and Mike Zheng Shou and Jiaming Liu},
year={2025},
eprint={2502.14397},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.14397},
}
```