https://github.com/huanngzh/ComfyUI-MVAdapter
Custom nodes for using MV-Adapter in ComfyUI.
- Host: GitHub
- URL: https://github.com/huanngzh/ComfyUI-MVAdapter
- Owner: huanngzh
- License: apache-2.0
- Created: 2024-12-01T06:31:18.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-12-05T15:20:47.000Z (4 months ago)
- Last Synced: 2024-12-05T16:26:12.894Z (4 months ago)
- Language: Python
- Homepage:
- Size: 739 KB
- Stars: 59
- Watchers: 3
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-comfyui - **ComfyUI-MVAdapter** - Integrates [MV-Adapter](https://github.com/huanngzh/MV-Adapter) into ComfyUI, allowing users to generate multi-view consistent images from text prompts or single images directly within the ComfyUI interface. (All Workflows Sorted by GitHub Stars)
README
# ComfyUI-MVAdapter
This extension integrates [MV-Adapter](https://github.com/huanngzh/MV-Adapter) into ComfyUI, allowing users to generate multi-view consistent images from text prompts or single images directly within the ComfyUI interface.
## 🔥 Feature Updates
* [2024-12-09] Support integration with SDXL LoRA
* [2024-12-02] Generate multi-view consistent images from text prompts or a single image

## Installation
### From Source
* Clone or download this repository into your `ComfyUI/custom_nodes/` directory.
* Install the required dependencies by running `pip install -r requirements.txt`.

## Notes
### Workflows
We provide example workflows in the `workflows` directory.
Note that our code depends on diffusers and will automatically download the model weights from Hugging Face to the HF cache path the first time it runs. The `ckpt_name` in the node corresponds to the model name on Hugging Face, such as `stabilityai/stable-diffusion-xl-base-1.0`.
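If you want to warm the cache ahead of time (for example on a machine with faster network access), a minimal diffusers sketch looks like the following. This is not part of the extension's node code; it only pre-downloads the same Hugging Face model that you would later put into `ckpt_name`:

```python
# Pre-download an SDXL base model into the local Hugging Face cache.
# The same identifier is what the `ckpt_name` field in the node expects.
import torch
from diffusers import StableDiffusionXLPipeline

MODEL_ID = "stabilityai/stable-diffusion-xl-base-1.0"

# from_pretrained fetches the weights from the Hugging Face Hub on first use
# and reuses the cached copy (typically under ~/.cache/huggingface) afterwards.
pipe = StableDiffusionXLPipeline.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    variant="fp16",
)
```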
We also provide the nodes `Ldm**Loader` to support loading text-to-image models in `ldm` format. Please see the workflow files with the suffix `_ldm.json`.
### GPU Memory
If your GPU resources are limited, we recommend using the following configuration:
* Use [madebyollin/sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) as the VAE. If using the ldm-format pipeline, remember to set `upcast_fp32` to `False`.

* Set `enable_vae_slicing` in the Diffusers Model Makeup node to `True`.

However, since SDXL is used as the base model, it still requires about 13 GB to 14 GB of GPU memory.
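For reference, the two options above correspond roughly to the following plain-diffusers calls. This is only a sketch of what the node settings toggle, not the extension's actual node code:

```python
# Sketch of the two memory-saving options in plain diffusers terms
# (the ComfyUI nodes expose these as settings; this is not the node code itself).
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-safe VAE: avoids upcasting the VAE to fp32 and the memory spike that causes.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
)

# Decode latents in slices instead of one large batch; this is what
# setting `enable_vae_slicing` to True enables.
pipe.enable_vae_slicing()
```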
## Usage
### Text to Multi-view Images
**With SDXL or other base models**

* `workflows/t2mv_sdxl_diffusers.json` for loading diffusers-format models
* `workflows/t2mv_sdxl_ldm.json` for loading ldm-format models

**With LoRA**

`workflows/t2mv_sdxl_ldm_lora.json` for loading ldm-format models with LoRA for text-to-multi-view generation
### Image to Multi-view Images
**With SDXL or other base models**

* `workflows/i2mv_sdxl_diffusers.json` for loading diffusers-format models
* `workflows/i2mv_sdxl_ldm.json` for loading ldm-format models

**With LoRA**

`workflows/i2mv_sdxl_ldm_lora.json` for loading ldm-format models with LoRA for image-to-multi-view generation