https://github.com/tritant/ComfyUI_Flux_Lora_Merger
- Host: GitHub
- URL: https://github.com/tritant/ComfyUI_Flux_Lora_Merger
- Owner: tritant
- License: gpl-3.0
- Created: 2025-04-15T11:25:50.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-04-15T12:13:18.000Z (3 months ago)
- Last Synced: 2025-04-15T12:29:10.774Z (3 months ago)
- Language: Python
- Size: 21.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-comfyui - **Flux LoRA Merger**
README
# Flux LoRA Merger — Custom Node for ComfyUI
A custom ComfyUI node that **merges up to 4 LoRA models into a Flux.1-Dev UNet**.
⚠️ If you experience VRAM issues (OOM) when saving, try the `sequential` strategy instead of `additive` or `average`. It avoids direct weight modifications and makes saving more memory-efficient.
---
## ✨ Features
- ✅ Merge up to **4 LoRA models**
- ✅ Compatible with UNet models in **FP8 / FP16 / FP32**
- ✅ Three fusion strategies:
  - `additive`: weighted sum of LoRA deltas
  - `average`: average of LoRA deltas
  - `sequential`: apply LoRAs one after another
- ✅ Option to **save the final merged model** in `.safetensors` format

---
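The strategies differ only in how the per-LoRA deltas are combined with the base weights. Here is a minimal NumPy sketch for a single weight tensor (function and argument names are illustrative, not the node's actual API):

```python
import numpy as np

def merge_deltas(base, deltas, weights, strategy):
    """Illustrative sketch of the three fusion strategies.

    base    -- a base UNet weight tensor
    deltas  -- one delta tensor per enabled LoRA
    weights -- the per-LoRA weight multipliers (loraX_weight)
    """
    if strategy == "additive":
        # Weighted sum of all LoRA deltas, applied in one step.
        return base + sum(w * d for w, d in zip(weights, deltas))
    if strategy == "average":
        # Mean of the weighted deltas.
        return base + sum(w * d for w, d in zip(weights, deltas)) / len(deltas)
    if strategy == "sequential":
        # Apply each LoRA one after another; numerically this matches
        # additive, but an implementation can release intermediate
        # buffers between steps, which is gentler on VRAM.
        merged = base.copy()
        for w, d in zip(weights, deltas):
            merged = merged + w * d
        return merged
    raise ValueError(f"unknown strategy: {strategy}")
```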
## 📥 Inputs
| Parameter | Type | Description |
|------------------|----------|-------------|
| `unet_model` | `MODEL` | The base UNet model to apply LoRAs to |
| `merge_strategy` | `CHOICE` | `"additive"`, `"average"`, or `"sequential"` |
| `enable_loraX` | `BOOLEAN`| Enable LoRA slot X (X = 1–4) |
| `loraX` | `STRING` | Filename of LoRA X (from your `loras/` folder) |
| `loraX_weight` | `FLOAT` | Weight multiplier for LoRA X |
| `save_model` | `BOOLEAN`| Save the merged model to `output/` |
| `save_filename` | `STRING` | Custom name for the saved `.safetensors` file |

---
## 📤 Outputs
| Output | Type | Description |
|------------------|----------|-------------|
| `model` | `MODEL` | The merged UNet model |