https://github.com/if-ai/ComfyUI-IF_LLM
Run local and API LLMs. Features Gemini 2 image generation, DeepSeek R1, QwenVL2.5, QWQ32B, Ollama, LlamaCPP, LMstudio, Koboldcpp, TextGen, Transformers, or APIs from Anthropic, Groq, OpenAI, Google Gemini, Mistral, and xAI, and lets you create your own character assistants (SystemPrompts) with custom presets
- Host: GitHub
- URL: https://github.com/if-ai/ComfyUI-IF_LLM
- Owner: if-ai
- License: mit
- Created: 2024-11-24T00:50:24.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-03-15T14:20:01.000Z (about 1 month ago)
- Last Synced: 2025-03-15T14:31:10.046Z (about 1 month ago)
- Language: Python
- Homepage:
- Size: 17 MB
- Stars: 34
- Watchers: 1
- Forks: 3
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-comfyui - **IF_LLM**
README
# ComfyUI-IF_AI_LLM
################# ATTENTION ####################
*It might conflict with IF_AI_tools, so if you have
it installed, please remove it before installing IF_LLM.
I am working on adding these tools to IF_AI_tools
so you only need one or the other.*
################################################

### Video
[](https://youtu.be/0sR4hu98pDo?si=EhF24ugy7RpLvUjV)
A lighter version of ComfyUI-IF_AI_tools: a set of custom nodes to run local and API LLMs and LMMs. It supports Ollama, LlamaCPP, LMstudio, Koboldcpp, TextGen, Transformers, or the Anthropic, Groq, OpenAI, Google Gemini, Mistral, and xAI APIs, and lets you create your own profiles (SystemPrompts) with custom presets and much more.

### Install Ollama
You can technically use any LLM API that you want, but for the best experience install Ollama and set it up.
- Visit [ollama.com](https://ollama.com) for more information.

To install Ollama models, just open CMD or any terminal and type the `run` command followed by the model name, such as:
```powershell
ollama run llama3.2-vision
```
If you want to use Omost:
```bash
ollama run impactframes/dolphin_llama3_omost
```
If you need a good small model:
```bash
ollama run llama3.2
```

Optionally, set environment variables for any of your favourite LLM API keys ("XAI_API_KEY", "GOOGLE_API_KEY", "ANTHROPIC_API_KEY", "MISTRAL_API_KEY", "OPENAI_API_KEY" or "GROQ_API_KEY") using exactly those names, otherwise the keys won't be picked up. You can also use a `.env` file to store your keys.
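For example, a minimal sketch (the key values below are placeholders, and the exact location the nodes expect for the `.env` file is not specified here, so adjust to your setup):

```bash
# Option 1: export the keys in the shell that launches ComfyUI (Linux/macOS syntax)
export OPENAI_API_KEY="your-openai-key"
export GROQ_API_KEY="your-groq-key"

# Option 2: store them in a .env file instead, one KEY=value pair per line
cat > .env << 'EOF'
GOOGLE_API_KEY=your-gemini-key
ANTHROPIC_API_KEY=your-anthropic-key
EOF
```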
## Features
_[NEW]_ xAI Grok Vision, Mistral, Google Gemini exp 1114, Anthropic 3.5 Haiku, OpenAI o1-preview
_[NEW]_ Wildcard System
_[NEW]_ Local Models Koboldcpp, TextGen, LlamaCPP, LMstudio, Ollama
_[NEW]_ Automatic prompt generation for the Image Prompt Maker; runs batch jobs automatically
_[NEW]_ Image generation with IF_PROMPTImaGEN via DALL-E 3
_[NEW]_ Endpoints: xAI, Transformers
_[NEW]_ IF_profiles System Prompts with Reasoning/Reflection/Reward Templates and custom presets
_[NEW]_ Workflows such as GGUF and FluxRedux
- Gemini, Groq, Mistral, OpenAI, Anthropic, Google, xAI, Transformers, Koboldcpp, TextGen, LlamaCPP, LMstudio, Ollama
- Omost_tool the first tool
- Vision models: Haiku / GPT-4o mini / Gemini Flash / Qwen2-VL
- [Ollama-Omost](https://ollama.com/impactframes/dolphin_llama3_omost) can be 2x to 3x faster than other Omost models
Llama3 and Phi3 IF_AI Prompt Maker models released:
`ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest`
`ollama run impactframes/ifai_promptmkr_dolphin_phi3:latest`
https://huggingface.co/impactframes/llama3_if_ai_sdpromptmkr_q4km
https://huggingface.co/impactframes/ifai_promptmkr_dolphin_phi3_gguf
## Installation
1. Open the ComfyUI Manager, search for IF_LLM, and install it.

### Install ComfyUI-IF_LLM manually (hardest way)
1. Navigate to your ComfyUI `custom_nodes` folder, type `CMD` on the address bar to open a command prompt,
and run the following command to clone the repository:
```bash
git clone https://github.com/if-ai/ComfyUI-IF_LLM.git
```
OR
1. In the ComfyUI portable version, just double-click `embedded_install.bat`, or type `CMD` in the address bar of the newly created `custom_nodes\ComfyUI-IF_LLM` folder and run:
```bash
H:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install -r requirements.txt
```
Replace `H:\` with the drive letter where your ComfyUI_windows_portable directory is located.

2. In a custom environment, activate the environment and move into the newly created ComfyUI-IF_LLM folder:
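For example, assuming a standard Python venv (the `/path/to/venv` below is a placeholder; adjust for conda or whatever environment manager you use):

```bash
# Linux/macOS: activate the virtual environment that ComfyUI runs in
source /path/to/venv/bin/activate
# Windows (cmd): \path\to\venv\Scripts\activate.bat
```

Then change into the node folder and install the requirements: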
```bash
cd ComfyUI-IF_LLM
python -m pip install -r requirements.txt
```

If you want to use AWQ to save VRAM and get up to 3x faster inference,
you need to install triton and autoawq
```bash
pip install triton
pip install --no-deps --no-build-isolation autoawq
```
I also have precompiled wheels for FA2 (FlashAttention 2), SageAttention, and Triton for Windows 10, built for cu126, PyTorch 2.6.3, and Python 3.12+:
https://huggingface.co/impactframes/ComfyUI_desktop_wheels_win_cp12_cu126/tree/main
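A sketch of installing one of these wheels with the portable Python (the wheel filename and path below are placeholders; use the actual file you download from the link above and your own drive letter):

```bash
H:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install path\to\downloaded_wheel.whl
```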
## Related Tools
- [IF_prompt_MKR](https://github.com/if-ai/IF_PROMPTImaGEN)
- A similar tool available for Stable Diffusion WebUI

## Videos
None yet
## Example using normal Model
ancient Megastructure, small lone figure

## Workflow Examples
You can try out these workflow examples directly in ComfyDeploy!

| Workflow | Try It |
|----------|--------|
|[CD_FLUX_LoRA](workflows/CD_FLUX_LoRA.json)|[](https://beta.app.comfydeploy.com/home?gpu=A10G&comfyui_version=a7fe0a94dee08754f97b0171e15c1f2271aa37be&timeout=15&nodes=if-ai%2FComfyUI-IF_LLM%40c80e379%2Ccubiq%2FComfyUI_essentials%4033ff89f&workflowLink=https%3A%2F%2Fraw.githubusercontent.com%2Fif-ai%2FIF-Animation-Workflows%2Frefs%2Fheads%2Fmain%2FCD_FLUX_LoRA.json)|
|[CD_HYVid_I2V_&_T2V_Native_IFLLM](workflows/CD_HYVid_I2V_%26_T2V_Native_IFLLM.json)|[](https://beta.app.comfydeploy.com/home?gpu=L40S&comfyui_version=a7fe0a94dee08754f97b0171e15c1f2271aa37be&timeout=15&nodes=if-ai%2FComfyUI-IF_LLM%40c80e379%2Crgthree%2Frgthree-comfy%405d771b8%2CJonseed%2FComfyUI-Detail-Daemon%4090e703d%2Ckijai%2FComfyUI-KJNodes%40a22b269%2Ccubiq%2FComfyUI_essentials%4033ff89f%2CTinyTerra%2FComfyUI_tinyterraNodes%40b292f8e%2Cchengzeyi%2FComfy-WaveSpeed%403db162b%2CTTPlanetPig%2FComfyui_TTP_Toolset%406dd3f35%2Ckijai%2FComfyUI-HunyuanVideoWrapper%409f50ed1%2CKosinkadink%2FComfyUI-VideoHelperSuite%403bfbd99%2CFannovel16%2FComfyUI-Frame-Interpolation%40c336f71%2Cfacok%2FComfyUI-HunyuanVideoMultiLora%407e3e344%2Ccity96%2FComfyUI_ExtraModels%4092f556e%2Cblepping%2FComfyUI-bleh%40850f840%2CjamesWalker55%2Fcomfyui-various%4036454f9&workflowLink=https%3A%2F%2Fraw.githubusercontent.com%2Fif-ai%2FComfyUI-IF_LLM%2Fmain%2Fworkflows%2FCD_HYVid_I2V_%26_T2V_Native_IFLLM.json)|
|[CD_HYVid_I2V_&_T2V_i2VLora_Native](workflows/CD_HYVid_I2V_%26_T2V_i2VLora_Native.json)|[](https://beta.app.comfydeploy.com/home?gpu=l40s&comfyui_version=a7fe0a94dee08754f97b0171e15c1f2271aa37be&timeout=15&nodes=if-ai/ComfyUI-IF_LLM%40c80e379%2Crgthree/rgthree-comfy%405d771b8%2CJonseed/ComfyUI-Detail-Daemon%4090e703d%2Ckijai/ComfyUI-KJNodes%40a22b269%2Ccubiq/ComfyUI_essentials%4033ff89f%2CTinyTerra/ComfyUI_tinyterraNodes%40b292f8e%2Cchengzeyi/Comfy-WaveSpeed%403db162b%2CTTPlanetPig/Comfyui_TTP_Toolset%406dd3f35%2Ckijai/ComfyUI-HunyuanVideoWrapper%409f50ed1%2CKosinkadink/ComfyUI-VideoHelperSuite%403bfbd99%2CFannovel16/ComfyUI-Frame-Interpolation%40c336f71%2Cfacok/ComfyUI-HunyuanVideoMultiLora%407e3e344&workflowLink=https%3A//raw.githubusercontent.com/if-ai/ComfyUI-IF_LLM/main/workflows/CD_HYVid_I2V_%26_T2V_i2VLora_Native.json)|
|[CD_HYVid_I2V_Lora_KjWrapper](workflows/CD_HYVid_I2V_Lora_KjWrapper.json)|[](https://beta.app.comfydeploy.com/home?gpu=l40s&comfyui_version=a7fe0a94dee08754f97b0171e15c1f2271aa37be&timeout=15&nodes=if-ai/ComfyUI-IF_LLM%40c80e379%2Crgthree/rgthree-comfy%405d771b8%2CJonseed/ComfyUI-Detail-Daemon%4090e703d%2Ckijai/ComfyUI-KJNodes%40a22b269%2Ccubiq/ComfyUI_essentials%4033ff89f%2CTinyTerra/ComfyUI_tinyterraNodes%40b292f8e%2Cchengzeyi/Comfy-WaveSpeed%403db162b%2CTTPlanetPig/Comfyui_TTP_Toolset%406dd3f35%2Ckijai/ComfyUI-HunyuanVideoWrapper%409f50ed1%2CKosinkadink/ComfyUI-VideoHelperSuite%403bfbd99%2CFannovel16/ComfyUI-Frame-Interpolation%40c336f71%2Cfacok/ComfyUI-HunyuanVideoMultiLora%407e3e344&workflowLink=https%3A//raw.githubusercontent.com/if-ai/ComfyUI-IF_LLM/main/workflows/CD_HYVid_I2V_Lora_KjWrapper.json)|

## TODO
- [ ] IMPROVED PROFILES
- [ ] OMNIGEN
- [ ] QWENFLUX
- [ ] VIDEOGEN
- [ ] AUDIOGEN

## Support
If you find this tool useful, please consider supporting my work by:
- Starring the repository on GitHub: [ComfyUI-IF_AI_tools](https://github.com/if-ai/ComfyUI-IF_AI_tools)
- Subscribing to my YouTube channel: [Impact Frames](https://youtube.com/@impactframes?si=DrBu3tOAC2-YbEvc)
- Following me on X: [Impact Frames X](https://x.com/impactframesX)
Thank You!