https://github.com/zer0int/comfyui-llama3-layer-shuffle-prompting
Shuffle LLama-3.2 layers and have it prompt an image. Node works with any model - Flux, SD3, SDXL...
- Host: GitHub
- URL: https://github.com/zer0int/comfyui-llama3-layer-shuffle-prompting
- Owner: zer0int
- Created: 2024-10-15T19:39:17.000Z (12 months ago)
- Default Branch: CLIP-vision
- Last Pushed: 2024-10-16T22:30:08.000Z (12 months ago)
- Last Synced: 2024-12-03T16:57:01.260Z (10 months ago)
- Topics: comfyu-nodes, comfyui, experimental, flux, instruct, layer, llama3, llama3-2, llm, prompting, research, sd3, sdxl, shuffle
- Language: Python
- Homepage:
- Size: 12.7 KB
- Stars: 6
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
## ComfyUI Shuffle Node for LLama-3.2-1B & prompting. 🤖💫
### Shuffle LLama's layers, have the AI prompt an image. Creatively.

- Available models: `meta-llama/Llama-3.2-1B` or `meta-llama/Llama-3.2-1B-Instruct`.
- Simply put `ComfyUI-LLama3shuffle` into `ComfyUI/custom_nodes`.
- Now you'll find the Node in the -> "zer0int" group.
- Or use the provided workflow (for Flux.1) (but the node works for prompting *any* model!).
- Allows shuffling of Layers (Attn, MLP, Full Layer) of a model, then generates a -> Prompt.
- ⚠️ Gated Model. You'll have to sign [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) + Login via HuggingFace CLI to access.
- You can also use other (larger) models, such as `meta-llama/Llama-3.2-3B`, by replacing the model name in my code (or adding it as an option). The LLama 3.2 models should all work fine; they just have more layers, which you can then specify via `shuffle_layer_range` in the ComfyUI node.
- ✅ You can also replace `meta-llama/Llama-3.2-1B` / `-Instruct` with a fine-tune from HuggingFace - as long as it is the same architecture (LLama 3.2), it should work fine. It doesn't matter whether the fine-tune is gated or not; my code doesn't check for that, as gating is handled entirely on the HuggingFace backend.
- ‼️ Disclaimer: While this modification does not target anything specifically, shuffling the layers in a transformer may lead to unexpected / unintended consequences. DO NOT DEPLOY + blame me (or Meta) for it. RESEARCH / personal learning use ONLY. ⚠️

-----
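The core idea - permuting a contiguous range of decoder layers before generation - can be sketched roughly like this (a minimal illustration, not the node's actual code; `shuffle_layer_range` is a hypothetical helper named after the node's input of the same name):

```python
import random

def shuffle_layer_range(order, start, end, seed=None):
    """Return a new layer order with indices start..end (inclusive) shuffled.

    Hypothetical helper mirroring the node's `shuffle_layer_range` input."""
    rng = random.Random(seed)
    block = order[start:end + 1]
    rng.shuffle(block)
    # Layers outside the range keep their original positions.
    return order[:start] + block + order[end + 1:]

# Llama-3.2-1B has 16 decoder layers (0..15); shuffle layers 6-8,
# as in the Instruct example below.
order = shuffle_layer_range(list(range(16)), 6, 8, seed=42)

# Applying the order to a HuggingFace model would look roughly like
# (assumes `transformers`/`torch` are installed and the model is loaded):
#   model.model.layers = torch.nn.ModuleList(
#       [model.model.layers[i] for i in order]
#   )
# For Attn-only shuffling, swap the `self_attn` submodules between the
# selected layers instead of reordering the full decoder layers.
```

Only the layers inside the chosen range move; everything before and after stays put, which is why the effect stays localized to the `shuffle_layer_range` you set in the node.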
Example output: Translation of prompt + adding details. No shuffling; normal model:
And now, with Attn shuffling -- pure GPT-3 style madness:

-----
Example output: Llama-3.2-1B-Instruct with shuffled Attention in Layers 6,7,8. Quote:
- 🤖: I am the creation of the scene you have the ability to see. I am a dark, sleek model of efficiency. Here is your image for efficiency.