# TTP_ComfyUI_FramePack_SE

**Provides ComfyUI support for FramePack start- and end-image reference**

---

First, thanks to [Lvmin Zhang](https://github.com/lllyasviel) for the **FramePack** application; it offers a very interesting approach that makes video generation easier and more accessible.
Original repository:
https://github.com/lllyasviel/FramePack

---

## Changes in This Project

- Based on the original repo, we inject a simple `end_image` input to enable start-and-end frame references (see the sketch after this list for the rough idea)
- Check out my PR for the full diff:
https://github.com/lllyasviel/FramePack/pull/167
- This PR addresses the “frozen background” criticism
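
The core idea, roughly: the end image is encoded to a latent and supplied as an additional condition alongside the start image, so sampling is anchored at both ends of the clip. The snippet below is only an illustrative sketch under that reading; `vae_encode`, the tensor shapes, and how the two latents are combined are hypothetical, not the actual PR code.

```python
import torch

def prepare_start_end_latents(vae_encode, start_image, end_image,
                              end_condition_strength: float = 1.0):
    """Illustrative sketch only (hypothetical helper names and shapes).

    Encode both reference frames with the video VAE and scale the end-frame
    latent, so a lower `end_condition_strength` loosens the end constraint.
    """
    start_latent = vae_encode(start_image)  # e.g. (1, C, 1, H/8, W/8)
    end_latent = vae_encode(end_image)      # same shape as start_latent
    end_latent = end_latent * end_condition_strength
    # Stack along the temporal axis so both ends act as clean-latent
    # conditions for the sampler (the exact wiring differs in the real node).
    return torch.cat([start_latent, end_latent], dim=2)
```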

---

## Issues Encountered

1. When the start and end frames differ too greatly, the model struggles and often produces “slideshow-style” cuts.
2. Although this has been attributed to “needing further training,” I believe:
   - The Hunyuan model handles static human poses well but lacks smooth dynamic transitions
   - With Lvmin Zhang’s improved Hunyuan base model, we can unlock more possibilities

---

## My Optimizations

- **Tweaked the generation pipeline code** to strike a balance between variation and frame consistency
- Tested and tuned several parameters to ensure smooth transitions in most scenarios
- The ComfyUI node code is adapted from [HM-RunningHub/ComfyUI_RH_FramePack](https://github.com/HM-RunningHub/ComfyUI_RH_FramePack) and modified


## Model Download & Location

You can either download each model manually from Hugging Face or use the bundled model package.

### 1. Manual Download

- **HunyuanVideo**
[https://huggingface.co/hunyuanvideo-community/HunyuanVideo/tree/main](https://huggingface.co/hunyuanvideo-community/HunyuanVideo/tree/main)
- **Flux Redux BFL**
[https://huggingface.co/lllyasviel/flux_redux_bfl/tree/main](https://huggingface.co/lllyasviel/flux_redux_bfl/tree/main)
- **FramePackI2V**
[https://huggingface.co/lllyasviel/FramePackI2V_HY/tree/main](https://huggingface.co/lllyasviel/FramePackI2V_HY/tree/main)

Alternatively, download the bundled model package:

- Baidu: https://pan.baidu.com/s/17h23yvJXa6AczGLcybsd_A?pwd=mbqa
- Quark: https://pan.quark.cn/s/80ff4f39c15b

### 2. Model Location

Copy the contents into your ComfyUI `models/` folder; the directory layout below is referenced from [HM-RunningHub/ComfyUI_RH_FramePack](https://github.com/HM-RunningHub/ComfyUI_RH_FramePack):

```text
comfyui/models/
├── flux_redux_bfl
│   ├── feature_extractor/
│   │   └── preprocessor_config.json
│   ├── image_embedder/
│   │   ├── config.json
│   │   └── diffusion_pytorch_model.safetensors
│   ├── image_encoder/
│   │   ├── config.json
│   │   └── model.safetensors
│   ├── model_index.json
│   └── README.md
├── FramePackI2V_HY
│   ├── config.json
│   ├── diffusion_pytorch_model-00001-of-00003.safetensors
│   ├── diffusion_pytorch_model-00002-of-00003.safetensors
│   ├── diffusion_pytorch_model-00003-of-00003.safetensors
│   ├── diffusion_pytorch_model.safetensors.index.json
│   └── README.md
└── HunyuanVideo
    ├── config.json
    ├── model_index.json
    ├── README.md
    ├── scheduler/
    │   └── scheduler_config.json
    ├── text_encoder/
    │   ├── config.json
    │   ├── model-00001-of-00004.safetensors
    │   ├── model-00002-of-00004.safetensors
    │   ├── model-00003-of-00004.safetensors
    │   ├── model-00004-of-00004.safetensors
    │   └── model.safetensors.index.json
    ├── text_encoder_2/
    │   ├── config.json
    │   └── model.safetensors
    ├── tokenizer/
    │   ├── special_tokens_map.json
    │   ├── tokenizer_config.json
    │   └── tokenizer.json
    ├── tokenizer_2/
    │   ├── merges.txt
    │   ├── special_tokens_map.json
    │   ├── tokenizer_config.json
    │   └── vocab.json
    └── vae/
        ├── config.json
        └── diffusion_pytorch_model.safetensors
```
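
If you would rather script the manual download, here is a minimal sketch using `huggingface_hub` (assuming it is installed and that your models root is `comfyui/models`; adjust the path to your installation):

```python
from huggingface_hub import snapshot_download

# Assumed ComfyUI models root; adjust to your installation.
MODELS_ROOT = "comfyui/models"

# Hugging Face repo id -> local folder name shown in the layout above.
repos = {
    "hunyuanvideo-community/HunyuanVideo": "HunyuanVideo",
    "lllyasviel/flux_redux_bfl": "flux_redux_bfl",
    "lllyasviel/FramePackI2V_HY": "FramePackI2V_HY",
}

for repo_id, folder in repos.items():
    snapshot_download(repo_id=repo_id, local_dir=f"{MODELS_ROOT}/{folder}")
```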

---

## Parameter Guide


*Parameter diagram*

- **padding_mode**
  - Still experimental; use `optimized` for now
- **end_condition_strength** & **enable_feature_fusion**
  - Mutually exclusive; pick only one
  - Lower `end_condition_strength` grants more freedom but reduces end-frame similarity
- **history_weight**
  - Controls the influence of generation history; default 100%
- **history_decay**
  - Linearly decays the history weight; increase the decay if you need more variation (see the sketch after this list)
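
As a rough illustration of how the two history parameters interact (a sketch only; the node's internal scheduling may differ), the effective weight can be thought of as the starting `history_weight` reduced linearly by `history_decay` as generation progresses:

```python
def effective_history_weight(history_weight: float = 1.0,
                             history_decay: float = 0.0,
                             step: int = 0) -> float:
    """Sketch only: linearly decay the history influence as generation progresses.

    history_weight: starting influence (1.0 == 100%).
    history_decay:  amount subtracted per step/section (hypothetical granularity).
    """
    return max(0.0, min(1.0, history_weight - history_decay * step))

# Example: start at 100% and decay 10% per step -> 1.0, 0.9, 0.8, 0.7, 0.6
print([round(effective_history_weight(1.0, 0.1, i), 2) for i in range(5)])
```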

---
## Examples
![TTP_FramePack_Start_End_Image_example](https://github.com/user-attachments/assets/0468ea9c-a5fe-4067-9ef6-6e89b6d58754)

---
> Feel free to share feedback or suggestions in Issues or PRs!

## **Star History**
