DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models
https://github.com/KD-TAO/DyCoke
- Host: GitHub
- URL: https://github.com/KD-TAO/DyCoke
- Owner: KD-TAO
- License: apache-2.0
- Created: 2024-11-22T04:00:33.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-01-02T17:43:43.000Z (25 days ago)
- Last Synced: 2025-01-02T18:37:23.726Z (25 days ago)
- Language: Python
- Size: 31.7 MB
- Stars: 11
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
- awesome-token-merge-for-mllms
README
# DyCoke: **Dynamic Compression of Tokens for Fast Video Large Language Models**
[Keda Tao](), [Can Qin](https://canqin.tech/), [Haoxuan You](https://hxyou.github.io/), [Yang Sui](https://eclipsess.github.io/yangsui.github.io/), [Huan Wang](https://huanwang.tech/), "DyCoke 🥤Dynamic Compression of Tokens for Fast Video Large Language Models"
[[Paper](https://arxiv.org/abs/2411.15024)]
### Demo
![video](figures/video-ezgif.com-resize-2.gif)
#### 🔥🔥🔥 News
- **2024-11-22:** This repo is released.
- **2024-11-25:** The paper is released.

![overview](figures/overview.png)
> **Abstract:** Video large language models (VLLMs) have recently made significant progress in processing complex video content, yet their inference efficiency remains constrained by the high computational cost of the thousands of visual tokens generated from video inputs. We empirically observe that, unlike single-image inputs, VLLMs typically attend to visual tokens from different frames at different decoding iterations, which makes a one-shot pruning strategy prone to removing important tokens by mistake. Motivated by this, we present DyCoke, a training-free token compression method that optimizes token representation and accelerates VLLMs. DyCoke incorporates a plug-and-play temporal compression module that minimizes temporal redundancy by merging redundant tokens across frames, and applies dynamic KV cache reduction to selectively prune spatially redundant tokens. It ensures high-quality inference by dynamically retaining the critical tokens at each decoding step. Extensive experimental results demonstrate that DyCoke outperforms prior SoTA counterparts, achieving a 1.5× inference speedup and a 1.4× memory reduction against the baseline VLLM while still improving performance, all without training.
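To make the two ideas above concrete, here is a small, self-contained sketch of temporal token merging and attention-based dynamic retention. It is only an illustration of the general technique, not the DyCoke implementation in this repo; the function names (`merge_temporal_tokens`, `dynamic_retention`), thresholds, and tensor shapes are made up for the example.

```python
# Illustrative sketch only, NOT the official DyCoke code.
# Idea 1: merge temporally redundant tokens across frames.
# Idea 2: at each decoding step, keep only the most-attended visual tokens.
import torch
import torch.nn.functional as F


def merge_temporal_tokens(frame_tokens: torch.Tensor, sim_threshold: float = 0.9) -> torch.Tensor:
    """frame_tokens: (num_frames, tokens_per_frame, dim).
    Tokens in frame t that are nearly identical to the running representative at
    the same spatial position are merged (averaged) into it; the rest are kept
    as temporally novel tokens."""
    reps = frame_tokens[0].clone()        # representatives, seeded from frame 0
    novel = []                            # temporally novel tokens from later frames
    for t in range(1, frame_tokens.shape[0]):
        sim = F.cosine_similarity(frame_tokens[t], reps, dim=-1)   # (tokens_per_frame,)
        dup = sim >= sim_threshold
        reps[dup] = 0.5 * (reps[dup] + frame_tokens[t][dup])       # merge near-duplicates
        novel.append(frame_tokens[t][~dup])                        # keep changed tokens
    return torch.cat([reps] + novel, dim=0)                        # (num_kept, dim)


def dynamic_retention(visual_tokens: torch.Tensor, attn_to_visual: torch.Tensor,
                      keep_ratio: float = 0.7) -> torch.Tensor:
    """Keep the visual tokens that receive the most attention at the current
    decoding step. attn_to_visual: (num_visual_tokens,) attention weights."""
    k = max(1, int(keep_ratio * visual_tokens.shape[0]))
    top_idx = attn_to_visual.topk(k).indices.sort().values         # keep original order
    return visual_tokens[top_idx]


# Toy usage: 8 frames x 196 tokens x 1152-dim features.
tokens = torch.randn(8, 196, 1152)
compressed = merge_temporal_tokens(tokens)
retained = dynamic_retention(compressed, torch.rand(compressed.shape[0]))
print(compressed.shape, retained.shape)
```

In the actual method these operations run inside the VLLM, on the visual token sequence and its KV cache, rather than on random tensors as in this toy example.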
## ⚒️ TODO
* [x] Release Paper
* [x] Release code
* [ ] Support more models

## Install
##### 1. **Clone this repository and navigate to the DyCoke folder:**
```bash
git clone https://github.com/KD-TAO/DyCoke.git
cd DyCoke
```

##### 2. **Install the inference package:**
```bash
conda create -n dycoke python=3.10 -y
conda activate dycoke
pip install --upgrade pip # Enable PEP 660 support.
pip install -e ".[train]"
pip install lmms-eval
```

## Evaluation
#### Set the DyCoke parameters
- We use the [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) toolkit to evaluate our models. Note that you can specify the DyCoke settings via model arguments, for example:
```bash
...
--model_args pretrained=lmms-lab/llava-onevision-qwen2-7b-ov,conv_template=qwen_1_5,model_name=llava_qwen,dycoke=True,dycoke_l=3,dycoke_p=0.7,dycoke_k=0.7 \
...
```
- Our main baseline model is [LLaVA-OV](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main). If you want to switch to a different model framework, change the following parameters:
```bash
...
--model_args pretrained=lmms-lab/llava-onevision-qwen2-0.5b-ov,conv_template=qwen_1_5,model_name=llava_qwen,dycoke=True,dycoke_num_image_per_frame=$YOUR_NUM,image_token_start_index=$YOUR_IDX \
...
```
##### 1. Test on the specified tasks (e.g. "activitynetqa,video_dc499,perceptiontest_val_mc,videomme_w_subtitle,videomme,nextqa_mc_test,..."):
```bash
accelerate launch --num_processes=8 \
-m lmms_eval \
--model llava_onevision \
--model_args pretrained=lmms-lab/llava-onevision-qwen2-7b-ov,conv_template=qwen_1_5,model_name=llava_qwen,dycoke=True \
--tasks $YOUR_TASKS \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_onevision \
--output_path ./logs/
```
##### 2. **Reproduce the results**:
```bash
bash eval.sh
```
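If you want to script the evaluation yourself, a minimal `eval.sh` along the lines of the command above might look like the sketch below; the task list and DyCoke parameter values are placeholders, and the actual `eval.sh` shipped in this repo may differ.

```bash
#!/bin/bash
# Minimal sketch of an evaluation script; it simply wraps the lmms-eval
# command shown above and may differ from the repo's actual eval.sh.
TASKS="activitynetqa,videomme,nextqa_mc_test"   # placeholder task list

accelerate launch --num_processes=8 \
  -m lmms_eval \
  --model llava_onevision \
  --model_args pretrained=lmms-lab/llava-onevision-qwen2-7b-ov,conv_template=qwen_1_5,model_name=llava_qwen,dycoke=True,dycoke_l=3,dycoke_p=0.7,dycoke_k=0.7 \
  --tasks $TASKS \
  --batch_size 1 \
  --log_samples \
  --log_samples_suffix llava_onevision \
  --output_path ./logs/
```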
##### 3. **Test on the LLaVA-OV-72B**:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch --num_processes=1 \
-m lmms_eval \
--model llava_onevision \
--model_args pretrained=lmms-lab/llava-onevision-qwen2-72b-ov,conv_template=qwen_1_5,model_name=llava_qwen,dycoke=True,device_map=auto \
--tasks $YOUR_TASKS \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_onevision \
--output_path ./logs/
```

## 👀 Results on Video-Language Models
![overview](figures/table.png)
![overview](figures/case.png)
## Acknowledgement
This project is based on [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT). Thanks for their awesome work.
## Contact
If you have any questions, please feel free to contact me at [email protected]
## Citation
If you find this work useful for your research, please consider citing our paper:
```bibtex
@article{tao2024dycoke,
  title={DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models},
  author={Tao, Keda and Qin, Can and You, Haoxuan and Sui, Yang and Wang, Huan},
  journal={arXiv preprint arXiv:2411.15024},
  year={2024}
}
```