Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jytmelon/G-Prune
Last synced: 13 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/jytmelon/G-Prune
- Owner: jytmelon
- Created: 2024-12-13T03:53:06.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2025-01-14T09:56:41.000Z (13 days ago)
- Last Synced: 2025-01-14T10:44:24.694Z (13 days ago)
- Language: Python
- Size: 13.9 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-token-merge-for-mllms
README
# What Kind of Visual Tokens Do We Need? Training-Free Visual Token Pruning for Multi-Modal Large Language Models from the Perspective of Graph
[[paper](https://arxiv.org/abs/2501.02268)]

## TL;DR

We present **G-Prune**, a training-free visual token pruning framework for multimodal large language models (MLLMs) that tackles visual token redundancy through graph-based similarity modeling. G-Prune builds a similarity graph over the visual tokens and propagates information along it to identify and retain the most representative tokens, balancing model performance against computational cost.
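To make the idea concrete, here is a minimal sketch of graph-based token selection (for illustration only, not the code in this repository; the function name `g_prune_select`, the tensor shapes, and the PageRank-style propagation are assumptions):

```python
import torch

def g_prune_select(tokens: torch.Tensor, keep_ratio: float = 0.25,
                   n_iters: int = 10) -> torch.Tensor:
    """Illustrative sketch: score visual tokens by propagating
    information over a cosine-similarity graph, then keep the
    highest-scoring (most representative) tokens.

    tokens: (N, D) visual token features from the vision encoder.
    Returns the indices of the retained tokens.
    """
    # Build the similarity graph: row-normalized cosine similarities.
    feats = torch.nn.functional.normalize(tokens, dim=-1)
    sim = feats @ feats.T                 # (N, N) cosine similarity
    adj = torch.softmax(sim, dim=-1)      # row-stochastic transition matrix

    # Propagate information: tokens that many similar tokens point to
    # accumulate higher scores (a PageRank-like iteration).
    n = tokens.size(0)
    score = torch.full((n,), 1.0 / n)
    for _ in range(n_iters):
        score = score @ adj

    # Retain the top-k most representative tokens.
    k = max(1, int(n * keep_ratio))
    return score.topk(k).indices

# Usage: prune 576 patch tokens down to 25%.
vis = torch.randn(576, 1024)
kept_idx = g_prune_select(vis, keep_ratio=0.25)
pruned = vis[kept_idx]
```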
# LLaVA-NeXT Setup and Evaluation

This README provides step-by-step instructions to set up the environment and run evaluations for the G-Prune project.
## News
**[2024/12/10]** Our paper **G-Prune** has been accepted to **AAAI 2025**! 🎉
**[2024/12/15]** Inference acceleration code for **LLaVA-NeXT** is now released!

## Demos
Example results visualize the information flow under varying numbers of iterations on LLaVA-NeXT.

Example results visualize the pruning outcomes under varying pruning rates on LLaVA-NeXT.
## Environment Setup
Follow these steps to set up the environment:
```bash
cd LLaVA-NeXT
conda create -n gprune-next python=3.10 -y
conda activate gprune-next
pip install --upgrade pip # Enable PEP 660 support.
pip install -e ".[train]"
pip install lmms-eval
```
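After installation, a quick sanity check can confirm that both packages import from the active environment (this assumes the standard LLaVA-NeXT layout, where the editable install exposes a `llava` module):

```python
# Run inside the gprune-next environment. Module names assume the
# standard LLaVA-NeXT package layout; adjust if your checkout differs.
import llava
import lmms_eval

print(llava.__file__)      # should point into the LLaVA-NeXT checkout
print(lmms_eval.__file__)  # should point into the active environment
```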
## Running Evaluations

To run the evaluations, execute the following script:
```bash
bash scripts/eval_lmms_eval.sh
```
## Modifying Retention Rate

To adjust the retention rate or related parameters, modify line 239 in the following file:
```
llava/model/llava_arch.py
```
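For illustration only, the kind of change involved might look like the following; the actual variable names at that line may differ, so check the file itself:

```python
# Hypothetical illustration of the retention-rate setting referenced at
# llava/model/llava_arch.py:239; the real variable names may differ.
retention_rate = 0.25                  # fraction of visual tokens to keep
num_visual_tokens = 576                # e.g., a 24x24 patch grid
num_kept = max(1, int(num_visual_tokens * retention_rate))
print(f"Retaining {num_kept} of {num_visual_tokens} visual tokens")
```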
## Citation

If you find G-Prune useful, please kindly cite our paper. Thank you!
```bibtex
@article{jiang2025kind,
  title={What Kind of Visual Tokens Do We Need? Training-free Visual Token Pruning for Multi-modal Large Language Models from the Perspective of Graph},
  author={Jiang, Yutao and Wu, Qiong and Lin, Wenhao and Yu, Wei and Zhou, Yiyi},
  journal={arXiv preprint arXiv:2501.02268},
  year={2025}
}
```