https://github.com/opengvlab/tpo
Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment
- Host: GitHub
- URL: https://github.com/opengvlab/tpo
- Owner: OpenGVLab
- Created: 2024-12-26T07:13:32.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2025-07-22T07:47:48.000Z (8 months ago)
- Last Synced: 2025-11-16T16:25:02.654Z (4 months ago)
- Language: Jupyter Notebook
- Size: 11.7 MB
- Stars: 61
- Watchers: 1
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
# 👫 TPO
[🤗 VideoChat-TPO on Hugging Face](https://huggingface.co/OpenGVLab/VideoChat-TPO)
## 💡 Introduction
Task Preference Optimization (TPO) is a method for improving how multimodal large language models (MLLMs) handle visual tasks. Although current MLLMs perform well across many vision applications, they often struggle with precise visual understanding. TPO addresses this by integrating differentiable task preferences from fine-grained visual tasks, introducing learnable task tokens that bridge task-specific heads and the MLLM. The result is stronger multimodal capability and better task-specific performance, with consistent gains demonstrated across multiple benchmarks and tasks.
Figure 1: TPO uses differentiable task preferences from dense visual supervisions via task-specific heads to enhance MLLMs in fine-grained understanding.
- Enhanced Multimodal Performance: Achieves an average **14.6%** improvement in multimodal performance compared to baseline models on various image and video tasks, and demonstrates scalability across different MLLM architectures such as [VideoChat](https://github.com/OpenGVLab/TPO?tab=readme-ov-file#-model-zoo) and LLaVA.
- Robust Zero-Shot Capabilities: Performs comparably to state-of-the-art supervised models in zero-shot scenarios across various vision tasks.
- Synergistic Training: Multi-task co-training within TPO leads to mutual benefits, enhancing individual task performance beyond single-task training.
Figure 2: Overall Pipeline of TPO. The architecture of Task Preference Optimization (TPO) consists of four main components: (1) a vision encoder, (2) a connector, (3) a large language model, and (4) a series of visual task heads. Differently colored flame symbols indicate which components are unfrozen at various stages of the training process.
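The four components above can be sketched in a toy NumPy example. All names, dimensions, and the stand-in LLM here are illustrative assumptions for exposition, not the repository's actual code; the key idea shown is that learnable task tokens are appended to the visual sequence so each task head can read the LLM hidden state of its own token:

```python
import numpy as np

# Hypothetical sketch of the TPO pipeline (illustrative only):
# (1) vision encoder -> (2) connector -> (3) LLM -> (4) task heads.
rng = np.random.default_rng(0)

D_VIS, D_LLM, N_PATCH, N_TASKS = 32, 64, 16, 3  # toy dimensions

def vision_encoder(frames):
    # Stand-in for a frozen vision backbone: returns patch features.
    return rng.standard_normal((N_PATCH, D_VIS))

connector = rng.standard_normal((D_VIS, D_LLM)) * 0.02     # linear projector
task_tokens = rng.standard_normal((N_TASKS, D_LLM)) * 0.02  # learnable tokens

def llm(embeddings):
    # Stand-in for the language model: a single residual mixing layer.
    w = rng.standard_normal((D_LLM, D_LLM)) * 0.02
    return embeddings + embeddings @ w

def task_head(hidden_state):
    # Each head maps its task token's hidden state to task-specific outputs.
    w = rng.standard_normal((D_LLM, 4)) * 0.02
    return hidden_state @ w

feats = vision_encoder(None) @ connector              # (N_PATCH, D_LLM)
seq = np.concatenate([feats, task_tokens], axis=0)    # append task tokens
hidden = llm(seq)
outputs = [task_head(hidden[N_PATCH + t]) for t in range(N_TASKS)]
print(len(outputs), outputs[0].shape)
```

In the actual method, only the flame-marked components in Figure 2 are unfrozen at each training stage; here everything is random weights purely to show the data flow.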
## 🏃 Installation
1. Clone the repository:
```bash
git clone https://github.com/OpenGVLab/TPO.git
```
2. Navigate to the project directory:
```bash
cd TPO
```
3. Install the required dependencies:
```bash
pip install -r requirements.txt
```
4. Try the demo:
```bash
python app.py
```
## 🤖 Model Zoo
| MLLM | Link | MVBench |
| --- | --- | --- |
| VideoChat-TPO| [huggingface](https://huggingface.co/OpenGVLab/VideoChat-TPO)| 66.8 |
| LLaVA-OV-TPO | TBD | 64.8 |
## Citation
```bibtex
@article{yan2024tpo,
  title={Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment},
  author={Yan, Ziang and Li, Zhilin and He, Yinan and Wang, Chenting and Li, Kunchang and Li, Xinhao and Zeng, Xiangyu and Wang, Zilei and Wang, Yali and Qiao, Yu and Wang, Limin and Wang, Yi},
  journal={arXiv preprint arXiv:2412.19326},
  year={2024}
}
```
## Acknowledgement
TPO is built with reference to the following projects: [VideoChat](https://github.com/OpenGVLab/Ask-Anything), [LLaVA-OV](https://github.com/LLaVA-VL/LLaVA-NeXT), [UMT](https://github.com/LAION-AI/CLIP_benchmark), [InternVideo2](https://github.com/OpenGVLab/InternVideo), [CG-DETR](https://github.com/wjun0830/CGDETR), and [SAM2](https://github.com/facebookresearch/sam2). Thanks for their work!