https://github.com/wisconsinaivision/yollava
Yo'LLaVA: Your Personalized Language and Vision Assistant
- Host: GitHub
- URL: https://github.com/wisconsinaivision/yollava
- Owner: WisconsinAIVision
- Created: 2024-06-13T18:44:46.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-03-26T19:43:26.000Z (8 months ago)
- Last Synced: 2025-04-02T10:38:01.123Z (8 months ago)
- Topics: llava, llm, llms, lmm, lmms, multi-modal-models, neurips, neurips2024, personalization, personalized
- Language: Python
- Homepage: https://thaoshibe.github.io/YoLLaVA/
- Size: 8.64 MB
- Stars: 87
- Watchers: 2
- Forks: 7
- Open Issues: 5
Metadata Files:
- Readme: README.md
# Yo'LLaVA: Your Personalized LLaVA (NeurIPS 2024)
### [arXiv](https://arxiv.org/abs/2406.09400) | [BibTeX](#BibTeX) | [Project Page](https://thaoshibe.github.io/YoLLaVA/) | [Poster](https://neurips.cc/media/PosterPDFs/NeurIPS%202024/93737.png?t=1729115312.34047) | [HuggingFace Datasets](https://huggingface.co/datasets/thaoshibe/YoLLaVA) |

☆.。.:*・°☆.。.:*・°
[**Yo'LLaVA: Your Personalized Language and Vision Assistant**](https://thaoshibe.github.io/YoLLaVA/) (NeurIPS 2024)
[Thao Nguyen ✨](https://thaoshibe.github.io/), [Haotian Liu](https://hliu.cc/), [Mu Cai](https://pages.cs.wisc.edu/~mucai/), [Yuheng Li](https://yuheng-li.github.io/), [Utkarsh Ojha](https://utkarshojha.github.io/), [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)
🦡 University of Wisconsin-Madison
|  |
|:--:|
| Given just a few images of a novel subject (e.g., a dog named ``, a person named ``), Yo'LLaVA learns to facilitate textual/visual conversations centered around that subject. |
☆.。.:*・°☆.。.:*・°
> **Abstract**: Large Multimodal Models (LMMs) have shown remarkable capabilities across a variety of tasks (e.g., image captioning, visual question answering). While broad, their knowledge remains generic (e.g., recognizing a dog), and they are unable to handle personalized subjects (e.g., recognizing a user's pet dog). Human reasoning, in contrast, typically operates within the context of specific subjects in our surroundings. For example, one might ask, "What should I buy for my dog's birthday?"; as opposed to a generic inquiry about "What should I buy for a dog's birthday?". Similarly, when looking at a friend's image, the interest lies in seeing their activities (e.g., "my friend is holding a cat"), rather than merely observing generic human actions (e.g., "a man is holding a cat"). In this paper, we introduce the novel task of personalizing LMMs, so that they can have conversations about a specific subject. We propose Yo'LLaVA, which learns to embed a personalized subject into a set of latent tokens given a handful of example images of the subject. Our qualitative and quantitative analyses reveal that Yo'LLaVA can learn the concept more efficiently using fewer tokens and more effectively encode the visual attributes compared to strong prompting baselines (e.g., LLaVA).
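The core idea (embedding a subject into a handful of learnable latent tokens while the backbone LMM's own knowledge is reused) can be pictured with a short sketch. The snippet below is a toy, self-contained illustration of that soft-prompt idea, not the repo's training code: the class name, the `splice` helper, and the 4096-dim embeddings are all made up for this example, and details such as where the tokens are inserted and which losses are optimized live in `train-multi-token.py`.
```
import torch
import torch.nn as nn

class PersonalizedConcept(nn.Module):
    """Toy illustration: k learnable latent tokens standing in for one subject.

    Conceptually, embeddings like these would be spliced into the LMM's input
    sequence wherever the subject's placeholder token appears, and only they
    receive gradients during personalization.
    """

    def __init__(self, num_latent_tokens=16, embed_dim=4096):
        super().__init__()
        # The only trainable parameters: one embedding per latent token.
        self.latent_tokens = nn.Parameter(0.02 * torch.randn(num_latent_tokens, embed_dim))

    def splice(self, input_embeds, position):
        """Insert the latent tokens into a (seq_len, embed_dim) embedding sequence."""
        return torch.cat(
            [input_embeds[:position], self.latent_tokens, input_embeds[position:]], dim=0
        )

# Toy usage with random stand-ins for the frozen prompt embeddings:
concept = PersonalizedConcept(num_latent_tokens=16, embed_dim=4096)
prompt_embeds = torch.randn(32, 4096)
print(concept.splice(prompt_embeds, position=5).shape)  # torch.Size([48, 4096])
```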
### Training / Testing
**Installation**: This code is directly built on top of [LLaVA](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install). Please follow [LLaVA's installation](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install)!
🚧 Note: This code is under construction 🚧 -- the base code is available, but I have NOT fully tested or optimized it yet -- please check back for updates!
```
python train-multi-token.py --name bo \
--exp_name final5 --prefix_token 16 --epoch 15 \
--model_path ./llava_ckpts/llava_ckpt \
--data_root ./yollava-data/train \
--user_prompt --recog_only --text_only --random_image
```
or run `bash train.sh`
To test, please refer to `test-sks.py` and `test-sks-qa.py`.
### Yo'LLaVA Dataset

> View dataset on HuggingFace Datasets: https://huggingface.co/datasets/thaoshibe/YoLLaVA
To download the dataset, please install Git Large File Storage (LFS) and clone the repository.
The dataset is in the [`yollava-data`](https://github.com/WisconsinAIVision/YoLLaVA/tree/main/yollava-data) folder:
```
git lfs install
git clone https://github.com/WisconsinAIVision/YoLLaVA.git
```
A simple visual question answering JSON file is provided at `yollava-visual-qa.json`, with the following format:
```
{
    "./yollava-data/test/bo/0.png": {
        "question": "What is the primary color of 's fur?",
        "options": {
            "A": "Brown",
            "B": "Grey"
        },
        "correct_answer": "A"
    }
}
```
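For convenience, here is a minimal sketch of how one might score a model on `yollava-visual-qa.json`, assuming the dict-of-dicts layout shown above. `answer_question` is a hypothetical callable standing in for whatever model you evaluate; it takes an image path and a prompt and returns a letter.
```
import json

# Minimal sketch of multiple-choice evaluation on yollava-visual-qa.json.
# Assumes the dict-of-dicts layout shown above.
def evaluate(qa_path, answer_question):
    with open(qa_path) as f:
        qa = json.load(f)

    correct = 0
    for image_path, item in qa.items():
        options = ", ".join(f"{k}: {v}" for k, v in item["options"].items())
        prompt = f"{item['question']} Options: {options}. Answer with the letter only."
        prediction = answer_question(image_path, prompt)
        correct += prediction.strip().upper().startswith(item["correct_answer"])
    return correct / len(qa)

# Dummy run with a "model" that always answers "A":
# accuracy = evaluate("yollava-visual-qa.json", lambda image, prompt: "A")
```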
##### Retrieved Negative Examples
For your convenience, retrieved negative examples are provided in this [Google Drive](https://drive.google.com/drive/folders/1bqM5y0-Kw26R5T4kfaeUAZzeKqIdREdU?usp=sharing).
Please note that these images were retrieved from [LAION-2B with CLIP](https://github.com/rom1504/clip-retrieval/tree/main); we do **NOT** own the rights to these images, and they are provided **purely for research purposes**.
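If you want to approximate a score like the released `clip_score` values locally, one rough option is to embed two images with an off-the-shelf CLIP checkpoint and take their cosine similarity. The sketch below uses Hugging Face `transformers`; the released scores came from the clip-retrieval pipeline above, and the reference images and checkpoint used there are not stated here, so exact numbers will differ.
```
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Rough local approximation of an image-image CLIP similarity.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_similarity(path_a, path_b):
    images = [Image.open(path_a).convert("RGB"), Image.open(path_b).convert("RGB")]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return float(feats[0] @ feats[1])

# e.g. clip_similarity("yollava-data/train/bo/0.png",
#                      "yollava-data/train/bo/negative_example/76618997f6ce14d73ccde567a6c8eabb.png")
```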

Please download `yollava-data.zip` from the [Google Drive](https://drive.google.com/drive/folders/1bqM5y0-Kw26R5T4kfaeUAZzeKqIdREdU?usp=sharing) folder and unzip it.
In the folder, you will also find a JSON file with CLIP similarity scores. Folder structure:
```
yollava-data
├── train
│   ├── bo
│   │   ├── 0.png
│   │   ├── 1.png
│   │   ├── ...
│   │   └── negative_example
│   │       ├── 76618997f6ce14d73ccde567a6c8eabb.png
│   │       ├── eca8f558d3c4423351f45e87fb8ee5f9.png
│   │       ├── ...
│   │       └── scores.json
└── test
    ├── bo
    │   ├── 0.png
    │   ├── 1.png
    │   ├── ...
```
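A quick way to sanity-check a download against this layout is to enumerate each training concept and count its positive images and retrieved negatives; the paths below simply mirror the tree above, and `root` is assumed to point at the unzipped folder.
```
from pathlib import Path

# List each training concept with its number of positive images and
# retrieved negative examples, following the layout shown above.
root = Path("./yollava-data")
for concept_dir in sorted((root / "train").iterdir()):
    if not concept_dir.is_dir():
        continue
    positives = list(concept_dir.glob("*.png"))
    negatives = list((concept_dir / "negative_example").glob("*.png"))
    print(f"{concept_dir.name}: {len(positives)} positives, {len(negatives)} retrieved negatives")
```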
The scores JSON file (`scores.json`) has the following format:
```
{
"image": "51df89957cb840afa91b37db9669fd1b",
"image_path": "/mnt/localssd/code/data/yollava-data/train/bo/negative_example/51df89957cb840afa91b37db9669fd1b.png",
"clip_score": 0.6376656889915466
}
```
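The sketch below ranks the retrieved negatives for one concept by `clip_score`. Two assumptions to note: the path to `scores.json` follows the tree above, and the file is treated as either a list of records like the one shown or a dict keyed by image hash.
```
import json

# Rank the retrieved negatives for one concept by CLIP score.
with open("./yollava-data/train/bo/negative_example/scores.json") as f:
    records = json.load(f)

# Handle either a list of records or a dict keyed by image hash.
records = records if isinstance(records, list) else list(records.values())
ranked = sorted(records, key=lambda r: r["clip_score"], reverse=True)
for r in ranked[:5]:
    print(f"{r['clip_score']:.3f}  {r['image_path']}")
```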
##### Some pretrained concepts
We also provide a library of pretrained concepts in the same [Google Drive](https://drive.google.com/drive/folders/1bqM5y0-Kw26R5T4kfaeUAZzeKqIdREdU?usp=sharing) folder.
`best.pt` is the checkpoint with the highest recognition accuracy on the training set; other checkpoints are also provided in the folder.
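If you want to peek inside a downloaded concept checkpoint, a generic inspection like the one below is usually enough. Treat it as a sketch rather than a documented interface: the actual keys and tensor shapes depend on what `train-multi-token.py` saved.
```
import torch

# Generic inspection of a pretrained-concept checkpoint. On newer PyTorch
# versions you may need torch.load(..., weights_only=False) if the checkpoint
# stores non-tensor objects.
state = torch.load("best.pt", map_location="cpu")
if isinstance(state, dict):
    for key, value in state.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, shape)
else:
    print(type(state))
```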
### BibTeX
```
@inproceedings{yollava,
    author    = {Nguyen, Thao and Liu, Haotian and Li, Yuheng and Cai, Mu and Ojha, Utkarsh and Lee, Yong Jae},
    booktitle = {Advances in Neural Information Processing Systems},
    title     = {Yo\textquotesingle LLaVA: Your Personalized Language and Vision Assistant},
    year      = {2024}
}
```
### Acknowledgement
This code is heavily borrowed from:
- Awesome [LLaVA](https://github.com/haotian-liu/LLaVA)!
- [Textual Inversion](https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py)
Thank you (.❛ ᴗ ❛.)!