# PandaGPT: One Model To Instruction-Follow Them All
![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)
![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)
![Model Weight License](https://img.shields.io/badge/Model_Weight%20License-CC%20By%20NC%204.0-red.svg)
![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)
[Project Page](https://panda-gpt.github.io/) • [Online Demo](https://huggingface.co/spaces/GMFTBY/PandaGPT) • Online Demo-2 (runs fast for users from mainland China) • [Paper](https://github.com/yxuansu/PandaGPT/blob/main/PandaGPT.pdf) • Data • Model • Video

**Team:** [Yixuan Su](https://yxuansu.github.io/)\*, [Tian Lan](https://github.com/gmftbyGMFTBY)\*, [Huayang Li](https://sites.google.com/view/huayangli)\*, Jialu Xu, Yan Wang, and [Deng Cai](https://jcyk.github.io/)\* (Major contributors\*)
****
## Online Demo Demonstration:
Below, we demonstrate some examples of our online [demo](https://huggingface.co/spaces/GMFTBY/PandaGPT). For more generated examples of PandaGPT, please refer to our [webpage](https://panda-gpt.github.io/) or our [paper](https://github.com/yxuansu/PandaGPT/blob/main/PandaGPT.pdf).
(1) In this example, PandaGPT takes an input image and reasons over the user's input.
(2) In this example, PandaGPT takes the joint input from two modalities, i.e., (1) an image of a car and (2) an audio clip of a thunderstorm.
****
## Catalogue:
* 1. Introduction
* 2. Running PandaGPT Demo
* 2.1. Environment Installation
* 2.2. Prepare ImageBind Checkpoint
* 2.3. Prepare Vicuna Checkpoint
* 2.4. Prepare Delta Weights of PandaGPT
* 2.5. Deploying Demo
* 3. Train Your Own PandaGPT
* 3.1. Data Preparation
* 3.2. Training Configurations
* 3.3. Training PandaGPT
* Usage and License Notices
* Citation
* Acknowledgments

****
### 1. Introduction: [Back to Top]
**License** The icons in the image are taken from [this website](https://www.flaticon.com).
PandaGPT is the first foundation model capable of instruction-following across six modalities, without the need for explicit supervision. It demonstrates a diverse set of multimodal capabilities such as complex understanding/reasoning, knowledge-grounded description, and multi-turn conversation.
PandaGPT is a general-purpose instruction-following model that can both see and hear. Our pilot experiments show that PandaGPT can perform complex tasks such as detailed image description generation, writing stories inspired by videos, and answering questions about audio. More interestingly, PandaGPT can take multimodal inputs simultaneously and compose their semantics naturally. For example, PandaGPT can connect how objects look in a photo with how they sound in an audio clip.
****
### 2. Running PandaGPT Demo: [Back to Top]
#### 2.1. Environment Installation:
To install the required environment, please run
```
pip install -r requirements.txt
```
Then install the PyTorch package with the correct CUDA version, for example:
```
pip install torch==1.13.1+cu117 -f https://download.pytorch.org/whl/torch/
```
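To quickly verify that the CUDA-enabled build is active, you can run a one-liner like the following (a minimal check, not part of the official setup):
```bash
# confirm the installed PyTorch version and that a GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```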
#### 2.2. Prepare ImageBind Checkpoint:
You can download the pre-trained ImageBind model using [this link](https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth). After downloading, put the downloaded file (imagebind_huge.pth) in [[./pretrained_ckpt/imagebind_ckpt/]](./pretrained_ckpt/imagebind_ckpt/) directory.
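For example, from the repository root the checkpoint can be fetched directly (a sketch assuming `wget` is available):
```bash
# download imagebind_huge.pth into the directory the demo expects
wget https://dl.fbaipublicfiles.com/imagebind/imagebind_huge.pth -P ./pretrained_ckpt/imagebind_ckpt/
```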
#### 2.3. Prepare Vicuna Checkpoint:
To prepare the pre-trained Vicuna model, please follow the instructions provided [[here]](./pretrained_ckpt#1-prepare-vicuna-checkpoint).
#### 2.4. Prepare Delta Weights of PandaGPT:
|**Base Language Model**|**Maximum Sequence Length**|**Huggingface Delta Weights Address**|
|:-------------:|:-------------:|:-------------:|
|Vicuna-7B (version 0)|512|[openllmplayground/pandagpt_7b_max_len_512](https://huggingface.co/openllmplayground/pandagpt_7b_max_len_512)|
|Vicuna-7B (version 0)|1024|[openllmplayground/pandagpt_7b_max_len_1024](https://huggingface.co/openllmplayground/pandagpt_7b_max_len_1024)|
|Vicuna-13B (version 0)|256|[openllmplayground/pandagpt_13b_max_len_256](https://huggingface.co/openllmplayground/pandagpt_13b_max_len_256)|
|Vicuna-13B (version 0)|400|[openllmplayground/pandagpt_13b_max_len_400](https://huggingface.co/openllmplayground/pandagpt_13b_max_len_400)|

We release the delta weights of PandaGPT trained with different strategies in the table above. After downloading, put the downloaded 7B/13B delta weights file (pytorch_model.pt) in the [./pretrained_ckpt/pandagpt_ckpt/7b/](./pretrained_ckpt/pandagpt_ckpt/7b/) or [./pretrained_ckpt/pandagpt_ckpt/13b/](./pretrained_ckpt/pandagpt_ckpt/13b/) directory. In our [online demo](https://huggingface.co/spaces/GMFTBY/PandaGPT), we use `openllmplayground/pandagpt_7b_max_len_1024` as the default model due to the limitation of computational resources. Better results are expected when switching to `openllmplayground/pandagpt_13b_max_len_400`.
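As an illustration, the 7B delta weights used in the online demo can be fetched with `git-lfs` and copied into place; the same pattern applies to any repository in the table (a sketch, assuming `git-lfs` is installed):
```bash
# clone the Hugging Face weights repository and place pytorch_model.pt
git lfs install
git clone https://huggingface.co/openllmplayground/pandagpt_7b_max_len_1024
cp pandagpt_7b_max_len_1024/pytorch_model.pt ./pretrained_ckpt/pandagpt_ckpt/7b/
```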
#### 2.5. Deploying Demo:
Upon completion of the previous steps, you can run the demo locally as follows:
```bash
cd ./code/
CUDA_VISIBLE_DEVICES=0 python web_demo.py
```
If you run into a `sample_rate` problem, please install `pytorchvideo` from source as follows:
```bash
git clone https://github.com/facebookresearch/pytorchvideo
cd pytorchvideo
pip install --editable ./
```

****
### 3. Train Your Own PandaGPT: [Back to Top]
**Prerequisites:** Before training the model, make sure the environment is properly installed and the checkpoints of ImageBind and Vicuna are downloaded. You can refer to [here](https://github.com/yxuansu/PandaGPT#2-running-pandagpt-demo-back-to-top) for more information.
#### 3.1. Data Preparation:
**Disclaimer:** To ensure the reproducibility of our results, we have released our training dataset. The dataset must be used for research purposes only. The use of the dataset must comply with the licenses of the original sources, i.e., LLaVA and MiniGPT-4. These datasets may be taken down when requested by the original authors.
|**Training Task**|**Dataset Address**|
|:-------------:|:-------------:|
|Visual Instruction-Following|[openllmplayground/pandagpt_visual_instruction_dataset](https://huggingface.co/datasets/openllmplayground/pandagpt_visual_instruction_dataset)|

After downloading, unzip the downloaded files and place them under the [./data/](./data/) directory.
The directory should look like:
```
.
└── ./data/
    ├── pandagpt4_visual_instruction_data.json
    └── /images/
        ├── 000000426538.jpg
        ├── 000000306060.jpg
        └── ...
```
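For example, the dataset repository can be cloned with `git-lfs` and its contents unpacked into `./data/` (a sketch; the exact archive names inside the dataset repository may differ, so check the dataset page):
```bash
# fetch the visual instruction dataset from Hugging Face
git lfs install
git clone https://huggingface.co/datasets/openllmplayground/pandagpt_visual_instruction_dataset
# move pandagpt4_visual_instruction_data.json and the unzipped images/ folder into ./data/
# so that the layout matches the tree above
```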
#### 3.2. Training Configurations:
The table below shows the training hyperparameters used in our experiments. The hyperparameters are selected based on the constraints of our computational resources, i.e., 8 x A100 (40G) GPUs.
|**Base Language Model**|**Training Task**|**Epoch Number**|**Batch Size**|**Learning Rate**|**Maximum Length**|
|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
|7B|Visual Instruction|2|64|5e-4|1024|
|13B|Visual Instruction|2|64|5e-4|400|
#### 3.3. Training PandaGPT:
To train PandaGPT, please run the following commands:
```bash
cd ./code/scripts/
chmod +x train.sh
cd ..
./scripts/train.sh
```
The key arguments of the training script are as follows:
* `--data_path`: The data path for the json file `pandagpt4_visual_instruction_data.json`.
* `--image_root_path`: The root path for the downloaded images.
* `--imagebind_ckpt_path`: The path where the ImageBind checkpoint `imagebind_huge.pth` is saved.
* `--vicuna_ckpt_path`: The directory that contains the pre-trained Vicuna checkpoint.
* `--max_tgt_len`: The maximum sequence length of training instances.
* `--save_path`: The directory where the trained delta weights are saved. This directory will be created automatically.

Note that the epoch number can be set via the `epochs` argument in the [./code/config/openllama_peft.yaml](./code/config/openllama_peft.yaml) file. The `train_micro_batch_size_per_gpu` and `gradient_accumulation_steps` arguments in [./code/dsconfig/openllama_peft_stage_1.json](./code/dsconfig/openllama_peft_stage_1.json) should be set to `2` and `4` for the 7B model, and to `1` and `8` for the 13B model.
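For reference, with 8 GPUs these settings reproduce the effective batch size of 64 reported in the table above: 2 (micro batch per GPU) × 4 (gradient accumulation steps) × 8 (GPUs) = 64 for the 7B model, and 1 × 8 × 8 = 64 for the 13B model.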
****
### Usage and License Notices:
PandaGPT is intended and licensed for research use only. The dataset is released under CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes. The delta weights are also released under CC BY-NC 4.0 (allowing only non-commercial use).
****
### Citation:
If you find PandaGPT useful in your research or applications, please kindly cite it using the following BibTeX:
```
@article{su2023pandagpt,
title={PandaGPT: One Model To Instruction-Follow Them All},
author={Su, Yixuan and Lan, Tian and Li, Huayang and Xu, Jialu and Wang, Yan and Cai, Deng},
journal={arXiv preprint arXiv:2305.16355},
year={2023}
}
```

****
### Acknowledgments:
This repo benefits from [OpenAlpaca](https://github.com/yxuansu/OpenAlpaca), [ImageBind](https://github.com/facebookresearch/ImageBind), [LLaVA](https://github.com/haotian-liu/LLaVA), and [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4). Thanks for their wonderful works!