Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/salesforce/BLIP
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
- Host: GitHub
- URL: https://github.com/salesforce/BLIP
- Owner: salesforce
- License: bsd-3-clause
- Created: 2022-01-25T01:19:25.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2024-08-05T12:45:49.000Z (5 months ago)
- Last Synced: 2024-10-22T22:20:27.877Z (3 months ago)
- Topics: image-captioning, image-text-retrieval, vision-and-language-pre-training, vision-language, vision-language-transformer, visual-question-answering, visual-reasoning
- Language: Jupyter Notebook
- Homepage:
- Size: 6.34 MB
- Stars: 4,752
- Watchers: 34
- Forks: 628
- Open Issues: 125
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: CODEOWNERS
- Security: SECURITY.md
Awesome Lists containing this project
- Awesome-Reasoning-Foundation-Models - [Code](https://github.com/salesforce/BLIP)
- awesome-multi-modal - https://github.com/salesforce/BLIP
- StarryDivineSky - salesforce/BLIP
- awesome-technostructure - salesforce/BLIP - Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ([:robot: machine-learning](https://github.com/stars/ketsapiwiq/lists/robot-machine-learning))
README
## BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
## Announcement: BLIP is now officially integrated into [LAVIS](https://github.com/salesforce/LAVIS) - a one-stop library for language-and-vision research and applications!
This is the PyTorch code of the BLIP paper [[blog](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/)]. The code has been tested on PyTorch 1.10.
To install the dependencies, run:
```bash
pip install -r requirements.txt
```

### Catalog:
- [x] Inference demo
- [x] Pre-trained and finetuned checkpoints
- [x] Finetuning code for Image-Text Retrieval, Image Captioning, VQA, and NLVR2
- [x] Pre-training code
- [x] Zero-shot video-text retrieval
- [x] Download of bootstrapped pre-training datasets

### Inference demo:
Run our interactive demo using [Colab notebook](https://colab.research.google.com/github/salesforce/BLIP/blob/main/demo.ipynb) (no GPU needed).
The demo includes code for:
1. Image captioning
2. Open-ended visual question answering
3. Multimodal / unimodal feature extraction
4. Image-text matching

Try out the [Web demo](https://huggingface.co/spaces/Salesforce/BLIP), integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio).
A Replicate web demo and Docker image are also available at [![Replicate](https://replicate.com/salesforce/blip/badge)](https://replicate.com/salesforce/blip).
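For a quick local test outside the notebooks, the sketch below generates a caption in the style of the Colab demo. It is a minimal sketch, assuming the `blip_decoder` helper in `models/blip.py`, the preprocessing used by the demo notebook, and the CapFilt-L base checkpoint URL given later in this README; exact argument names may differ between versions.

```python
# Minimal captioning sketch (run from the repo root so models/ is importable).
# Assumptions: blip_decoder() and its arguments follow models/blip.py and the demo notebook;
# 'demo.jpg' is a placeholder image path.
import torch
from PIL import Image
from torchvision import transforms
from models.blip import blip_decoder

device = "cuda" if torch.cuda.is_available() else "cpu"
image_size = 384

# CLIP-style preprocessing as used in the demo notebook (normalization values assumed).
transform = transforms.Compose([
    transforms.Resize((image_size, image_size), interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize((0.48145466, 0.4578275, 0.40821073),
                         (0.26862954, 0.26130258, 0.27577711)),
])
image = transform(Image.open("demo.jpg").convert("RGB")).unsqueeze(0).to(device)

model_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth"
model = blip_decoder(pretrained=model_url, image_size=image_size, vit="base").to(device).eval()

with torch.no_grad():
    caption = model.generate(image, sample=False, num_beams=3, max_length=20, min_length=5)
print("caption:", caption[0])
```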
### Pre-trained checkpoints:
Num. pre-train images | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
14M | Download| - | -
129M | Download | Download | Download

### Finetuned checkpoints:
Task | BLIP w/ ViT-B | BLIP w/ ViT-B and CapFilt-L | BLIP w/ ViT-L
--- | :---: | :---: | :---:
Image-Text Retrieval (COCO) | Download| - | Download
Image-Text Retrieval (Flickr30k) | Download| - | Download
Image Captioning (COCO) | - | Download| Download |
VQA | Download| Download | -
NLVR2 | Download | - | -

### Image-Text Retrieval:
1. Download COCO and Flickr30k datasets from the original websites, and set 'image_root' in configs/retrieval_{dataset}.yaml accordingly.
2. To evaluate the finetuned BLIP model on COCO, run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
     --config ./configs/retrieval_coco.yaml \
     --output_dir output/retrieval_coco \
     --evaluate
   ```
3. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/retrieval_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_retrieval.py \
     --config ./configs/retrieval_coco.yaml \
     --output_dir output/retrieval_coco
   ```
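If you prefer to script the config edit described in step 3 (the same pattern applies to the captioning, VQA, and NLVR2 sections below), here is a minimal sketch using PyYAML; the 'pretrained' and 'image_root' keys are the ones named in this README, and the local image path is a hypothetical placeholder.

```python
# Illustrative helper for editing configs/retrieval_coco.yaml before finetuning.
# Assumes PyYAML is installed; key names follow this README, the image path is a placeholder.
import yaml

cfg_path = "configs/retrieval_coco.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["pretrained"] = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth"
cfg["image_root"] = "/path/to/coco/images/"  # hypothetical local dataset path

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```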
### Image-Text Captioning:
1. Download COCO and NoCaps datasets from the original websites, and set 'image_root' in configs/caption_coco.yaml and configs/nocaps.yaml accordingly.
2. To evaluate the finetuned BLIP model on COCO, run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_caption.py --evaluate
   ```
3. To evaluate the finetuned BLIP model on NoCaps, generate results with the command below (evaluation needs to be performed on the official server):
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 eval_nocaps.py
   ```
4. To finetune the pre-trained checkpoint using 8 A100 GPUs, first set 'pretrained' in configs/caption_coco.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_caption.py
   ```

### VQA:
1. Download VQA v2 dataset and Visual Genome dataset from the original websites, and set 'vqa_root' and 'vg_root' in configs/vqa.yaml.
2. To evaluate the finetuned BLIP model, generate results with the command below (evaluation needs to be performed on the official server):
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_vqa.py --evaluate
   ```
3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/vqa.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth". Then run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=16 train_vqa.py
   ```

### NLVR2:
1. Download NLVR2 dataset from the original websites, and set 'image_root' in configs/nlvr.yaml.
2. To evaluate the finetuned BLIP model, run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 train_nlvr.py --evaluate
   ```
3. To finetune the pre-trained checkpoint using 16 A100 GPUs, first set 'pretrained' in configs/nlvr.yaml as "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base.pth". Then run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=16 train_nlvr.py
   ```

### Finetune with ViT-L:
To finetune a model with ViT-L, simply change the config file to set 'vit' as large. Batch size and learning rate may also need to be adjusted accordingly (please see the paper's appendix for hyper-parameter details). Gradient checkpointing can also be activated in the config file to reduce GPU memory usage.

### Pre-train:
1. Prepare training json files where each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'image': path_of_image, 'caption': text_of_image} (see the sketch after this list).
2. In configs/pretrain.yaml, set 'train_file' as the paths for the json files.
3. Pre-train the model using 8 A100 GPUs:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 pretrain.py --config ./configs/Pretrain.yaml --output_dir output/Pretrain
   ```
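A minimal sketch (referenced in step 1 above) of writing a training json file in the expected format; the image paths, captions, and output filename are placeholders.

```python
# Illustrative: build a pre-training json file with {'image': ..., 'caption': ...} entries.
# Paths, captions, and 'train_example.json' are placeholder names.
import json

samples = [
    {"image": "/path/to/images/0001.jpg", "caption": "a dog running on the beach"},
    {"image": "/path/to/images/0002.jpg", "caption": "two people riding bicycles at sunset"},
]

with open("train_example.json", "w") as f:
    json.dump(samples, f)

# configs/pretrain.yaml should then list this file under 'train_file'.
```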
### Zero-shot video-text retrieval:
1. Download MSRVTT dataset following the instructions from https://github.com/salesforce/ALPRO, and set 'video_root' accordingly in configs/retrieval_msrvtt.yaml.
2. Install [decord](https://github.com/dmlc/decord) with:
   ```bash
   pip install decord
   ```
3. To perform zero-shot evaluation, run:
   ```bash
   python -m torch.distributed.run --nproc_per_node=8 eval_retrieval_video.py
   ```
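For reference, the snippet below shows how decord is commonly used to decode and uniformly sample frames from a video; it is an illustrative, standalone example rather than this repo's exact video loader, and 'video.mp4' is a placeholder path.

```python
# Illustrative use of decord to sample frames from a video (not this repo's loader).
import numpy as np
from decord import VideoReader, cpu

vr = VideoReader("video.mp4", ctx=cpu(0))  # placeholder path
num_frames = 8  # number of frames to sample uniformly

indices = np.linspace(0, len(vr) - 1, num=num_frames).astype(int)
frames = vr.get_batch(indices).asnumpy()  # (num_frames, H, W, 3), uint8
print(frames.shape)
```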
### Pre-training datasets download:
We provide bootstrapped pre-training datasets as json files. Each json file contains a list. Each item in the list is a dictionary with two key-value pairs: {'url': url_of_image, 'caption': text_of_image} (see the download sketch after the table).

Image source | Filtered web caption | Filtered synthetic caption by ViT-B | Filtered synthetic caption by ViT-L
--- | :---: | :---: | :---:
CC3M+CC12M+SBU | Download| Download| Download
LAION115M | Download | Download | Download
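As a starting point for fetching the images behind these json files, here is a minimal download sketch; the input filename, output directory, and sample limit are placeholders, and a real run would need retries, rate limiting, and more robust error handling.

```python
# Illustrative downloader for a bootstrapped dataset json file ({'url': ..., 'caption': ...}).
# 'ccs_filtered.json', 'images/', and the 100-item slice are placeholders for illustration.
import json
import os
import requests

with open("ccs_filtered.json") as f:
    items = json.load(f)

os.makedirs("images", exist_ok=True)
pairs = []  # {'image': local_path, 'caption': text} entries for pre-training

for i, item in enumerate(items[:100]):
    local_path = os.path.join("images", f"{i:08d}.jpg")
    try:
        resp = requests.get(item["url"], timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        continue  # skip dead links
    with open(local_path, "wb") as out:
        out.write(resp.content)
    pairs.append({"image": local_path, "caption": item["caption"]})

with open("pretrain_subset.json", "w") as out:
    json.dump(pairs, out)
```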
### Citation
If you find this code to be useful for your research, please consider citing:
```bibtex
@inproceedings{li2022blip,
      title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
      author={Junnan Li and Dongxu Li and Caiming Xiong and Steven Hoi},
      year={2022},
      booktitle={ICML},
}
```

### Acknowledgement
The implementation of BLIP relies on resources from ALBEF, Huggingface Transformers, and timm. We thank the original authors for open-sourcing their work.