https://github.com/aimagelab/dico
[BMVC'24 Oral ✨] Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization
- Host: GitHub
- URL: https://github.com/aimagelab/dico
- Owner: aimagelab
- Created: 2024-07-30T10:06:51.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-09-11T12:08:12.000Z (about 1 year ago)
- Last Synced: 2025-04-08T16:35:59.340Z (6 months ago)
- Topics: bmvc2024, caption-generation, captioning-images, image-captioning, vision-and-language
- Language: Python
- Homepage:
- Size: 6.76 MB
- Stars: 17
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization (BMVC 2024 Oral ✨)
#### [Nicholas Moratelli](https://nicholasmoratelli.github.io)\*, [Davide Caffagni](https://github.com/dcaffo98)\*, [Marcella Cornia](https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=90), [Lorenzo Baraldi](https://www.lorenzobaraldi.com/), and [Rita Cucchiara](https://aimagelab.ing.unimore.it/imagelab/person.asp?idpersona=1)
[Paper (arXiv)](https://arxiv.org/pdf/2408.14547)
This repository contains the reference code for the paper [Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization](https://arxiv.org/pdf/2408.14547), **BMVC 2024**.
Please cite with the following BibTeX:
```
@inproceedings{moratelli2024revisiting,
title={{Revisiting Image Captioning Training Paradigm via Direct CLIP-based Optimization}},
author={Moratelli, Nicholas and Caffagni, Davide and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
booktitle={Proceedings of the British Machine Vision Conference},
year={2024}
}
```

## 📣 Latest News 📣
- **`10 September 2024`** Our paper has been selected for oral presentation at **BMVC2024**! ✨
- **`19 July 2024`** Our paper has been accepted for publication at **BMVC2024**!

## Abstract
The conventional training approach for image captioning involves pre-training a network using teacher forcing and subsequent fine-tuning with Self-Critical Sequence Training to maximize hand-crafted captioning metrics. However, when attempting to optimize modern and higher-quality metrics like CLIP-Score and PAC-Score, this training method often encounters instability and fails to acquire the genuine descriptive capabilities needed to produce fluent and informative captions. In this paper, we propose a new training paradigm termed *Direct CLIP-Based Optimization* (DiCO). Our approach jointly learns and optimizes a reward model that is distilled from a learnable captioning evaluator with high human correlation. This is done by solving a weighted classification problem directly inside the captioner. At the same time, DiCO prevents divergence from the original model, ensuring that fluency is maintained. DiCO not only exhibits improved stability and enhanced quality in the generated captions but also aligns more closely with human preferences compared to existing methods, especially in modern metrics. Additionally, it maintains competitive performance in traditional metrics.
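
To make the idea above concrete, the snippet below sketches a DiCO-style objective: a weighted classification loss over several candidate captions per image, anchored to the frozen cross-entropy captioner so the model does not drift from the original distribution. This is only an illustrative reading of the abstract, not the repository's implementation; the function name, tensor shapes, the use of K sampled captions per image, and the `beta` hyper-parameter are all assumptions.

```
# Illustrative sketch only (not the repository's loss): a weighted
# classification objective over K candidate captions per image, with a
# frozen reference captioner preventing divergence from the original model.
import torch
import torch.nn.functional as F


def dico_style_loss(logp_policy, logp_reference, evaluator_scores, beta=0.1):
    """All arguments are (B, K) tensors; names and shapes are assumptions.

    logp_policy:      sequence log-probs of K candidate captions under the
                      captioner being fine-tuned.
    logp_reference:   the same captions scored by the frozen, cross-entropy
                      pre-trained captioner.
    evaluator_scores: rewards from a CLIP-based evaluator (e.g. CLIP-Score
                      or PAC-Score) for the K captions.
    """
    # Implicit reward: how strongly the fine-tuned model prefers each caption
    # relative to the frozen reference model.
    implicit_reward = beta * (logp_policy - logp_reference)

    # Target distribution over the K candidates, derived from the evaluator.
    target = F.softmax(evaluator_scores, dim=-1)

    # Weighted classification: align the model's preference over candidates
    # with the evaluator's preference.
    log_pred = F.log_softmax(implicit_reward, dim=-1)
    return -(target * log_pred).sum(dim=-1).mean()


if __name__ == "__main__":
    B, K = 2, 5  # toy batch: 2 images, 5 candidate captions each
    loss = dico_style_loss(torch.randn(B, K), torch.randn(B, K), torch.randn(B, K))
    print(loss.item())
```

In the repository, cross-entropy pre-training and DiCO fine-tuning are run through the scripts described in the Training section below.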
## Create the environment
```
conda create -y -n "dico" python=3.8.16
conda activate dico
pip install -r requirements.txt
```

## Training
Edit the following scripts with the correct checkpoint paths.
1. **Cross-Entropy pre-training**
```
./scripts/train_xe_coco.sh
```

2. **DiCO fine-tuning**
```
./scripts/train_dico_coco.sh
```

We train and evaluate our models on the [COCO Karpathy splits](https://github.com/karpathy/neuraltalk2). We employ the [webdataset](https://github.com/webdataset/webdataset) format to prepare our datasets. Every `.tar` file complies with the following structure (see also [datasets.json](datasets.json)):
- Cross-entropy
```
├── webdatasets/coco-384-training-000.tar
│ └── 177828__COCO_train2014_000000379613.jpg
│ └── 177828__COCO_train2014_000000379613.txt
│ └── 549457__COCO_val2014_000000195045.jpg
│ └── 549457__COCO_val2014_000000195045.txt
│ └── ...
├── ...
└── webdatasets/coco-384-training-113.tar
└── ...
```
Every `.txt` file contains a single caption.
- DiCO: fine-tuning | validation | test
```
├── webdatasets/coco-384-training-dict-000.tar
│ └── 177828__COCO_train2014_000000379613.jpg
│ └── 177828__COCO_train2014_000000379613.json
│ └── 549457__COCO_val2014_000000195045.jpg
│ └── 549457__COCO_val2014_000000195045.json
│ └── ...
├── ...
└── webdatasets/coco-384-training-dict-022.tar
└── ...
```
Every `.json` file contains all the captions for the given image.
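
As a rough guide, shards in the layout above can be written and read with the `webdataset` library. The snippet below is a minimal sketch under that assumption; the image paths and captions are placeholders, and the repository's own data-preparation pipeline may differ.

```
# Minimal sketch of writing/reading shards in the layout above
# (illustrative only; image paths and captions are placeholders).
import json
import webdataset as wds

samples = [
    ("177828__COCO_train2014_000000379613",
     "images/COCO_train2014_000000379613.jpg",
     ["a plate of food on a table", "a close-up of a meal on a plate"]),
]

# Cross-entropy shard: one `.txt` caption per sample.
sink = wds.TarWriter("webdatasets/coco-384-training-000.tar")
for key, image_path, captions in samples:
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    sink.write({"__key__": key, "jpg": image_bytes,
                "txt": captions[0].encode("utf-8")})
sink.close()

# DiCO shard: one `.json` file with all captions per sample.
sink = wds.TarWriter("webdatasets/coco-384-training-dict-000.tar")
for key, image_path, captions in samples:
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    sink.write({"__key__": key, "jpg": image_bytes,
                "json": json.dumps(captions).encode("utf-8")})
sink.close()

# Reading a set of cross-entropy shards back.
dataset = (
    wds.WebDataset("webdatasets/coco-384-training-{000..113}.tar")
    .decode("pil")
    .to_tuple("jpg", "txt")
)
for image, caption in dataset:
    pass  # feed (image, caption) pairs to the training loop
```

## Inference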
```
./scripts/inference_coco.sh
```

## DiCO Weights
- **ViT-L/14**: the checkpoint is available [here](https://drive.google.com/file/d/19vV-SJYjFKg5-8XfbCQvq8OWB2vVTvpX). It will soon be available on the Hugging Face Hub as well.
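
If you prefer to script the download, the sketch below uses `gdown` (not listed in the repository's requirements) and then peeks at the file with `torch.load`; the output filename and the assumption that the file is a standard PyTorch checkpoint are ours, not the repository's.

```
# Illustrative download/inspection sketch; the output filename and the
# assumption that the file is a plain PyTorch checkpoint are ours.
import gdown
import torch

url = "https://drive.google.com/uc?id=19vV-SJYjFKg5-8XfbCQvq8OWB2vVTvpX"
gdown.download(url, "dico-ViTL14.pth", quiet=False)

state = torch.load("dico-ViTL14.pth", map_location="cpu")
if isinstance(state, dict):
    print(list(state.keys())[:10])  # peek at the first few keys
```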
## Demo
```
python demo.py --checkpoint dico-ViTL14
```