Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.

🔥 🔥 [WACV2024] Mini but Mighty: Finetuning ViTs with Mini Adapters
https://github.com/IemProg/MiMi
- Host: GitHub
- URL: https://github.com/IemProg/MiMi
- Owner: IemProg
- Created: 2023-10-25T12:42:26.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-07-05T11:57:25.000Z (4 months ago)
- Last Synced: 2024-08-02T15:26:01.772Z (3 months ago)
- Language: Python
- Size: 327 KB
- Stars: 18
- Watchers: 4
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
README
# Mini but Mighty: Finetuning ViTs with Mini Adapters (WACV2024)
Telecom-Paris, Institut Polytechnique de Paris

The code repository for "[Mini but Mighty: Finetuning ViTs with Mini Adapters (WACV2024)](https://openaccess.thecvf.com/content/WACV2024/html/Marouf_Mini_but_Mighty_Finetuning_ViTs_With_Mini_Adapters_WACV_2024_paper.html)" in PyTorch.
📣 Published as a conference paper at WACV 2024
## Abstract
Vision Transformers (ViTs) have become one of the dominant architectures in computer vision, and pre-trained ViT models are commonly adapted to new tasks via finetuning. Recent works proposed several parameter-efficient transfer learning methods, such as adapters, to avoid the prohibitive training and storage cost of finetuning.

In this work, we observe that adapters perform poorly when the dimension of adapters is small, and we propose MiMi, a training framework that addresses this issue. We start with large adapters which can reach high performance, and iteratively reduce their size. To enable automatic estimation of the hidden dimension of every adapter, we also introduce a new scoring function, specifically designed for adapters, that compares neuron importance across layers. Our method outperforms existing methods in finding the best trade-off between accuracy and trained parameters across the three dataset benchmarks DomainNet, VTAB, and Multi-task, for a total of 29 datasets.
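To make the recipe above concrete (train a wide bottleneck adapter, score its hidden neurons, then shrink it), here is a minimal PyTorch sketch under our own assumptions: the adapter is a standard residual bottleneck, and `neuron_scores` is a generic magnitude-based importance score normalized per adapter so that scores are comparable across layers. It is an illustration, not the paper's exact scoring function.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. `hidden_dim` is the dimension MiMi shrinks."""
    def __init__(self, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.down = nn.Linear(embed_dim, hidden_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def neuron_scores(adapter: Adapter) -> torch.Tensor:
    # Illustrative importance score (an assumption, not the paper's formula):
    # each hidden neuron is rated by the norms of its incoming and outgoing
    # weights, then normalized per adapter so that scores remain comparable
    # across layers of different widths.
    in_norm = adapter.down.weight.norm(dim=1)   # shape: (hidden_dim,)
    out_norm = adapter.up.weight.norm(dim=0)    # shape: (hidden_dim,)
    scores = in_norm * out_norm
    return scores / scores.sum()
```

Iterative reduction then amounts to repeatedly keeping the top-scoring hidden units of each adapter and re-instantiating it with the smaller `hidden_dim` before continuing training.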
## Requirements
### Environment
Install the conda environment from the provided yml file (for example, `conda env create -f <file>.yml`).

## Running scripts
Please follow the settings in the `exps` folder to prepare your JSON files, and then run:

```
python main.py --config $CONFIG_FILE
python few_shot_prune.py --config $CONFIG_FILE
```
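For orientation, below is a minimal sketch of what a config-driven entry point of this shape typically looks like. The `--config` flag comes from the commands above; the loading logic and any config keys are hypothetical placeholders, and the real schema is defined by the JSON files under `exps`.

```python
import argparse
import json

# Hypothetical sketch of how an entry point like main.py might consume
# its --config argument; the actual keys live in the exps folder.
parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True, help="path to a JSON config file")
args = parser.parse_args()

with open(args.config) as f:
    config = json.load(f)

print(config)  # e.g. dataset, adapter hidden dimension, pruning schedule
```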
## Citation

If you find this work helpful, please cite our paper.
```bibtex
@InProceedings{Marouf_2024_WACV,
author = {Marouf, Imad Eddine and Tartaglione, Enzo and Lathuili\`ere, St\'ephane},
title = {Mini but Mighty: Finetuning ViTs With Mini Adapters},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2024},
pages = {1732-1741}
}
```