Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
[CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang
https://github.com/vita-group/diverse-vit
diversity oversmoothing regularization training-techniques transformer vision-transformer
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/vita-group/diverse-vit
- Owner: VITA-Group
- License: mit
- Created: 2022-03-07T23:17:08.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2022-03-09T16:27:20.000Z (almost 3 years ago)
- Last Synced: 2024-04-16T07:18:14.857Z (8 months ago)
- Topics: diversity, oversmoothing, regularization, training-techniques, transformer, vision-transformer
- Language: Python
- Homepage:
- Size: 173 KB
- Stars: 24
- Watchers: 8
- Forks: 3
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
Code for the paper: [CVPR 2022] [The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy]().
Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang.
## Overview
Vision transformers (ViTs) have gained increasing popularity as they are commonly believed to have higher modeling capacity and representation flexibility than traditional convolutional networks. However, it is questionable whether such potential has been fully unleashed in practice, as learned ViTs often suffer from over-smoothing, yielding redundant models.
Recent works have made preliminary attempts to identify and alleviate such redundancy, e.g., by regularizing embedding similarity or re-injecting convolution-like structures. However, a “head-to-toe assessment” of the extent of redundancy in ViTs, and of how much could be gained by thoroughly mitigating it, has been absent from this field.
This paper, for the first time, systematically studies the ubiquitous existence of redundancy at all three levels: patch embedding, attention map, and weight space. In view of these findings, we advocate a principle of diversity for training ViTs, presenting corresponding regularizers that encourage representation diversity and coverage at each of those levels and enable the model to capture more discriminative information.
Extensive experiments on ImageNet with a number of ViT backbones validate the effectiveness of our proposals, largely eliminating the observed ViT redundancy and significantly boosting the model generalization. For example, our diversified DeiT obtains 0.70% ∼1.76% accuracy boosts on ImageNet with highly reduced similarity.
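
The sketch below illustrates only the embedding-level idea: a pairwise cosine-similarity penalty over patch tokens that can be added to the task loss. It is a minimal, assumption-laden illustration, not the exact regularizers used in the paper or this repository; the function name, the `0.1` weight, and the mean-absolute-similarity form are hypothetical.
```
import torch
import torch.nn.functional as F

def patch_diversity_penalty(tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (batch, num_patches, dim) patch embeddings from one ViT block."""
    normed = F.normalize(tokens, dim=-1)              # unit-normalize each token
    sim = normed @ normed.transpose(1, 2)             # (batch, N, N) pairwise cosine similarities
    n = sim.size(-1)
    off_diag = sim - torch.eye(n, device=sim.device)  # remove the self-similarity diagonal
    return off_diag.abs().mean()                      # large value = highly redundant (similar) tokens

# Hypothetical usage: add the penalty to the classification loss with a small weight.
# loss = cross_entropy_loss + 0.1 * patch_diversity_penalty(patch_tokens)
```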
## Prerequisites
Install PyTorch 1.7.0+, torchvision 0.8.1+, and [pytorch-image-models 0.3.2](https://github.com/rwightman/pytorch-image-models):
```
conda install -c pytorch pytorch torchvision
pip install timm==0.3.2
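# Optional sanity check (not part of the original README): confirm the installed versions
python -c "import torch, torchvision, timm; print(torch.__version__, torchvision.__version__, timm.__version__)"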
```

## Training on ImageNet
```
./script/run_deit_small_diverse.sh [data/imagenet] (Deit-Small-12layers)
./script/run_deit_small_24layer_diverse.sh [data/imagenet] (Deit-Small-24layers)
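# Illustrative invocation (the path below is a placeholder, not from the original README):
# ./script/run_deit_small_diverse.sh /path/to/imagenet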
```

## Citation
```
TBD
```

## Acknowledgement
https://github.com/facebookresearch/deit