Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/facebookresearch/deit
Official DeiT repository
JSON representation (an API fetch sketch follows the listing below)
- Host: GitHub
- URL: https://github.com/facebookresearch/deit
- Owner: facebookresearch
- License: apache-2.0
- Archived: true
- Created: 2020-12-23T17:44:43.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2024-03-15T13:20:37.000Z (8 months ago)
- Last Synced: 2024-09-21T16:17:22.612Z (about 2 months ago)
- Language: Python
- Size: 5.56 MB
- Stars: 4,009
- Watchers: 48
- Forks: 550
- Open Issues: 20
Metadata Files:
- Readme: README.md
- Contributing: .github/CONTRIBUTING.md
- License: LICENSE
- Code of conduct: .github/CODE_OF_CONDUCT.md
- Image: .github/deit.png
Awesome Lists containing this project
- awesome-image-classification - official-pytorch: https://github.com/facebookresearch/deit
- awesome_vision_transformer - code
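Since ecosyste.ms exposes project records like the one above as JSON over its API, the record can also be pulled programmatically. The sketch below is a minimal illustration only; the endpoint path and query parameter are assumptions, not documented routes, so check the service's API docs for the real ones:

```python
# Minimal sketch: fetch a project's JSON record from the ecosyste.ms
# awesome-lists service. The endpoint path, query parameter, and field
# names below are assumptions for illustration, not a documented API.
import requests

BASE = "https://awesome.ecosyste.ms/api/v1"  # assumed base URL

def fetch_project(repo_url: str) -> dict:
    # Hypothetical lookup-by-repository-URL endpoint.
    resp = requests.get(f"{BASE}/projects/lookup",
                        params={"url": repo_url}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    project = fetch_project("https://github.com/facebookresearch/deit")
    # Field names are assumed; inspect the returned dict for actual keys.
    print(project.get("name"), project.get("stars"))
```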
README
# Data-Efficient architectures and training for Image classification
This repository contains PyTorch evaluation code, training code, and pretrained models for the following papers (a minimal model-loading sketch follows the list):
DeiT (Data-Efficient Image Transformers), ICML 2021 [bib]
```
@InProceedings{pmlr-v139-touvron21a,
title = {Training data-efficient image transformers & distillation through attention},
author = {Touvron, Hugo and Cord, Matthieu and Douze, Matthijs and Massa, Francisco and Sablayrolles, Alexandre and Jegou, Herve},
booktitle = {International Conference on Machine Learning},
pages = {10347--10357},
year = {2021},
volume = {139},
month = {July}
}
```
CaiT (Going deeper with Image Transformers), ICCV 2021 [bib]
```
@InProceedings{Touvron_2021_ICCV,
author = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J\'egou, Herv\'e},
title = {Going Deeper With Image Transformers},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {32-42}
}
```
ResMLP (ResMLP: Feedforward networks for image classification with data-efficient training), TPAMI 2022 [bib]
```
@article{touvron2021resmlp,
title={ResMLP: Feedforward networks for image classification with data-efficient training},
author={Hugo Touvron and Piotr Bojanowski and Mathilde Caron and Matthieu Cord and Alaaeldin El-Nouby and Edouard Grave and Gautier Izacard and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Herv\'e J\'egou},
journal={arXiv preprint arXiv:2105.03404},
year={2021},
}
```
PatchConvnet (Augmenting Convolutional networks with attention-based aggregation) [bib]
```
@article{touvron2021patchconvnet,
title={Augmenting Convolutional networks with attention-based aggregation},
author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Piotr Bojanowski and Armand Joulin and Gabriel Synnaeve and Jakob Verbeek and Herve Jegou},
journal={arXiv preprint arXiv:2112.13692},
year={2021},
}
```
3Things (Three things everyone should know about Vision Transformers), ECCV 2022 [bib]
```
@article{Touvron2022ThreeTE,
title={Three things everyone should know about Vision Transformers},
author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Jakob Verbeek and Herve Jegou},
journal={arXiv preprint arXiv:2203.09795},
year={2022},
}
```
DeiT III (DeiT III: Revenge of the ViT), ECCV 2022 [bib]
```
@article{Touvron2022DeiTIR,
title={DeiT III: Revenge of the ViT},
author={Hugo Touvron and Matthieu Cord and Herve Jegou},
journal={arXiv preprint arXiv:2204.07118},
year={2022},
}
```
Cosub (Co-training 2L Submodels for Visual Recognition), CVPR 2023 [bib]
```
@article{Touvron2022Cotraining2S,
title={Co-training 2L Submodels for Visual Recognition},
author={Hugo Touvron and Matthieu Cord and Maxime Oquab and Piotr Bojanowski and Jakob Verbeek and Herv\'e J\'egou},
journal={arXiv preprint arXiv:2212.04884},
year={2022},
}
```
If you find this repository useful, please consider giving it a star ⭐ and citing the relevant papers.
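As a quick start for the pretrained models mentioned above, DeiT variants can be loaded through torch.hub directly from this repository. This is a minimal inference sketch; `deit_base_patch16_224` is one published variant, and real inputs should go through the usual ImageNet resize and normalization rather than the dummy tensor used here:

```python
# Minimal inference sketch: load a pretrained DeiT model via torch.hub.
# 'deit_base_patch16_224' is one published variant; see the repository's
# hubconf for the full list. Requires torch and timm to be installed.
import torch

model = torch.hub.load('facebookresearch/deit:main',
                       'deit_base_patch16_224', pretrained=True)
model.eval()

# Dummy 224x224 RGB batch; real images should be resized, center-cropped,
# and normalized with the standard ImageNet mean/std before inference.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000]) for ImageNet-1k classes
```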
# License
This repository is released under the Apache 2.0 license, as found in the [LICENSE](LICENSE) file.
# Contributing
We actively welcome your pull requests! Please see [CONTRIBUTING.md](.github/CONTRIBUTING.md) and [CODE_OF_CONDUCT.md](.github/CODE_OF_CONDUCT.md) for more info.