# awesome-visual-attention

A curated list of visual attention modules.
https://github.com/hongyuanyu/awesome-visual-attention
## Papers
### Efficient Vision Transformer

- SOFT: Softmax-free Transformer with Linear Complexity [[PyTorch](https://github.com/fudan-zvg/SOFT)][[Website](https://fudan-zvg.github.io/SOFT/)]
- Multi-Scale Vision Longformer [[PyTorch](https://github.com/microsoft/vision-longformer)]
- Chasing Sparsity in Vision Transformers (SViTE) [[PyTorch](https://github.com/VITA-Group/SViTE)]
- Escaping the Big Data Paradigm with Compact Transformers [[PyTorch](https://github.com/SHI-Labs/Compact-Transformers)]
- Long-Short Transformer (Transformer-LS) [[PyTorch](https://github.com/NVIDIA/transformer-ls)]
- PaddleViT [[Paddle](https://github.com/BR-IDL/PaddleViT)]
- Anti-Oversmoothing in Deep Vision Transformers [[PyTorch](https://github.com/VITA-Group/ViT-Anti-Oversmoothing)]
- Lite Vision Transformer (LVT) [[PyTorch](https://github.com/Chenglin-Yang/LVT)]
- A-ViT: Adaptive Tokens for Efficient Vision Transformer [[Website](https://a-vit.github.io/)]
- Reversible Vision Transformers [[PyTorch-1](https://github.com/karttikeya/minREV)][[PyTorch-2](https://github.com/facebookresearch/slowfast)]
- EdgeViTs [[PyTorch](https://github.com/saic-fi/edgevit)]
- M³ViT: Mixture-of-Experts Vision Transformer [[PyTorch](https://github.com/VITA-Group/M3ViT)]
- EfficientFormer [[PyTorch](https://github.com/snap-research/EfficientFormer)]
- GhostNetV2 [[PyTorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch)]
- Neighborhood Attention Transformer (NAT) [[PyTorch](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer)]
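Most entries above attack the quadratic cost of self-attention, via token pruning, local or sparse attention, or softmax-free/linear attention. As a rough illustration of the last family only, and not the method of any single paper above, here is a kernelized linear attention sketch; the `elu + 1` feature map is one common choice and is an assumption here:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention: cost O(N * d^2) instead of the
    O(N^2 * d) of softmax attention. Shapes: (batch, heads, N, dim)."""
    q = F.elu(q) + 1.0  # positive feature map standing in for softmax
    k = F.elu(k) + 1.0
    kv = torch.einsum("bhnd,bhne->bhde", k, v)             # summarize keys/values once
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps  # per-query normalizer
    return torch.einsum("bhnd,bhde->bhne", q, kv) / z.unsqueeze(-1)

# 196 tokens (a 14x14 feature map), 8 heads of dim 64
q, k, v = (torch.randn(2, 8, 196, 64) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([2, 8, 196, 64])
```

Because keys and values are aggregated into a d×d summary before the queries touch them, cost grows linearly in the token count, which is the property these papers exploit at high resolution.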
### Conv + Transformer
- CoaT: Co-Scale Conv-Attentional Image Transformers [[PyTorch](https://github.com/mlpc-ucsd/CoaT)]
- ConTNet [[PyTorch](https://github.com/yan-hao-tian/ConTNet)]
- MobileViT [[PyTorch](https://github.com/apple/ml-cvnets)]
- How to Train Vision Transformer on Small-scale Datasets? [[PyTorch](https://github.com/hananshafi/vits-for-small-scale-datasets)]
- Inception Transformer (iFormer) [[PyTorch](https://github.com/sail-sg/iFormer)]
- Separable Self-attention for Mobile Vision Transformers (MobileViTv2) [[PyTorch](https://github.com/apple/ml-cvnets)]
- UniFormer [[PyTorch](https://github.com/Sense-X/UniFormer)]
- MetaFormer Baselines for Vision [[PyTorch](https://github.com/sail-sg/metaformer)]
- Visual Attention Network (VAN) [[PyTorch](https://github.com/Visual-Attention-Network)]
- SparK: Sparse and Hierarchical Masked Modeling [[PyTorch](https://github.com/keyu-tian/SparK)]
- MOAT: Alternating Mobile Convolution and Attention [[TensorFlow](https://github.com/google-research/deeplab2)]
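The hybrid designs above interleave convolution, which supplies a local inductive bias, with attention, which supplies global context. A minimal sketch of that pattern, not any specific paper's block; the depthwise-conv-then-attention ordering, pre-norm placement, and dimensions are assumptions for illustration:

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Toy conv + transformer block: a depthwise conv mixes local
    neighborhoods, then self-attention over the flattened feature map
    mixes globally. The papers above differ in ordering and detail."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        x = x + self.local(x)                    # local (convolutional) mixing
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # tokens: (B, H*W, C)
        n = self.norm(seq)
        attn_out, _ = self.attn(n, n, n)         # global (attention) mixing
        seq = seq + attn_out                     # pre-norm residual
        return seq.transpose(1, 2).reshape(b, c, h, w)

print(HybridBlock()(torch.randn(1, 64, 14, 14)).shape)  # (1, 64, 14, 14)
```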
## Mix Domain

- Split-Attention Networks (ResNeSt) [[PyTorch](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/resnest.py)]
- CBAM: Convolutional Block Attention Module
- A²-Nets: Double Attention Networks
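Mix-domain modules gate features along both the channel and the spatial axis; CBAM is the canonical example. A compact PyTorch rendering of a CBAM-style block (the reduction ratio of 16 and the 7x7 spatial kernel follow the paper's defaults):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of CBAM-style mixed attention: channel attention first,
    then spatial attention, matching the CBAM paper's ordering."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))         # channel gate from avg-pooled
        mx = self.mlp(x.amax(dim=(2, 3)))          # ... and max-pooled descriptors
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))  # (B, 1, H, W) spatial gate

print(CBAM(64)(torch.randn(2, 64, 14, 14)).shape)
```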
## Channel Domain

- Squeeze-and-Excitation Networks
- Effective Squeeze-Excitation
- SKNet: Selective Kernel Networks
- FcaNet: Frequency Channel Attention Networks
- Triplet Attention [[PyTorch](https://github.com/landskape-ai/triplet-attention/blob/master/MODELS/triplet_attention.py)]
- ECA-Net: Efficient Channel Attention
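Channel-domain modules reweight feature maps per channel. The squeeze-and-excitation block that most of the entries above refine looks roughly like this minimal sketch (a reduction ratio of 16 is the SE paper's default):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-average-pool ("squeeze"), a
    bottleneck MLP ("excitation"), then channelwise rescaling."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze to (B, C), then excite
        return x * w.view(b, c, 1, 1)     # rescale each channel

print(SEBlock(64)(torch.randn(2, 64, 14, 14)).shape)
```

Variants like ECA-Net and FcaNet keep this squeeze-excite-rescale skeleton but replace the bottleneck MLP with a cheaper or frequency-aware weighting.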
## Spatial Domain

- Non-local Neural Networks [[PyTorch](https://github.com/AlexHex7/Non-local_pytorch)]
- SAGAN: Self-Attention Generative Adversarial Networks
- ISA: Interlaced Sparse Self-Attention [[PyTorch](https://github.com/openseg-group/openseg.pytorch/blob/master/lib/models/modules/isa_block.py)]
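Spatial-domain modules let every position aggregate information from all other positions. A minimal embedded-Gaussian non-local block, following the structure of the Non-local Neural Networks paper (zero-initializing the output projection keeps a freshly inserted block an identity, as the paper notes):

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal non-local block: pairwise affinities between all H*W
    positions weight a sum of value features, added back residually."""
    def __init__(self, channels: int):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)   # query embedding
        self.phi = nn.Conv2d(channels, inter, 1)     # key embedding
        self.g = nn.Conv2d(channels, inter, 1)       # value embedding
        self.out = nn.Conv2d(inter, channels, 1)
        nn.init.zeros_(self.out.weight)              # start as identity
        nn.init.zeros_(self.out.bias)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        k = self.phi(x).flatten(2)                         # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)           # (B, HW, C/2)
        attn = torch.softmax(q @ k, dim=-1)                # (B, HW, HW) affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                             # residual connection

print(NonLocalBlock(64)(torch.randn(2, 64, 14, 14)).shape)
```

The HW x HW affinity matrix is what makes the block quadratic in resolution; ISA's interlacing is one way to sparsify exactly this step.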
## Lightweight Transformer Operator