Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Westlake-AI/Awesome-Mixup
Awesome List of Mixup Augmentation Papers for Visual Representation Learning
List: Awesome-Mixup
awesome-list awesome-mixup computer-vision data-augmentation deep-learning image-classification mixup self-supervised-learning
Last synced: 16 days ago
Awesome List of Mixup Augmentation Papers for Visual Representation Learning
- Host: GitHub
- URL: https://github.com/Westlake-AI/Awesome-Mixup
- Owner: Westlake-AI
- License: apache-2.0
- Created: 2022-09-20T18:50:55.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-04-30T22:08:12.000Z (8 months ago)
- Last Synced: 2024-05-23T04:04:14.753Z (7 months ago)
- Topics: awesome-list, awesome-mixup, computer-vision, data-augmentation, deep-learning, image-classification, mixup, self-supervised-learning
- Homepage: https://openmixup.readthedocs.io/en/latest/awesome_mixups/Mixup_SL.html
- Size: 313 KB
- Stars: 86
- Watchers: 5
- Forks: 7
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: .github/CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- ultimate-awesome - Awesome-Mixup - Awesome List of Mixup Augmentation Papers for Visual Representation Learning. (Other Lists / Monkey C Lists)
README
# Awesome-Mixup
[![Awesome](https://awesome.re/badge.svg)](https://awesome.re) ![GitHub stars](https://img.shields.io/github/stars/Westlake-AI/Awesome-Mixup?color=green) ![GitHub forks](https://img.shields.io/github/forks/Westlake-AI/Awesome-Mixup?color=yellow&label=Fork)
Welcome to Awesome-Mixup, a carefully curated survey of **Mixup** algorithms implemented in PyTorch, aiming to meet the varied needs of the research community. **Mixup** is a family of methods that aim to alleviate model overfitting and poor generalization. As a *"data-centric"* approach, Mixup can be applied to various training paradigms and data modalities.
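For readers new to the topic, below is a minimal sketch of the vanilla mixup operation that most of the methods listed here build on: inputs and labels are convexly interpolated with a ratio sampled from a Beta distribution. This is an illustrative PyTorch snippet only, not the reference implementation of any particular paper; the function name `mixup_batch` and its arguments are placeholders of our own choosing.

```python
import numpy as np
import torch

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 1.0):
    """Vanilla mixup: interpolate a mini-batch with a shuffled copy of itself.

    x: batch of inputs, e.g. images of shape (B, C, H, W).
    y: batch of one-hot (or soft) labels of shape (B, num_classes).
    alpha: concentration of the Beta(alpha, alpha) prior on the mixing ratio.
    """
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))            # random pairing inside the batch
    mixed_x = lam * x + (1.0 - lam) * x[index]   # interpolate inputs
    mixed_y = lam * y + (1.0 - lam) * y[index]   # interpolate labels with the same ratio
    return mixed_x, mixed_y
```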
If this repository has been helpful to you, please consider giving it a ⭐️ to show your support. Your support helps us reach more researchers and contributes to the growth of this resource. Thank you!
## Introduction
**We summarize awesome mixup data augmentation methods for visual representation learning in various scenarios from 2018 to 2024.**
The list of awesome mixup augmentation methods is summarized in chronological order and is continuously updated. The main branch is adapted from [Awesome-Mixup](https://github.com/Westlake-AI/openmixup/docs/en/awesome_mixups) in [OpenMixup](https://github.com/Westlake-AI/openmixup) and [Awesome-Mix](https://github.com/ChengtaiCao/Awesome-Mix), and we are working on a comprehensive survey of mixup augmentations. You can read our survey, [**A Survey on Mixup Augmentations and Beyond**](https://arxiv.org/abs/2409.05202), for more detailed information.
* To find related papers and their relationships, check out [Connected Papers](https://www.connectedpapers.com/), which visualizes the academic field in a graph representation.
* To export BibTeX citations of papers, check out the paper's page on [arXiv](https://arxiv.org/) or [Semantic Scholar](https://www.semanticscholar.org/) for standard reference formats.
## Figure of Contents
You can directly view the figure of the mixup augmentation methods that we summarize.
## Table of Contents
- Sample Mixup Policies in SL
  - Static Linear
  - Feature-based
  - Cutting-based
  - K Samples Mixup
  - Random Policies
  - Style-based
  - Saliency-based
  - Attention-based
  - Generating Samples
- Label Mixup Policies in SL
  - Optimizing Calibration
  - Area-based
  - Loss Object
  - Random Label Policies
  - Optimizing Mixing Ratio
  - Generating Label
  - Attention Score
  - Saliency Token
- Self-Supervised Learning
- Semi-Supervised Learning
- CV Downstream Tasks
- Training Paradigms
  - Federated Learning
  - Adversarial Attack and Adversarial Training
  - Domain Adaptation
  - Knowledge Distillation
  - Multi-Modal
- Beyond Vision
- Analysis and Theorem
- Survey
- Benchmark
- Classification Results on Datasets
- Related Datasets Link
- Contribution
- License
- Acknowledgement
- Related Project
### Sample Mixup Policies in SL
#### **Static Linear**
* **mixup: Beyond Empirical Risk Minimization**
*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*
ICLR'2018 [[Paper](https://arxiv.org/abs/1710.09412)]
[[Code](https://github.com/facebookresearch/mixup-cifar10)]
MixUp Framework
* **Between-class Learning for Image Classification**
*Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada*
CVPR'2018 [[Paper](https://arxiv.org/abs/1711.10284)]
[[Code](https://github.com/mil-tokyo/bc_learning_image)]
BC Framework
* **Preventing Manifold Intrusion with Locality: Local Mixup**
*Raphael Baena, Lucas Drumetz, Vincent Gripon*
EUSIPCO'2022 [[Paper](https://arxiv.org/abs/2201.04368)]
LocalMixup Framework
* **AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty**
*Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan*
ICLR'2020 [[Paper](https://arxiv.org/abs/1912.02781)]
[[Code](https://github.com/google-research/augmix)]
AugMix Framework
* **DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness**
*Ryuichiro Hataya, Hideki Nakayama*
arXiv'2021 [[Paper](https://openreview.net/pdf?id=0n3BaVlNsHI)]
DJMix Framework
* **PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures**
*Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt*
CVPR'2022 [[Paper](https://arxiv.org/abs/2112.05135)]
[[Code](https://github.com/andyzoujm/pixmix)]
PixMix Framework
* **IPMix: Label-Preserving Data Augmentation Method for Training Robust Classifiers**
*Zhenglin Huang, Xiaoan Bao, Na Zhang, Qingqi Zhang, Xiaomei Tu, Biao Wu, Xi Yang*
NIPS'2023 [[Paper](https://arxiv.org/abs/2310.04780)]
[[Code](https://github.com/hzlsaber/IPMix)]
IPMix Framework
#### **Feature-based**
* **Manifold Mixup: Better Representations by Interpolating Hidden States**
*Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio*
ICML'2019 [[Paper](https://arxiv.org/abs/1806.05236)]
[[Code](https://github.com/vikasverma1077/manifold_mixup)]
ManifoldMix Framework
* **PatchUp: A Regularization Technique for Convolutional Neural Networks**
*Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar*
arXiv'2020 [[Paper](https://arxiv.org/abs/2006.07794)]
[[Code](https://github.com/chandar-lab/PatchUp)]
PatchUp Framework
* **On Feature Normalization and Data Augmentation**
*Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger*
CVPR'2021 [[Paper](https://arxiv.org/abs/2002.11102)]
[[Code](https://github.com/Boyiliee/MoEx)]
MoEx Framework
* **Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN**
*Minsoo Kang, Minkoo Kang, Suhyun Kim*
AAAI'2024 [[Paper](https://arxiv.org/abs/2401.13193)]
Catch-Up-Mix Framework
#### **Cutting-based**
* **CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features**
*Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo*
ICCV'2019 [[Paper](https://arxiv.org/abs/1905.04899)]
[[Code](https://github.com/clovaai/CutMix-PyTorch)]
CutMix Framework
* **Improved Mixed-Example Data Augmentation**
*Cecilia Summers, Michael J. Dinneen*
WACV'2019 [[Paper](https://arxiv.org/abs/1805.11272)]
[[Code](https://github.com/ceciliaresearch/MixedExample)]
MixedExamples Framework
* **Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy**
*Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu*
arXiv'2019 [[Paper](https://arxiv.org/abs/1911.09307)]
Pani VAT Framework
* **FMix: Enhancing Mixed Sample Data Augmentation**
*Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare*
arXiv'2020 [[Paper](https://arxiv.org/abs/2002.12047)]
[[Code](https://github.com/ecs-vlc/FMix)]
FMix Framework
* **SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers**
*Jin-Ha Lee, Muhammad Zaigham Zaheer, Marcella Astrid, Seung-Ik Lee*
CVPRW'2020 [[Paper](https://openaccess.thecvf.com/content_CVPRW_2020/html/w45/Lee_SmoothMix_A_Simple_Yet_Effective_Data_Augmentation_to_Train_Robust_CVPRW_2020_paper.html)]
[[Code](https://github.com/Westlake-AI/openmixup)]
SmoothMix Framework
* **GridMix: Strong regularization through local context mapping**
*Kyungjune Baek, Duhyeon Bang, Hyunjung Shim*
Pattern Recognition'2021 [[Paper](https://www.sciencedirect.com/science/article/pii/S0031320320303976)]
[[Code](https://github.com/IlyaDobrynin/GridMixup)]
GridMixup Framework
* **ResizeMix: Mixing Data with Preserved Object Information and True Labels**
*Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, Xinggang Wang*
arXiv'2020 [[Paper](https://arxiv.org/abs/2012.11101)]
[[Code](https://github.com/Westlake-AI/openmixup)]
ResizeMix Framework
* **StackMix: A complementary Mix algorithm**
*John Chen, Samarth Sinha, Anastasios Kyrillidis*
UAI'2022 [[Paper](https://arxiv.org/abs/2011.12618)]
StackMix Framework
* **SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation**
*Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi*
arXiv'2022 [[Paper](https://arxiv.org/abs/2204.08458)]
[[Code](https://github.com/hammoudiproject/SuperpixelGridMasks)]
SuperpixelGridCut Framework
* **A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective**
*Chanwoo Park, Sangdoo Yun, Sanghyuk Chun*
NIPS'2022 [[Paper](https://arxiv.org/abs/2208.09913)]
[[Code](https://github.com/naver-ai/hmix-gmix)]
MSDA Framework
* **You Only Cut Once: Boosting Data Augmentation with a Single Cut**
*Junlin Han, Pengfei Fang, Weihao Li, Jie Hong, Mohammad Ali Armin, Ian Reid, Lars Petersson, Hongdong Li*
ICML'2022 [[Paper](https://arxiv.org/abs/2201.12078)]
[[Code](https://github.com/JunlinHan/YOCO)]
YOCO Framework
* **StarLKNet: Star Mixup with Large Kernel Networks for Palm Vein Identification**
*Xin Jin, Hongyu Zhu, Mounîm A. El Yacoubi, Hongchao Liao, Huafeng Qin, Yun Jiang*
arXiv'2024 [[Paper](https://arxiv.org/abs/2405.12721)]
StarMix Framework
#### **K Samples Mixup**
* **You Only Look Once: Unified, Real-Time Object Detection**
*Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi*
CVPR'2016 [[Paper](https://arxiv.org/abs/1506.02640)]
[[Code](https://pjreddie.com/darknet/yolo/#google_vignette)]
Mosaic
* **Data Augmentation using Random Image Cropping and Patching for Deep CNNs**
*Ryo Takahashi, Takashi Matsubara, Kuniaki Uehara*
IEEE TCSVT'2020 [[Paper](https://arxiv.org/abs/1811.09030)]
RICAP
* **k-Mixup Regularization for Deep Learning via Optimal Transport**
*Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, Edward Chien*
arXiv'2021 [[Paper](https://arxiv.org/abs/2106.02933)]
k-Mixup Framework
* **Observations on K-image Expansion of Image-Mixing Augmentation for Classification**
*Joonhyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi*
IEEE Access'2021 [[Paper](https://arxiv.org/abs/2110.04248)]
[[Code](https://github.com/yjyoo3312/DCutMix-PyTorch)]
DCutMix Framework
* **MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks**
*Alexandre Rame, Remy Sun, Matthieu Cord*
ICCV'2021 [[Paper](https://arxiv.org/abs/2103.06132)]
MixMo Framework
* **Cut-Thumbnail: A Novel Data Augmentation for Convolutional Neural Network**
*Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu*
ACM MM'2021 [[Paper](https://arxiv.org/abs/2103.05342)]
Cut-Thumbnail
#### **Random Policies**
* **RandomMix: A mixed sample data augmentation method with multiple mixed modes**
*Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie*
arXiv'2022 [[Paper](https://arxiv.org/abs/2205.08728)]
RandomMix Framework
* **AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance**
*Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie*
ICME'2022 [[Paper](https://arxiv.org/abs/2207.10290)]
AugRmixAT Framework
#### **Style-based**
* **StyleMix: Separating Content and Style for Enhanced Data Augmentation**
*Minui Hong, Jinwoo Choi, Gunhee Kim*
CVPR'2021 [[Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_StyleMix_Separating_Content_and_Style_for_Enhanced_Data_Augmentation_CVPR_2021_paper.pdf)]
[[Code](https://github.com/alsdml/StyleMix)]
StyleMix Framework
* **Domain Generalization with MixStyle**
*Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang*
ICLR'2021 [[Paper](https://openreview.net/forum?id=6xHJ37MVxxp)]
[[Code](https://github.com/KaiyangZhou/mixstyle-release)]
MixStyle Framework
* **AlignMix: Improving representation by interpolating aligned features**
*Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis*
CVPR'2022 [[Paper](https://arxiv.org/abs/2103.15375)]
[[Code](https://github.com/shashankvkt/AlignMixup_CVPR22)]
AlignMixup Framework
* **Embedding Space Interpolation Beyond Mini-Batch, Beyond Pairs and Beyond Examples**
*Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis*
NIPS'2023 [[Paper](https://arxiv.org/abs/2206.14868)]
MultiMix Framework
#### **Saliency-based**
* **SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization**
*A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae*
ICLR'2021 [[Paper](https://arxiv.org/abs/2006.01791)]
[[Code](https://github.com/SaliencyMix/SaliencyMix)]
SaliencyMix Framework
* **Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification**
*Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, Marios Savvides*
ICASSP'2020 [[Paper](https://arxiv.org/abs/2003.13048)]
[[Code](https://github.com/xden2331/attentive_cutmix)]
AttentiveMix Framework
* **SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data**
*Shaoli Huang, Xinchao Wang, Dacheng Tao*
AAAI'2021 [[Paper](https://arxiv.org/abs/2012.04846)]
[[Code](https://github.com/Shaoli-Huang/SnapMix)]
SnapMix Framework
* **Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition**
*Hao Li, Xiaopeng Zhang, Hongkai Xiong, Qi Tian*
VCIP'2020 [[Paper](https://arxiv.org/abs/2004.02684)]
AttributeMix Framework
* **Where to Cut and Paste: Data Regularization with Selective Features**
*Jiyeon Kim, Ik-Hee Shin, Jong-Ryul Lee, Yong-Ju Lee*
ICTC'2020 [[Paper](https://ieeexplore.ieee.org/abstract/document/9289404)]
[[Code](https://github.com/google-research/augmix)]
FocusMix Framework
* **PuzzleMix: Exploiting Saliency and Local Statistics for Optimal Mixup**
*Jang-Hyun Kim, Wonho Choo, Hyun Oh Song*
ICML'2020 [[Paper](https://arxiv.org/abs/2009.06962)]
[[Code](https://github.com/snu-mllab/PuzzleMix)]
PuzzleMix Framework
* **Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity**
*Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song*
ICLR'2021 [[Paper](https://arxiv.org/abs/2102.03065)]
[[Code](https://github.com/snu-mllab/Co-Mixup)]
Co-Mixup Framework
* **SuperMix: Supervising the Mixing Data Augmentation**
*Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi*
CVPR'2021 [[Paper](https://arxiv.org/abs/2003.05034)]
[[Code](https://github.com/alldbi/SuperMix)]
SuperMix Framework
* **AutoMix: Unveiling the Power of Mixup for Stronger Classifiers**
*Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, Stan Z. Li*
ECCV'2022 [[Paper](https://arxiv.org/abs/2103.13027)]
[[Code](https://github.com/Westlake-AI/openmixup)]
AutoMix Framework
* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**
*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*
arXiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
[[Code](https://github.com/Westlake-AI/openmixup)]
SAMix Framework
* **RecursiveMix: Mixed Learning with History**
*Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang*
NIPS'2022 [[Paper](https://arxiv.org/abs/2203.06844)]
[[Code](https://github.com/implus/RecursiveMix-pytorch)]
RecursiveMix Framework
* **TransformMix: Learning Transformation and Mixing Strategies for Sample-mixing Data Augmentation**
*Tsz-Him Cheung, Dit-Yan Yeung*
OpenReview'2023 [[Paper](https://openreview.net/forum?id=-1vpxBUtP0B)]
TransformMix Framework
* **GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps**
*Minsoo Kang, Suhyun Kim*
AAAI'2023 [[Paper](https://arxiv.org/abs/2306.16612)]
[[Code](https://github.com/3neutronstar/GuidedMixup)]
GuidedMixup Framework
* **GradSalMix: Gradient Saliency-Based Mix for Image Data Augmentation**
*Tao Hong, Ya Wang, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Jinwen Ma*
ICME'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10219625)]
GradSalMix Framework
* **LGCOAMix: Local and Global Context-and-Object-Part-Aware Superpixel-Based Data Augmentation for Deep Visual Recognition**
*Fadi Dornaika, Danyang Sun*
TIP'2023 [[Paper](https://ieeexplore.ieee.org/document/10348509)]
[[Code](https://github.com/DanielaPlusPlus/LGCOAMix)]
LGCOAMix Framework
* **Adversarial AutoMixup**
*Huafeng Qin, Xin Jin, Yun Jiang, Mounim A. El-Yacoubi, Xinbo Gao*
ICLR'2024 [[Paper](https://arxiv.org/abs/2312.11954)]
[[Code](https://github.com/jinxins/adversarial-automixup)]
AdAutoMix Framework
#### **Attention-based**
* **TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers**
*Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu*
ECCV'2022 [[Paper](https://arxiv.org/abs/2207.08409)]
[[Code](https://github.com/Sense-X/TokenMix)]
TokenMix Framework
* **TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers**
*Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim*
NIPS'2022 [[Paper](https://arxiv.org/abs/2210.07562)]
[[Code](https://github.com/mlvlab/TokenMixup)]
TokenMixup Framework
* **ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification**
*Thomas Stegmüller, Behzad Bozorgtabar, Antoine Spahr, Jean-Philippe Thiran*
WACV'2023 [[Paper](https://arxiv.org/abs/2202.07570)]
ScoreMix Framework
* **MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer**
*Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu*
ICLR'2023 [[Paper](https://openreview.net/forum?id=dRjWsd3gwsm)]
[[Code](https://github.com/fistyee/MixPro)]
MixPro Framework
* **SMMix: Self-Motivated Image Mixing for Vision Transformers**
*Mengzhao Chen, Mingbao Lin, ZhiHang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji*
ICCV'2023 [[Paper](https://arxiv.org/abs/2212.12977)]
[[Code](https://github.com/chenmnz/smmix)]
SMMix Framework
#### **Generating Samples**
* **Data Augmentation via Latent Space Interpolation for Image Classification**
*Xiaofeng Liu, Yang Zou, Lingsheng Kong, Zhihui Diao, Junliang Yan, Jun Wang, Site Li, Ping Jia, Jane You*
ICPR'2018 [[Paper](https://ieeexplore.ieee.org/abstract/document/8545506)]
AEE Framework
* **On Adversarial Mixup Resynthesis**
*Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R Devon Hjelm, Yoshua Bengio, Christopher Pal*
NIPS'2019 [[Paper](https://arxiv.org/abs/1903.02709)]
[[Code](https://github.com/christopher-beckham/amr)]
AMR Framework
* **AutoMix: Mixup Networks for Sample Interpolation via Cooperative Barycenter Learning**
*Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha*
ECCV'2020 [[Paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550630.pdf)]
AutoMix Framework
* **VarMixup: Exploiting the Latent Space for Robust Training and Inference**
*Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian*
CVPRW'2021 [[Paper](https://arxiv.org/abs/2003.06566v1)]
VarMixup Framework
* **DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models**
*Khawar Islam, Muhammad Zaigham Zaheer, Arif Mahmood, Karthik Nandakumar*
CVPR'2024 [[Paper](https://arxiv.org/abs/2405.14881)]
[[Code](https://github.com/khawar-islam/diffuseMix)]
DiffuseMix Framework
### Label Mixup Policies in SL
#### **Optimizing Calibration**
* **Combining Ensembles and Data Augmentation can Harm your Calibration**
*Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran*
ICLR'2021 [[Paper](https://arxiv.org/abs/2010.09875)]
[[Code](https://github.com/google/edward2/tree/main/experimental/marginalization_mixup)]
CAMix Framework
* **RankMixup: Ranking-Based Mixup Training for Network Calibration**
*Jongyoun Noh, Hyekang Park, Junghyup Lee, Bumsub Ham*
ICCV'2023 [[Paper](https://arxiv.org/abs/2308.11990)]
[[Code](https://cvlab.yonsei.ac.kr/projects/RankMixup)]
RankMixup Framework
* **SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness**
*Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Doguk Kim, Jinwoo Shin*
NIPS'2021 [[Paper](https://arxiv.org/abs/2111.09277)]
[[Code](https://github.com/jh-jeong/smoothmix)]
SmoothMixup Framework
#### **Area-based**
* **TransMix: Attend to Mix for Vision Transformers**
*Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai*
CVPR'2022 [[Paper](https://arxiv.org/abs/2111.09833)]
[[Code](https://github.com/Beckschen/TransMix)]
TransMix Framework
* **Data Augmentation using Random Image Cropping and Patching for Deep CNNs**
*Ryo Takahashi, Takashi Matsubara, Kuniaki Uehara*
IEEE TCSVT'2020 [[Paper](https://arxiv.org/abs/1811.09030)]
RICAP
* **RecursiveMix: Mixed Learning with History**
*Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang*
NIPS'2022 [[Paper](https://arxiv.org/abs/2203.06844)]
[[Code](https://github.com/implus/RecursiveMix-pytorch)]
RecursiveMix Framework
#### **Loss Object**
* **Harnessing Hard Mixed Samples with Decoupled Regularizer**
*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*
NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
[[Code](https://github.com/Westlake-AI/openmixup)]
DecoupledMix Framework
* **MixupE: Understanding and Improving Mixup from Directional Derivative Perspective**
*Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi*
UAI'2023 [[Paper](https://arxiv.org/abs/2212.13381)]
[[Code](https://github.com/onehuster/mixupe)]
MixupE Framework
#### **Random Label Policies**
* **Mixup Without Hesitation**
*Hao Yu, Huanyu Wang, Jianxin Wu*
ICIG'2022 [[Paper](https://arxiv.org/abs/2101.04342)]
[[Code](https://github.com/yuhao318/mwh)]
* **RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness**
*Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania*
NIPS'2022 [[Paper](https://arxiv.org/abs/2206.14502)]
[[Code](https://github.com/FrancescoPinto/RegMixup)]
RegMixup Framework
#### **Optimizing Mixing Ratio**
* **MixUp as Locally Linear Out-Of-Manifold Regularization**
*Hongyu Guo, Yongyi Mao, Richong Zhang*
AAAI'2019 [[Paper](https://arxiv.org/abs/1809.02499)]
AdaMixup Framework
* **RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness**
*Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania*
NIPS'2022 [[Paper](https://arxiv.org/abs/2206.14502)]
[[Code](https://github.com/FrancescoPinto/RegMixup)]
RegMixup Framework
* **Metamixup: Learning adaptive interpolation policy of mixup with metalearning**
*Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, Heng Tao Shen*
IEEE TNNLS'2021 [[Paper](https://arxiv.org/abs/1908.10059)]
MetaMixup Framework
* **LUMix: Improving Mixup by Better Modelling Label Uncertainty**
*Shuyang Sun, Jie-Neng Chen, Ruifei He, Alan Yuille, Philip Torr, Song Bai*
ICASSP'2024 [[Paper](https://arxiv.org/abs/2211.15846)]
[[Code](https://github.com/kevin-ssy/LUMix)]
LUMix Framework
* **SUMix: Mixup with Semantic and Uncertain Information**
*Huafeng Qin, Xin Jin, Hongyu Zhu, Hongchao Liao, Mounîm A. El-Yacoubi, Xinbo Gao*
ECCV'2024 [[Paper](https://arxiv.org/abs/2407.07805)]
[[Code](https://github.com/JinXins/SUMix)]
SUMix Framework
#### **Generating Label**
* **GenLabel: Mixup Relabeling using Generative Models**
*Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee*
ICML'2022 [[Paper](https://arxiv.org/abs/2201.02354)]
GenLabel Framework
#### **Attention Score**
* **All Tokens Matter: Token Labeling for Training Better Vision Transformers**
*Zihang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng*
NIPS'2021 [[Paper](https://arxiv.org/abs/2104.10858)]
[[Code](https://github.com/zihangJiang/TokenLabeling)]
Token Labeling Framework
* **TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers**
*Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu*
ECCV'2022 [[Paper](https://arxiv.org/abs/2207.08409)]
[[Code](https://github.com/Sense-X/TokenMix)]
TokenMix Framework
* **TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers**
*Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim*
NIPS'2022 [[Paper](https://arxiv.org/abs/2210.07562)]
[[Code](https://github.com/mlvlab/TokenMixup)]
TokenMixup Framework
* **MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer**
*Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu*
ICLR'2023 [[Paper](https://openreview.net/forum?id=dRjWsd3gwsm)]
[[Code](https://github.com/fistyee/MixPro)]
MixPro Framework
* **Token-Label Alignment for Vision Transformers**
*Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu*
ICCV'2023 [[Paper](https://arxiv.org/abs/2210.06455)]
[[Code](https://github.com/Euphoria16/TL-Align)]
TL-Align Framework
#### **Saliency Token**
* **SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data**
*Shaoli Huang, Xinchao Wang, Dacheng Tao*
AAAI'2021 [[Paper](https://arxiv.org/abs/2012.04846)]
[[Code](https://github.com/Shaoli-Huang/SnapMix)]
SnapMix Framework
* **Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing**
*Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang*
AAAI'2022 [[Paper](https://arxiv.org/abs/2112.08796)]
Saliency Grafting Framework
## Self-Supervised Learning
### **Contrastive Learning**
* **MixCo: Mix-up Contrastive Learning for Visual Representation**
*Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun*
NIPSW'2020 [[Paper](https://arxiv.org/abs/2010.06300)]
[[Code](https://github.com/Lee-Gihun/MixCo-Mixup-Contrast)]
MixCo Framework
* **Hard Negative Mixing for Contrastive Learning**
*Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus*
NIPS'2020 [[Paper](https://arxiv.org/abs/2010.01028)]
[[Code](https://europe.naverlabs.com/mochi)]
MoCHi Framework
* **i-Mix A Domain-Agnostic Strategy for Contrastive Representation Learning**
*Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee*
ICLR'2021 [[Paper](https://arxiv.org/abs/2010.08887)]
[[Code](https://github.com/kibok90/imix)]
i-Mix Framework
* **Beyond Single Instance Multi-view Unsupervised Representation Learning**
*Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei*
BMVC'2022 [[Paper](https://arxiv.org/abs/2011.13356)]
BSIM Framework
* **Improving Contrastive Learning by Visualizing Feature Transformation**
*Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen*
ICCV'2021 [[Paper](https://arxiv.org/abs/2108.02982)]
[[Code](https://github.com/DTennant/CL-Visualizing-Feature-Transformation)]
FT Framework
* **Mix-up Self-Supervised Learning for Contrast-agnostic Applications**
*Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann*
ICME'2021 [[Paper](https://arxiv.org/abs/2204.00901)]
MixSSL Framework
* **Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning**
*Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng*
NIPS'2021 [[Paper](https://arxiv.org/abs/2102.06605)]
[[Code](https://github.com/Vanint/Core-tuning)]
Co-Tuning Framework
* **Center-wise Local Image Mixture For Contrastive Representation Learning**
*Hao Li, Xiaopeng Zhang, Hongkai Xiong*
BMVC'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
CLIM Framework
* **Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning**
*Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng*
OpenReview'2021 [[Paper](https://openreview.net/forum?id=DnG8f7gweH4)]
PCEA Framework
* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**
*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*
arXiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
[[Code](https://github.com/Westlake-AI/openmixup)]
SAMix Framework
* **MixSiam: A Mixture-based Approach to Self-supervised Representation Learning**
*Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du*
OpenReview'2021 [[Paper](https://arxiv.org/abs/2111.02679)]
MixSiam Framework
* **Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing**
*Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das*
NIPS'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
[[Code](https://cvir.github.io/projects/comix)]
CoMix Framework
* **Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation**
*Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing*
AAAI'2022 [[Paper](https://arxiv.org/abs/2003.05438)]
[[Code](https://github.com/szq0214/Un-Mix)]
Un-Mix Framework
* **m-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning**
*Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang*
KDD'2022 [[Paper](https://sherrylone.github.io/assets/KDD22_M-Mix.pdf)]
[[Code](https://github.com/Sherrylone/m-mix)]
m-Mix Framework
* **A Simple Data Mixing Prior for Improving Self-Supervised Learning**
*Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie*
CVPR'2022 [[Paper](https://arxiv.org/abs/2206.07692)]
[[Code](https://github.com/oliverrensu/sdmp)]
SDMP Framework
* **CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping**
*Junlin Han, Lars Petersson, Hongdong Li, Ian Reid*
arXiv'2022 [[Paper](https://arxiv.org/abs/2205.15955)]
[[Code](https://github.com/JunlinHan/CropMix)]
CropMix Framework
* **Mixing up contrastive learning: Self-supervised representation learning for time series**
*Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen*
PR Letter'2022 [[Paper](https://www.sciencedirect.com/science/article/pii/S0167865522000502)]
MCL Framework
* **Towards Domain-Agnostic Contrastive Learning**
*Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le*
ICML'2021 [[Paper](https://arxiv.org/abs/2011.04419)]
DACL Framework
* **ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning**
*Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li*
ICML'2022 [[Paper](https://arxiv.org/abs/2110.02027)]
[[Code](https://github.com/junxia97/ProGCL)]
ProGCL Framework
* **Evolving Image Compositions for Feature Representation Learning**
*Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez*
BMVC'2021 [[Paper](https://arxiv.org/abs/2106.09011)]
PatchMix Framework
* **On the Importance of Asymmetry for Siamese Representation Learning**
*Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen*
CVPR'2022 [[Paper](https://arxiv.org/abs/2204.00613)]
[[Code](https://github.com/facebookresearch/asym-siam)]
ScaleMix Framework
* **Geodesic Multi-Modal Mixup for Robust Fine-Tuning**
*Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song*
NIPS'2023 [[Paper](https://arxiv.org/abs/2203.03897)]
[[Code](https://github.com/changdaeoh/multimodal-mixup)]
m2-Mix Framework
### **Masked Image Modeling**
* **i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable**
*Kevin Zhang, Zhiqiang Shen*
arXiv'2022 [[Paper](https://arxiv.org/abs/2210.11470)]
[[Code](https://github.com/vision-learning-acceleration-lab/i-mae)]
i-MAE Framework
* **MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers**
*Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li*
CVPR'2023 [[Paper](https://arxiv.org/abs/2205.13137)]
[[Code](https://github.com/Sense-X/MixMIM)]
MixMAE Framework
* **Mixed Autoencoder for Self-supervised Visual Representation Learning**
*Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung*
CVPR'2023 [[Paper](https://arxiv.org/abs/2303.17152)]
MixedAE Framework
### Semi-Supervised Learning
* **MixMatch: A Holistic Approach to Semi-Supervised Learning**
*David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel*
NIPS'2019 [[Paper](https://arxiv.org/abs/1905.02249)]
[[Code](https://github.com/google-research/mixmatch)]
MixMatch Framework
* **ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring**
*David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel*
ICLR'2020 [[Paper](https://openreview.net/forum?id=HklkeR4KPB)]
[[Code](https://github.com/google-research/remixmatch)]
ReMixMatch Framework
* **DivideMix: Learning with Noisy Labels as Semi-supervised Learning**
*Junnan Li, Richard Socher, Steven C.H. Hoi*
ICLR'2020 [[Paper](https://arxiv.org/abs/2002.07394)]
[[Code](https://github.com/LiJunnan1992/DivideMix)]
DivideMix Framework
* **MixPUL: Consistency-based Augmentation for Positive and Unlabeled Learning**
*Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li*
arXiv'2020 [[Paper](https://arxiv.org/abs/2004.09388)]
MixPUL Framework
* **Milking CowMask for Semi-Supervised Image Classification**
*Geoff French, Avital Oliver, Tim Salimans*
NIPS'2020 [[Paper](https://arxiv.org/abs/2003.12022)]
[[Code](https://github.com/google-research/google-research/tree/master/milking_cowmask)]
CowMask Framework
* **Epsilon Consistent Mixup: Structural Regularization with an Adaptive Consistency-Interpolation Tradeoff**
*Vincent Pisztora, Yanglan Ou, Xiaolei Huang, Francesca Chiaromonte, Jia Li*
arXiv'2021 [[Paper](https://arxiv.org/abs/2104.09452)]
Epsilon Consistent Mixup (ϵmu) Framework
* **Who Is Your Right Mixup Partner in Positive and Unlabeled Learning**
*Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang*
ICLR'2021 [[Paper](https://openreview.net/forum?id=NH29920YEmj)]
P3Mix Framework
* **Interpolation Consistency Training for Semi-Supervised Learning**
*Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, David Lopez-Paz*
NN'2022 [[Paper](https://arxiv.org/abs/1903.03825)]
ICT Framework
* **Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation**
*Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong*
arXiv'2023 [[Paper](https://arxiv.org/abs/2308.16573)]
DCPA Framework
* **MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection**
*JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak*
CVPR'2022 [[Paper](https://arxiv.org/abs/2111.10958)]
[[Code](https://github.com/jongmokkim/mix-unmix)]
MUM Framework
* **Harnessing Hard Mixed Samples with Decoupled Regularizer**
*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*
NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
[[Code](https://github.com/Westlake-AI/openmixup)]
DFixMatch Framework
* **Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise**
*Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi*
arXiv'2023 [[Paper](https://arxiv.org/abs/2308.06861)]
[[Code](https://github.com/Fahim-F/ManifoldDivideMix)]
MixEMatch Framework
* **LaserMix for Semi-Supervised LiDAR Semantic Segmentation**
*Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu*
CVPR'2023 [[Paper](https://arxiv.org/abs/2207.00026)]
[[Code](https://github.com/ldkong1205/LaserMix)] [[project](https://ldkong.com/LaserMix)]
LaserMix Framework
* **PCLMix: Weakly Supervised Medical Image Segmentation via Pixel-Level Contrastive Learning and Dynamic Mix Augmentation**
*Yu Lei, Haolun Luo, Lituan Wang, Zhenwei Zhang, Lei Zhang*
arXiv'2024 [[Paper](https://arxiv.org/abs/2405.06288)]
[[Code](https://github.com/Torpedo2648/PCLMix)]
PCLMix Framework
## CV Downstream Tasks
### **Regression**
* **RegMix: Data Mixing Augmentation for Regression**
*Seong-Hyeon Hwang, Steven Euijong Whang*
arXiv'2021 [[Paper](https://arxiv.org/abs/2106.03374)]
MixRL Framework
* **C-Mixup: Improving Generalization in Regression**
*Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn*
NIPS'2022 [[Paper](https://arxiv.org/abs/2210.05775)]
[[Code](https://github.com/huaxiuyao/C-Mixup)]
C-Mixup Framework
* **ExtraMix: Extrapolatable Data Augmentation for Regression using Generative Models**
*Kisoo Kwon, Kuhwan Jeong, Sanghyun Park, Sangha Park, Hoshik Lee, Seung-Yeon Kwak, Sungmin Kim, Kyunghyun Cho*
OpenReview'2022 [[Paper](https://openreview.net/forum?id=NgEuFT-SIgI)]
ExtraMix Framework
* **Rank-N-Contrast: Learning Continuous Representations for Regression**
*Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, Dina Katabi*
NIPS'2023 [[Paper](https://arxiv.org/abs/2210.01189)]
[[Code](https://github.com/kaiwenzha/Rank-N-Contrast)]
* **Anchor Data Augmentation**
*Nora Schneider, Shirin Goshtasbpour, Fernando Perez-Cruz*
NIPS'2023 [[Paper](https://arxiv.org/abs/2311.06965)]
* **Mixup Your Own Pairs**
*Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou*
arXiv'2023 [[Paper](https://arxiv.org/abs/2309.16633)]
[[Code](https://github.com/yilei-wu/supremix)]
SupReMix Framework
* **Tailoring Mixup to Data using Kernel Warping functions**
*Quentin Bouniot, Pavlo Mozharovskyi, Florence d'Alché-Buc*
arXiv'2023 [[Paper](https://arxiv.org/abs/2311.01434)]
[[Code](https://github.com/ENSTA-U2IS/torch-uncertainty)]
Warped Mixup Framework
* **OmniMixup: Generalize Mixup with Mixing-Pair Sampling Distribution**
*Anonymous*
OpenReview'2023 [[Paper](https://openreview.net/forum?id=6Uc7Fgwrsm)]
* **Augment on Manifold: Mixup Regularization with UMAP**
*Yousef El-Laham, Elizabeth Fons, Dillon Daudert, Svitlana Vyetrenko*
ICASSP'2024 [[Paper](https://arxiv.org/abs/2210.01189)]
### **Long tail distribution**
* **Remix: Rebalanced Mixup**
*Hsin-Ping Chou, Shih-Chieh Chang, Jia-Yu Pan, Wei Wei, Da-Cheng Juan*
ECCVW'2020 [[Paper](https://arxiv.org/abs/2007.03943)]
Remix Framework
* **Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective**
*Zhengzhuo Xu, Zenghao Chai, Chun Yuan*
NIPS'2021 [[Paper](https://arxiv.org/abs/2111.03874)]
[[Code](https://github.com/XuZhengzhuo/Prior-LT)]
UniMix Framework
* **Label-Occurrence-Balanced Mixup for Long-tailed Recognition**
*Shaoyu Zhang, Chen Chen, Xiujuan Zhang, Silong Peng*
ICASSP'2022 [[Paper](https://arxiv.org/abs/2110.04964)]
OBMix Framework
* **DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation for Long-Tailed Visual Recognition**
*Jae Soon Baik, In Young Yoon, Jun Won Choi*
PR'2024 [[Paper](https://arxiv.org/abs/2110.04964)]
DBN-Mix Framework
### **Segmentation**
* **ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning**
*Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, Lennart Svensson*
WACV'2021 [[Paper](https://arxiv.org/abs/2007.07936)]
[[Code](https://github.com/WilhelmT/ClassMix)]
ClassMix Framework
* **ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic Segmentation**
*Matheus Barros Pereira, Jefersson Alex dos Santos*
SIBGRAPI'2021 [[Paper](https://arxiv.org/abs/2108.11535)]
ChessMix Framework
* **CycleMix: A Holistic Strategy for Medical Image Segmentation from Scribble Supervision**
*Ke Zhang, Xiahai Zhuang*
CVPR'2022 [[Paper](https://arxiv.org/abs/2203.01475)]
[[Code](https://github.com/BWGZK/CycleMix)]
CyclesMix Framework
* **InsMix: Towards Realistic Generative Data Augmentation for Nuclei Instance Segmentation**
*Yi Lin, Zeyu Wang, Kwang-Ting Cheng, Hao Chen*
MICCAI'2022 [[Paper](https://arxiv.org/abs/2206.15134)]
[[Code](https://github.com/hust-linyi/insmix)]
InsMix Framework
* **LaserMix for Semi-Supervised LiDAR Semantic Segmentation**
*Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu*
CVPR'2023 [[Paper](https://arxiv.org/abs/2207.00026)]
[[Code](https://github.com/ldkong1205/LaserMix)] [[project](https://ldkong.com/LaserMix)]
LaserMix Framework
* **Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation**
*Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong*
arXiv'2023 [[Paper](https://arxiv.org/abs/2308.16573)]
DCPA Framework
* **SA-MixNet: Structure-aware Mixup and Invariance Learning for Scribble-supervised Road Extraction in Remote Sensing Images**
*Jie Feng, Hao Huang, Junpeng Zhang, Weisheng Dong, Dingwen Zhang, Licheng Jiao*
arXiv'2024 [[Paper](https://arxiv.org/abs/2403.01381)]
[[Code](https://github.com/xdu-jjgs/SA-MixNet-for-Scribble-based-Road-Extraction)]
SA-MixNet Framework
* **Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation**
*Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao*
CVPR'2024 [[Paper](https://arxiv.org/abs/2404.08951)]
[[Code](https://github.com/MQinghe/MiDSS)]
MiDSS Framework
* **UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather**
*Haimei Zhao, Jing Zhang, Zhuo Chen, Shanshan Zhao, Dacheng Tao*
CVPR'2024 [[Paper](https://arxiv.org/abs/2404.05145)]
[[Code](https://github.com/sunnyHelen/UniMix)]
* **ModelMix: A Holistic Strategy for Medical Image Segmentation from Scribble Supervision**
*Ke Zhang, Vishal M. Patel*
MICCAI'2024 [[Paper](https://arxiv.org/abs/2406.13237)]
ModelMix Framework
### **Object Detection**
* **MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection**
*JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak*
CVPR'2022 [[Paper](https://arxiv.org/abs/2111.10958)]
[[Code](https://github.com/jongmokkim/mix-unmix)]
MUM Framework
* **Mixed Pseudo Labels for Semi-Supervised Object Detection**
*Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang*
arXiv'2023 [[Paper](https://arxiv.org/abs/2312.07006)]
[[Code](https://github.com/czm369/mixpl)]
MixPL Framework
* **MS-DETR: Efficient DETR Training with Mixed Supervision**
*Chuyang Zhao, Yifan Sun, Wenhao Wang, Qiang Chen, Errui Ding, Yi Yang, Jingdong Wang*
arXiv'2024 [[Paper](https://arxiv.org/abs/2401.03989)]
[[Code](https://github.com/Atten4Vis/MS-DETR)]
MS-DETR Framework
## Other Applications
## Training Paradigms
### **Federated Learning**
* **XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning**
*MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim*
ICML'2020 [[Paper](https://arxiv.org/abs/2111.05073)]
[[Code](https://github.com/ihooni/XOR-Mixup)]
* **FedMix: Approximation of Mixup Under Mean augmented Federated Learning**
*Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang*
ECCV'2022 [[Paper](https://arxiv.org/abs/2107.00233)]
[[Code](https://github.com/DevPranjal/fedmix)]
* **Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup**
*Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim*
IEEE Communications Letters'2020 [[Paper](https://ieeexplore.ieee.org/document/9121290)]
* **StatMix: Data augmentation method that relies on image statistics in federated learning**
*Dominik Lewy, Jacek Mańdziuk, Maria Ganzha, Marcin Paprzycki*
ICONIP'2022 [[Paper](https://link.springer.com/chapter/10.1007/978-981-99-1639-9_48)]
### **Adversarial Attack and Adversarial Training**
* **Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training**
*Alfred Laugros, Alice Caplier, Matthieu Ospici*
ECCV'2020 [[Paper](https://arxiv.org/abs/2008.08384)]
* **Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks**
*Tianyu Pang, Kun Xu, Jun Zhu*
ICLR'2020 [[Paper](https://arxiv.org/abs/1909.11515)]
[[Code](https://github.com/P2333/Mixup-Inference)]
* **Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization**
*Saehyung Lee, Hyungyu Lee, Sungroh Yoon*
CVPR'2020 [[Paper](https://arxiv.org/abs/2003.02484)]
[[Code](https://github.com/Saehyung-Lee/cifar10_challenge)]
* **Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup**
*Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, Xuan Li*
EMNLP'2021 [[Paper](https://arxiv.org/abs/2109.07177)]
* **Adversarially Optimized Mixup for Robust Classification**
*Jason Bunk, Srinjoy Chattopadhyay, B. S. Manjunath, Shivkumar Chandrasekaran*
arXiv'2021 [[Paper](https://arxiv.org/abs/2103.11589)]
* **Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning**
*Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang*
ACL'2021 [[Paper](https://aclanthology.org/2021.findings-acl.137/)]
* **Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy**
*Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Juho Kannala, Yoshua Bengio*
NN'2021 [[Paper](https://arxiv.org/abs/1906.06784)]
* **Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction**
*Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu*
ICCV'2023 [[Paper](https://ieeexplore.ieee.org/document/10376952)]
* **Mixup as directional adversarial training**
*Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang*
NIPS'2019 [[Paper](https://arxiv.org/abs/1906.06875)]
[[Code](https://github.com/mixupAsDirectionalAdversarial/mixup_as_dat)]
* **On the benefits of defining vicinal distributions in latent space**
*Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian*
CVPRW'2021 [[Paper](https://arxiv.org/abs/2003.06566)]
### **Domain Adaptation**
* **Mix-up Self-Supervised Learning for Contrast-agnostic Applications**
*Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann*
ICDE'2022 [[Paper](https://arxiv.org/abs/2204.00901)]
* **Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing**
*Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das*
NIPS'2021 [[Paper](https://arxiv.org/abs/2110.15128)]
[[Code](https://github.com/CVIR/CoMix)]
* **Virtual Mixup Training for Unsupervised Domain Adaptation**
*Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, Qing Li*
arXiv'2019 [[Paper](https://arxiv.org/abs/1905.04215)]
[[Code](https://github.com/xudonmao/VMT)]
* **Improve Unsupervised Domain Adaptation with Mixup Training**
*Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren*
arXiv'2020 [[Paper](https://arxiv.org/abs/2001.00677)]
* **Adversarial Domain Adaptation with Domain Mixup**
*Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, Wenjun Zhang*
AAAI'2020 [[Paper](https://arxiv.org/abs/1912.01805)]
[[Code](https://github.com/ChrisAllenMing/Mixup_for_UDA)]
* **Dual Mixup Regularized Learning for Adversarial Domain Adaptation**
*Yuan Wu, Diana Inkpen, Ahmed El-Roby*
ECCV'2020 [[Paper](https://arxiv.org/abs/2007.03141)]
[[Code](https://github.com/YuanWu3/Dual-Mixup-Regularized-Learning-for-Adversarial-Domain-Adaptation)]
* **Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation**
*Aadarsh Sahoo, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das*
WACV'2023 [[Paper](https://arxiv.org/abs/2012.03358)]
[[Code](https://github.com/CVIR/SLM)]
* **Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation**
*Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan*
MICCAI'2023 [[Paper](https://arxiv.org/abs/2309.01207)]
[[Code](https://github.com/RPIDIAL/SAMix)]
### **Knowledge Distillation**
* **MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps**
*Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li*
NIPS'2021 [[Paper](https://arxiv.org/abs/2111.05073)]
* **MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition**
*Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang*
ECCV'2022 [[Paper](https://arxiv.org/abs/2208.05768)]
* **Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study**
*Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang*
WACV'2023 [[Paper](https://arxiv.org/abs/2211.03946)]
### **Multi-Modal**
* **MixGen: A New Multi-Modal Data Augmentation**
*Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Bo Li, Mu Li*
arXiv'2023 [[Paper](https://arxiv.org/abs/2206.08358)]
[[Code](https://github.com/amazon-research/mix-generation)]
* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**
*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*
ICML'2022 [[Paper](https://arxiv.org/abs/2206.08919)]
VLMixer Framework
* **Geodesic Multi-Modal Mixup for Robust Fine-Tuning**
*Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song*
NIPS'2023 [[Paper](https://arxiv.org/abs/2203.03897)]
[[Code](https://github.com/changdaeoh/multimodal-mixup)]
* **PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis**
*Efthymios Georgiou, Yannis Avrithis, Alexandros Potamianos*
arXiv'2023 [[Paper](https://arxiv.org/abs/2312.12334)]
PowMix Framework
* **Enhance image classification via inter-class image mixup with diffusion model**
*Efthymios Georgiou, Yannis Avrithis, Alexandros Potamianos*
CVPR'2024 [[Paper](https://arxiv.org/abs/2403.19600)]
[[Code](https://github.com/Zhicaiwww/Diff-Mix)]
* **Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation**
*Keji He, Chenyang Si, Zhihe Lu, Yan Huang, Liang Wang, Xinchao Wang*
NIPS'2023 [[Paper](https://proceedings.neurips.cc/paper_files/paper/2023/file/0d9e08f247ca7fbbfd5e50b7ff9cf357-Paper-Conference.pdf)]
[[Code](https://github.com/hekj/FDA)]
## Beyond Vision
### **NLP**
* **Augmenting Data with Mixup for Sentence Classification: An Empirical Study**
*Hongyu Guo, Yongyi Mao, Richong Zhang*
arXiv'2019 [[Paper](https://arxiv.org/abs/1905.08941)]
[[Code](https://github.com/dsfsi/textaugment)]
* **Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup**
*Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, Xuan Li*
EMNLP'2021 [[Paper](https://arxiv.org/abs/2109.07177)]
* **SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup**
*Hongyu Guo, Yongyi Mao, Richong Zhang*
EMNLP'2020 [[Paper](https://aclanthology.org/2020.emnlp-main.691/)]
[[Code](https://github.com/rz-zhang/SeqMix)]
* **Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks**
*Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S. Yu, Lifang He*
COLING'2020 [[Paper](https://arxiv.org/abs/2010.02394)]
* **Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data**
*Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang*
EMNLP'2020 [[Paper](https://arxiv.org/abs/2010.11506)]
[[Code](https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning)]
* **Augmenting NLP Models using Latent Feature Interpolations**
*Amit Jindal, Arijit Ghosh Chowdhury, Aniket Didolkar, Di Jin, Ramit Sawhney, Rajiv Ratn Shah*
COLING'2020 [[Paper](https://aclanthology.org/2020.coling-main.611/)]
* **MixText: Linguistically-informed Interpolation of Hidden Space for Semi-Supervised Text Classification**
*Jiaao Chen, Zichao Yang, Diyi Yang*
ACL'2020 [[Paper](https://arxiv.org/abs/2004.12239)]
[[Code](https://github.com/GT-SALT/MixText)]
* **Sequence-Level Mixed Sample Data Augmentation**
*Jiaao Chen, Zichao Yang, Diyi Yang*
EMNLP'2020 [[Paper](https://aclanthology.org/2020.emnlp-main.447/)]
[[Code](https://github.com/dguo98/seqmix)]
* **AdvAug: Robust Adversarial Augmentation for Neural Machine Translation**
*Yong Cheng, Lu Jiang, Wolfgang Macherey, Jacob Eisenstein*
ACL'2020 [[Paper](https://aclanthology.org/2020.acl-main.529.pdf)]
[[Code](https://github.com/dguo98/seqmix)]
* **Local Additivity Based Data Augmentation for Semi-supervised NER**
*Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang*
EMNLP'2020 [[Paper](https://aclanthology.org/2020.emnlp-main.95/)]
[[Code](https://github.com/SALT-NLP/LADA)]
* **Mixup Decoding for Diverse Machine Translation**
*Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang*
EMNLP'2021 [[Paper](https://arxiv.org/abs/2109.03402)]
* **TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding**
*Le Zhang, Zichao Yang, Diyi Yang*
NAACL'2022 [[Paper](https://arxiv.org/abs/2205.06153)]
[[Code](https://github.com/magiccircuit/treemix)]
* **STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation**
*Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang*
ACL'2022 [[Paper](https://arxiv.org/abs/2010.02394)]
[[Code](https://github.com/ictnlp/STEMM)]
* **AdMix: A Mixed Sample Data Augmentation Method for Neural Machine Translation**
*Chang Jin, Shigui Qiu, Nini Xiao, Hao Jia*
IJCAI'2022 [[Paper](https://www.ijcai.org/proceedings/2022/0579.pdf)]
* **Enhancing Cross-lingual Transfer by Manifold Mixup**
*Huiyun Yang, Huadong Chen, Hao Zhou, Lei Li*
ICLR'2022 [[Paper](https://arxiv.org/abs/2205.04182)]
[[Code](https://github.com/yhy1117/x-mixup)]
* **Multilingual Mix: Example Interpolation Improves Multilingual Neural Machine Translation**
*Yong Cheng, Ankur Bapna, Orhan Firat, Yuan Cao, Pidong Wang, Wolfgang Macherey*
ACL'2022 [[Paper](https://aclanthology.org/2022.acl-long.282/)]
### **GNN**
* **Node Augmentation Methods for Graph Neural Network based Object Classification**
*Yifan Xue, Yixuan Liao, Xiaoxin Chen, Jingwei Zhao*
CDS'2021 [[Paper](https://ieeexplore.ieee.org/document/9463199/authors#authors)]
* **Mixup for Node and Graph Classification**
*Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi*
WWW'2021 [[Paper](https://dl.acm.org/doi/10.1145/3442381.3449796)]
[[Code](https://github.com/vanoracai/MixupForGraph)]
* **Graph Mixed Random Network Based on PageRank**
*Qianli Ma, Zheng Fan, Chenzhi Wang, Hongye Tan*
Symmetry'2022 [[Paper](https://www.mdpi.com/2073-8994/14/8/1678)]
* **GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks**
*Tianxiang Zhao, Xiang Zhang, Suhang Wang*
WSDM'2021 [[Paper](https://arxiv.org/abs/2103.08826)]
* **GraphMix: Improved Training of GNNs for Semi-Supervised Learning**
*Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang*
AAAI'2021 [[Paper](https://arxiv.org/abs/1909.11715)]
[[Code](https://github.com/vikasverma1077/GraphMix)]
* **GraphMixup: Improving Class-Imbalanced Node Classification on Graphs by Self-supervised Context Prediction**
*Lirong Wu, Haitao Lin, Zhangyang Gao, Cheng Tan, Stan Z. Li*
ECML-PKDD'2022 [[Paper](https://link.springer.com/chapter/10.1007/978-3-031-19818-2_34)]
[[Code](https://github.com/LirongWu/GraphMixup)]
* **Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation**
*Joonhyung Park, Hajin Shim, Eunho Yang*
AAAI'2022 [[Paper](https://arxiv.org/abs/2111.05639)]
[[Code](https://github.com/shimazing/Graph-Transplant)]
* **G-Mixup: Graph Data Augmentation for Graph Classification**
*Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu*
ICML'2022 [[Paper](https://arxiv.org/abs/2202.07179)]
* **Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications**
*Xinyu Ma, Xu Chu, Yasha Wang, Yang Lin, Junfeng Zhao, Liantao Ma, Wenwu Zhu*
NIPS'2023 [[Paper](https://arxiv.org/abs/2306.15963)]
[[Code](https://github.com/ArthurLeoM/FGWMixup)]
* **iGraphMix: Input Graph Mixup Method for Node Classification**
*Jongwon Jeong, Hoyeop Lee, Hyui Geon Yoon, Beomyoung Lee, Junhee Heo, Geonsoo Kim, Kim Jin Seon*
ICLR'2024 [[Paper](https://openreview.net/forum?id=a2ljjXeDcE)]
### **3D Point**
* **PointMixup: Augmentation for Point Clouds**
*Yunlu Chen, Vincent Tao Hu, Efstratios Gavves, Thomas Mensink, Pascal Mettes, Pengwan Yang, Cees G.M. Snoek*
ECCV'2020 [[Paper](https://arxiv.org/abs/2008.06374)]
[[Code](https://github.com/yunlu-chen/PointMixup/)]
* **PointCutMix: Regularization Strategy for Point Cloud Classification**
*Jinlai Zhang, Lyujie Chen, Bo Ouyang, Binbin Liu, Jihong Zhu, Yujing Chen, Yanmei Meng, Danfeng Wu*
Neurocomputing'2022 [[Paper](https://arxiv.org/abs/2101.01461)]
[[Code](https://github.com/cuge1995/PointCutMix)]
* **Regularization Strategy for Point Cloud via Rigidly Mixed Sample**
*Dogyoon Lee, Jaeha Lee, Junhyeop Lee, Hyeongmin Lee, Minhyeok Lee, Sungmin Woo, Sangyoun Lee*
CVPR'2021 [[Paper](https://arxiv.org/abs/2102.01929)]
[[Code](https://github.com/dogyoonlee/RSMix)]
* **Part-Aware Data Augmentation for 3D Object Detection in Point Cloud**
*Jaeseok Choi, Yeji Song, Nojun Kwak*
IROS'2021 [[Paper](https://arxiv.org/abs/2007.13373)]
[[Code](https://github.com/sky77764/pa-aug.pytorch)]
* **Point MixSwap: Attentional Point Cloud Mixing via Swapping Matched Structural Divisions**
*Ardian Umam, Cheng-Kun Yang, Yung-Yu Chuang, Jen-Hui Chuang, Yen-Yu Lin*
ECCV'2022 [[Paper](https://link.springer.com/chapter/10.1007/978-3-031-19818-2_34)]
[[Code](https://github.com/ardianumam/PointMixSwap)]
### **Other**
* **Embedding Expansion: Augmentation in Embedding Space for Deep Metric Learning**
*Byungsoo Ko, Geonmo Gu*
CVPR'2020 [[Paper](https://arxiv.org/abs/2003.02546)]
[[Code](https://github.com/clovaai/embedding-expansion)]
* **SalfMix: A Novel Single Image-Based Data Augmentation Technique Using a Saliency Map**
*Jaehyeop Choi, Chaehyeon Lee, Donggyu Lee, Heechul Jung*
Sensors'2021 [[Paper](https://pdfs.semanticscholar.org/1db9/c80edeed50858783c69237aeba764750e8b7.pdf?_ga=2.182064935.1813772674.1674154381-1810295069.1625160008)]
* **Octave Mix: Data Augmentation Using Frequency Decomposition for Activity Recognition**
*Tatsuhito Hasegawa*
IEEE Access'2021 [[Paper](https://ieeexplore.ieee.org/document/9393911)]
* **Guided Interpolation for Adversarial Training**
*Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama*
arXiv'2021 [[Paper](https://arxiv.org/abs/2102.07327)]
* **Recall@k Surrogate Loss with Large Batches and Similarity Mixup**
*Yash Patel, Giorgos Tolias, Jiri Matas*
CVPR'2022 [[Paper](https://arxiv.org/abs/2108.11179)]
[[Code](https://github.com/yash0307/RecallatK_surrogate)]
* **Contrastive-mixup Learning for Improved Speaker Verification**
*Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke*
ICASSP'2022 [[Paper](https://arxiv.org/abs/2202.10672)]
* **Noisy Feature Mixup**
*Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney*
ICLR'2022 [[Paper](https://arxiv.org/abs/2110.02180)]
[[Code](https://github.com/erichson/NFM)]
* **It Takes Two to Tango: Mixup for Deep Metric Learning**
*Shashanka Venkataramanan, Bill Psomas, Ewa Kijak, Laurent Amsaleg, Konstantinos Karantzalos, Yannis Avrithis*
ICLR'2022 [[Paper](https://arxiv.org/abs/2106.04990)]
[[Code](https://github.com/billpsomas/metrix)]
* **Representational Continuity for Unsupervised Continual Learning**
*Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang*
ICLR'2022 [[Paper](https://arxiv.org/abs/2110.06976)]
[[Code](https://github.com/divyam3897/UCL)]
* **Expeditious Saliency-guided Mix-up through Random Gradient Thresholding**
*Remy Sun, Clement Masson, Gilles Henaff, Nicolas Thome, Matthieu Cord*
ICPR'2022 [[Paper](https://arxiv.org/abs/2205.10158)]
* **Guarding Barlow Twins Against Overfitting with Mixed Samples**
*Wele Gedara Chaminda Bandara, Celso M. De Melo, Vishal M. Patel*
arXiv'2023 [[Paper](https://arxiv.org/abs/2312.02151)]
[[Code](https://github.com/wgcban/mix-bt)]
* **Infinite Class Mixup**
*Thomas Mensink, Pascal Mettes*
arXiv'2023 [[Paper](https://arxiv.org/abs/2305.10293)]
* **Semantic Equivariant Mixup**
*Zongbo Han, Tianchi Xie, Bingzhe Wu, Qinghua Hu, Changqing Zhang*
arXiv'2023 [[Paper](https://arxiv.org/abs/2308.06451)]
* **G-Mix: A Generalized Mixup Learning Framework Towards Flat Minima**
*Xingyu Li, Bo Tang*
arXiv'2023 [[Paper](https://arxiv.org/abs/2308.03236)]
* **Inter-Instance Similarity Modeling for Contrastive Learning**
*Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang*
arXiv'2023 [[Paper](https://arxiv.org/abs/2306.12243)]
[[Code](https://github.com/visresearch/patchmix)]
* **Single-channel speech enhancement using learnable loss mixup**
*Oscar Chang, Dung N. Tran, Kazuhito Koishida*
arXiv'2023 [[Paper](https://arxiv.org/abs/2312.17255)]
* **Selective Volume Mixup for Video Action Recognition**
*Yi Tan, Zhaofan Qiu, Yanbin Hao, Ting Yao, Xiangnan He, Tao Mei*
arXiv'2023 [[Paper](https://arxiv.org/abs/2309.09534)]
* **Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy**
*Jaejun Yoo, Namhyuk Ahn, Kyung-Ah Sohn*
CVPR'2020 & IJCV'2024 [[Paper](https://arxiv.org/abs/2110.06976)]
[[Code](https://github.com/clovaai/cutblur)]
* **DNABERT-S: Learning Species-Aware DNA Embedding with Genome Foundation Models**
*Zhihan Zhou, Weimin Wu, Harrison Ho, Jiayi Wang, Lizhen Shi, Ramana V Davuluri, Zhong Wang, Han Liu*
arXiv'2024 [[Paper](https://ieeexplore.ieee.org/document/9156551)]
[[Code](https://github.com/MAGICS-LAB/DNABERT_S)]
* **ContextMix: A context-aware data augmentation method for industrial visual inspection systems**
*Hyungmin Kim, Donghun Kim, Pyunghwan Ahn, Sungho Suh, Hansang Cho, Junmo Kim*
EAAI'2024 [[Paper](https://arxiv.org/abs/2401.10050)]
* **Robust Image Denoising through Adversarial Frequency Mixup**
*Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han*
CVPR'2024 [[Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ryou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.pdf)]
[[Code](https://github.com/dhryougit/AFM)]

## Analysis and Theorem
* **Understanding Mixup Training Methods**
*Daojun Liang, Feng Yang, Tian Zhang, Peter Yang*
NIPS'2019 [[Paper](https://ieeexplore.ieee.org/document/8478159/authors#authors)]
* **MixUp as Locally Linear Out-Of-Manifold Regularization**
*Hongyu Guo, Yongyi Mao, Richong Zhang*
AAAI'2019 [[Paper](https://arxiv.org/abs/1809.02499)]
* **MixUp as Directional Adversarial Training**
*Chanwoo Park, Sangdoo Yun, Sanghyuk Chun*
NIPS'2019 [[Paper](https://arxiv.org/abs/1906.06875)]
* **On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks**
*Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak*
NIPS'2019 [[Paper](https://arxiv.org/abs/1905.11001)]
[[Code](https://github.com/paganpasta/onmixup)]
* **On Mixup Regularization**
*Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert*
arXiv'2020 [[Paper](https://arxiv.org/abs/2006.06049)]
* **Mixup Training as the Complexity Reduction**
*Masanari Kimura*
arXiv'2021 [[Paper](https://arxiv.org/abs/1906.06875)]
* **How Does Mixup Help With Robustness and Generalization**
*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou*
ICLR'2021 [[Paper](https://arxiv.org/abs/2010.04819)]
* **Mixup Without Hesitation**
*Hao Yu, Huanyu Wang, Jianxin Wu*
ICIG'2022 [[Paper](https://arxiv.org/abs/2101.04342)]
[[Code](https://github.com/yuhao318/mwh)]
* **RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness**
*Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania*
NIPS'2022 [[Paper](https://arxiv.org/abs/2206.14502)]
[[Code](https://github.com/FrancescoPinto/RegMixup)]
* **A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective**
*Chanwoo Park, Sangdoo Yun, Sanghyuk Chun*
NIPS'2022 [[Paper](https://arxiv.org/abs/2208.09913)]
[[Code](https://github.com/naver-ai/hmix-gmix)]
* **Towards Understanding the Data Dependency of Mixup-style Training**
*Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge*
ICLR'2022 [[Paper](https://openreview.net/pdf?id=ieNJYujcGDO)]
[[Code](https://github.com/2014mchidamb/Mixup-Data-Dependency)]
* **When and How Mixup Improves Calibration**
*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou*
ICML'2022 [[Paper](https://arxiv.org/abs/2102.06289)]
* **Provable Benefit of Mixup for Finding Optimal Decision Boundaries**
*Junsoo Oh, Chulhee Yun*
ICML'2023 [[Paper](https://chulheeyun.github.io/publication/oh2023provable/)]
* **On the Pitfall of Mixup for Uncertainty Calibration**
*Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang*
CVPR'2023 [[Paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023_paper.pdf)]
* **Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study**
*Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga*
WACV'2023 [[Paper](https://arxiv.org/abs/2211.03946)]
[[Code](https://github.com/hchoi71/mix-kd)]
* **Over-Training with Mixup May Hurt Generalization**
*Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao*
ICLR'2023 [[Paper](https://openreview.net/forum?id=JmkjrlVE-DG)]
* **Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability**
*Soyoun Won, Sung-Ho Bae, Seong Tae Kim*
arXiv'2023 [[Paper](https://arxiv.org/abs/2303.14608)]
* **Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup**
*Damien Teney, Jindong Wang, Ehsan Abbasnejad*
ICML'2024 [[Paper](https://arxiv.org/abs/2305.16817)]
* **Pushing Boundaries: Mixup's Influence on Neural Collapse**
*Quinn Fisher, Haoming Meng, Vardan Papyan*
ICLR'2024 [[Paper](https://arxiv.org/abs/2402.06171)]

## Survey
* **A survey on Image Data Augmentation for Deep Learning**
*Connor Shorten and Taghi Khoshgoftaar*
Journal of Big Data'2019 [[Paper](https://www.researchgate.net/publication/334279066_A_survey_on_Image_Data_Augmentation_for_Deep_Learning)]
* **An overview of mixing augmentation methods and augmentation strategies**
*Dominik Lewy and Jacek Mańdziuk*
Artificial Intelligence Review'2022 [[Paper](https://link.springer.com/article/10.1007/s10462-022-10227-z)]
* **Image Data Augmentation for Deep Learning: A Survey**
*Suorong Yang, Weikang Xiao, Mengcheng Zhang, Suhan Guo, Jian Zhao, Furao Shen*
arXiv'2022 [[Paper](https://arxiv.org/abs/2204.08610)]
* **A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability**
*Chengtai Cao, Fan Zhou, Yurou Dai, Jianping Wang*
arXiv'2022 [[Paper](https://arxiv.org/abs/2212.10888)]
[[Code](https://github.com/ChengtaiCao/Awesome-Mix)]
* **A Survey of Automated Data Augmentation for Image Classification: Learning to Compose, Mix, and Generate**
*Tsz-Him Cheung, Dit-Yan Yeung*
IEEE TNNLS'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10158722)]
* **Survey: Image Mixing and Deleting for Data Augmentation**
*Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian*
EAAI'2024 [[Paper](https://arxiv.org/abs/2106.07085)]
* **A Survey on Mixup Augmentations and Beyond**
*Xin Jin, Hongyu Zhu, Siyuan Li, Zedong Wang, Zecheng Liu, Chang Yu, Huafeng Qin, Stan. Z. Li*
arXiv'2024 [[Paper](https://arxiv.org/abs/2409.05202)]

## Benchmark
* **OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification**
*Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li*
arXiv'2024 [[Paper](https://arxiv.org/abs/2209.04851)]
[[Code](https://github.com/Westlake-AI/openmixup)]

## Classification Results on Datasets

**Classification results of mixup methods on general datasets: CIFAR10 / CIFAR100, Tiny-ImageNet, and ImageNet-1K. $(\cdot)$ denotes the number of training epochs; the backbones are ResNet18 (R18), ResNet50 (R50), ResNeXt50 (RX50), PreActResNet18 (PreActR18), and Wide-ResNet28 (WRN28-10, WRN28-8).**
| Method | Publish | CIFAR10 | CIFAR100 | CIFAR100 | CIFAR100 | CIFAR100 | CIFAR100 | Tiny-ImageNet| Tiny-ImageNet | ImageNet-1K | ImageNet-1K |
|:--------:|:--------:|:-------:|:---------:|:--------:|:----------:|:----------:|:--------:|:------------:|:-------------:|:-----------:|:-----------:|
| | | R18 | R18 | RX50 | PreActR18 | WRN28-10 | WRN28-8 | R18 | RX50 | R18 | R50 |
| MixUp | ICLR'2018| 96.62(800) | 79.12(800) | 82.10(800) | 78.90(200) | 82.50(200) | 82.82(400) | 63.86(400) | 66.36(400) | 69.98(100) | 77.12(100) |
| CutMix | ICCV'2019| 96.68(800) | 78.17(800) | 78.32(800) | 76.80(1200)| 83.40(200) | 84.45(400) | 65.53(400) | 66.47(400) | 68.95(100) | 77.17(100) |
| Manifold Mixup | ICML'2019 | 96.71(800) | 80.35(800) | 82.88(800) | 79.66(1200) | 81.96(1200) | 83.24(400) | 64.15(400) | 67.30(400) | 69.98(100) | 77.01(100) |
| FMix | arXiv'2020 | 96.18(800) | 79.69(800) | 79.02(800) | 79.85(200) | 82.03(200) | 84.21(400) | 63.47(400) | 65.08(400) | 69.96(100) | 77.19(100) |
| SmoothMix | CVPRW'2020 | 96.17(800) | 78.69(800) | 78.95(800) | - | - | 82.09(400) | - | - | - | 77.66(300) |
| GridMix | PR'2020 | 96.56(800) | 78.72(800) | 78.90(800) | - | - | 84.24(400) | 64.79(400) | - | - | - |
| ResizeMix | arXiv'2020 | 96.76(800) | 80.01(800) | 80.35(800) | - | 85.23(200) | 84.87(400) | 63.47(400) | 65.87(400) | 69.50(100) | 77.42(100) |
| SaliencyMix | ICLR'2021 | 96.20(800) | 79.12(800) | 78.77(800) | 80.31(300) | 83.44(200) | 84.35(400) | 64.60(400) | 66.55(400) | 69.16(100) | 77.14(100) |
| Attentive-CutMix | ICASSP'2020 | 96.63(800) | 78.91(800) | 80.54(800) | - | - | 84.34(400) | 64.01(400) | 66.84(400) | - | 77.46(100) |
| Saliency Grafting | AAAI'2022 | - | 80.83(800) | 83.10(800) | - | 84.68(300) | - | 64.84(600) | 67.83(400) | - | 77.65(100) |
| PuzzleMix | ICML'2020 | 97.10(800) | 81.13(800) | 82.85(800) | 80.38(1200) | 84.05(200) | 85.02(400) | 65.81(400) | 67.83(400) | 70.12(100) | 77.54(100) |
| Co-Mix | ICLR'2021 | 97.15(800) | 81.17(800) | 82.91(800) | 80.13(300) | - | 85.05(400) | 65.92(400) | 68.02(400) | - | 77.61(100) |
| SuperMix | CVPR'2021 | - | - | - | 79.07(2000) | 93.60(600) | - | - | - | - | 77.60(600) |
| RecursiveMix | NIPS'2022 | - | 81.36(200) | - | 80.58(2000) | - | - | - | - | - | 79.20(300) |
| AutoMix | ECCV'2022 | 97.34(800) | 82.04(800) | 83.64(800) | - | - | 85.18(400) | 67.33(400) | 70.72(400) | 70.50(100) | 77.91(100) |
| SAMix | arXiv'2021 | 97.50(800) | 82.30(800) | 84.42(800) | - | - | 85.50(400) | 68.89(400) | 72.18(400) | 70.83(100) | 78.06(100) |
| AlignMixup | CVPR'2022 | - | - | - | 81.71(2000) | - | - | - | - | - | 78.00(100) |
| MultiMix | NIPS'2023 | - | - | - | 81.82(2000) | - | - | - | - | - | 78.81(300) |
| GuidedMixup | AAAI'2023 | - | - | - | 81.20(300) | 84.02(200) | - | - | - | - | 77.53(100) |
| Catch-up Mix | AAAI'2023 | - | 82.10(400) | 83.56(400) | 82.24(2000) | - | - | 68.84(400) | - | - | 78.71(300) |
| LGCOAMix | TIP'2024 | - | 82.34(800) | 84.11(800) | - | - | - | 68.27(400) | 73.08(400) | - | - |
| AdAutoMix | ICLR'2024 | 97.55(800) | 82.32(800) | 84.42(800) | - | - | 85.32(400) | 69.19(400) | 72.89(400) | 70.86(100) | 78.04(100) |

**Classification results of mixup methods on ImageNet-1K with ViT-based and modern ConvNet backbones: DeiT, Swin Transformer (Swin), Pyramid Vision Transformer (PVT), and ConvNeXt, each trained for 300 epochs.**
| Method | Publish | ImageNet-1K | ImageNet-1K | ImageNet-1K | ImageNet-1K | ImageNet-1K | ImageNet-1K | ImageNet-1K |
|:----------:|:-------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
| | | DeiT-Tiny | DeiT-Small | DeiT-Base | Swin-Tiny | PVT-Tiny | PVT-Small | ConvNeXt-Tiny |
| MixUp | ICLR'2018 | 74.69 | 77.72 | 78.98 | 81.01 | 75.24 | 78.69 | 80.88 |
| CutMix | ICCV'2019 | 74.23 | 80.13 | 81.61 | 81.23 | 75.53 | 79.64 | 81.57 |
| FMix | arXiv'2020 | 74.41 | 77.37 | - | 79.60 | 75.28 | 78.72 | 81.04 |
| ResizeMix | arXiv'2020 | 74.79 | 78.61 | 80.89 | 81.36 | 76.05 | 79.55 | 81.64 |
| SaliencyMix | ICLR'2021 | 74.17 | 79.88 | 80.72 | 81.37 | 75.71 | 79.69 | 81.33 |
| Attentive-CutMix | ICASSP'2020 | 74.07 | 80.32 | 82.42 | 81.29 | 74.98 | 79.84 | 81.14 |
| PuzzleMix | ICML'2020 | 73.85 | 80.45 | 81.63 | 81.47 | 75.48 | 79.70 | 81.48 |
| AutoMix | ECCV'2022 | 75.52 | 80.78 | 82.18 | 81.80 | 76.38 | 80.64 | 82.28 |
| SAMix | arXiv'2021 | 75.83 | 80.94 | 82.85 | 81.87 | 76.60 | 80.78 | 82.35 |
| TransMix | CVPR'2022 | 74.56 | 80.68 | 82.51 | 81.80 | 75.50 | 80.50 | - |
| TokenMix | ECCV'2022 | 75.31 | 80.80 | 82.90 | 81.60 | 75.60 | - | 73.97 |
| TL-Align | ICCV'2023 | 73.20 | 80.60 | 82.30 | 81.40 | 75.50 | 80.40 | - |
| SMMix | ICCV'2023 | 75.56 | 81.10 | 82.90 | 81.80 | 75.60 | 81.03 | - |
| Mixpro | ICLR'2023 | 73.80 | 81.30 | 82.90 | 82.80 | 76.70 | 81.20 | - |
| LUMix | ICASSP'2024 | - | 80.60 | 80.20 | 81.70 | - | - | 82.50 |

## Related Datasets Link

**Summary of datasets used in mixup-related tasks. Links to the dataset websites are provided.**
| Dataset | Type | Label | Task | Total data number | Link |
|:-------:|:----:|:-----:|:----:|:-----------------:|:----:|
| MNIST | Image | 10 | Classification | 70,000 | [MNIST](https://yann.lecun.com/exdb/mnist/) |
| Fashion-MNIST | Image | 10 | Classification | 70,000 | [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) |
| CIFAR10 | Image | 10 | Classification | 60,000 | [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) |
| CIFAR100 | Image | 100 | Classification | 60,000 | [CIFAR100](https://www.cs.toronto.edu/~kriz/cifar.html) |
| SVHN | Image | 10 | Classification | 630,420 | [SVHN](http://ufldl.stanford.edu/housenumbers/) |
| GTSRB | Image | 43 | Classification | 51,839 | [GTSRB](https://benchmark.ini.rub.de/gtsrb_dataset.html) |
| STL10 | Image | 10 | Classification | 113,000 | [STL10](https://cs.stanford.edu/~acoates/stl10/) |
| Tiny-ImageNet | Image | 200 | Classification | 100,000 | [Tiny-ImageNet](http://cs231n.stanford.edu/tiny-imagenet-200.zip) |
| ImageNet-1K | Image | 1,000 | Classification | 1,431,167 | [ImageNet-1K](https://image-net.org/challenges/LSVRC/2012/)|
| CUB-200-2011 | Image | 200 | Classification, Object Detection | 11,788 | [CUB-200-2011](https://www.vision.caltech.edu/datasets/cub_200_2011/) |
| FGVC-Aircraft | Image | 102 | Classification | 10,200 | [FGVC-Aircraft](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/) |
| StanfordCars | Image | 196 | Classification | 16,185 | [StanfordCars](https://ai.stanford.edu/~jkrause/cars/car_dataset.html) |
| Oxford Flowers | Image | 102 | Classification | 8,189 | [Oxford Flowers](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/) |
| Caltech101 | Image | 101 | Classification | 9,000 | [Caltech101](https://data.caltech.edu/records/mzrjq-6wc02) |
| SOP | Image | 22,634 | Classification | 120,053 | [SOP](https://cvgl.stanford.edu/projects/lifted_struct/) |
| Food-101 | Image | 101 | Classification | 101,000 | [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/) |
| SUN397 | Image | 899 | Classification | 130,519 | [SUN397](https://vision.princeton.edu/projects/2010/SUN//) |
| iNaturalist | Image | 5,089 | Classification | 675,170 | [iNaturalist](https://github.com/visipedia/inat_comp/tree/master/2017) |
| CIFAR-C | Image | 10 / 100 | Corruption Classification | 60,000 | [CIFAR-C](https://github.com/hendrycks/robustness/) |
| CIFAR-LT | Image | 10 / 100 | Long-tail Classification | 60,000 | [CIFAR-LT](https://github.com/hendrycks/robustness/) |
| ImageNet-1K-C | Image | 1,000 | Corruption Classification | 1,431,167 | [ImageNet-1K-C](https://github.com/hendrycks/robustness/) |
| ImageNet-A | Image | 200 | Classification | 7,500 | [ImageNet-A](https://github.com/hendrycks/natural-adv-examples) |
| Pascal VOC 2012 | Image | 20 | Object Detection | 33,043 | [Pascal VOC 2012](http://host.robots.ox.ac.uk/pascal/VOC/) |
| MS-COCO Detection | Image | 91 | Object Detection | 164,062 | [MS-COCO Detection](https://cocodataset.org/detection-eval) |
| DSprites | Image | 737,280*6 | Disentanglement | 737,280 | [DSprites](https://github.com/google-deepmind/dsprites-dataset) |
| Place205 | Image | 205 | Recognition | 2,500,000 | [Place205](http://places.csail.mit.edu/downloadData.html) |
| Pascal Context | Image | 459 | Segmentation | 10,103 | [Pascal Context](http://places.csail.mit.edu/downloadData.html) |
| ADE20K | Image | 3,169 | Segmentation | 25,210 | [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/) |
| Cityscapes | Image | 19 | Segmentation | 5,000 | [Cityscapes](https://www.cityscapes-dataset.com/) |
| StreetHazards | Image | 12 | Segmentation | 7,656 | [StreetHazards](https://www.v7labs.com/open-datasets/streethazards-dataset) |
| PACS | Image | 7*4 | Domain Classification | 9,991 | [PACS](https://domaingeneralization.github.io/) |
| BRACS | Medical Image | 7 | Classification | 4,539 | [BRACS](https://www.bracs.icar.cnr.it/) |
| BACH | Medical Image | 4 | Classification | 400 | [BACH](https://iciar2018-challenge.grand-challenge.org/) |
| CAME-Lyon16 | Medical Image | 2 | Anomaly Detection | 360 | [CAME-Lyon16](https://camelyon16.grand-challenge.org/) |
| Chest X-Ray | Medical Image | 2 | Anomaly Detection | 5,856 | [Chest X-Ray](https://data.mendeley.com/datasets/rscbjbr9sj/2) |
| BCCD | Medical Image | 4,888 | Object Detection | 364 | [BCCD](https://github.com/Shenggan/BCCD_Dataset) |
| TJU600 | Palm-Vein Image | 600 | Classification | 12,000 | [TJU600](https://cslinzhang.github.io/ContactlessPalm/) |
| VERA220 | Palm-Vein Image | 220 | Classification | 2,200 | [VERA220](https://www.idiap.ch/en/scientific-research/data/vera-palmvein) |
| CoNLL2003 | Text | 4 | Classification | 2,302 | [CoNLL2003](https://data.deepai.org/conll2003.zip) |
| 20 Newsgroups | Text | 20 | OOD Detection | 20,000 | [20 Newsgroups](http://qwone.com/~jason/20Newsgroups/) |
| WOS | Text | 134 | OOD Detection | 46,985 | [WOS](http://archive.ics.uci.edu/index.php) |
| SST-2 | Text | 2 | Sentiment Understanding | 68,800 | [SST-2](https://github.com/YJiangcm/SST-2-sentiment-analysis) |
| Cora | Graph | 7 | Node Classification | 2,708 | [Cora](https://github.com/phanein/deepwalk) |
| Citeseer | Graph | 6 | Node Classification | 3,312 | [Citeseer](https://csxstatic.ist.psu.edu/) |
| PubMed | Graph | 3 | Node Classification | 19,717 | [PubMed](https://pubmed.ncbi.nlm.nih.gov) |
| BlogCatalog | Graph | 39 | Node Classification | 10,312 | [BlogCatalog](https://figshare.com/articles/dataset/BlogCatalog_dataset/11923611?file=22349970) |
| Google Commands | Speech | 30 | Classification | 65,000 | [Google Commands](https://research.google/blog/launching-the-speech-commands-dataset/) |
| VoxCeleb2 | Speech | 6,112 | Sound Classification | 1,000,000+ | [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) |
| VCTK | Speech | 110 | Enhancement | 44,000 | [VCTK](https://datashare.ed.ac.uk/handle/10283/2791) |
| ModelNet40 | 3D Point Cloud | 40 | Classification | 12,311 | [ModelNet40](https://modelnet.cs.princeton.edu/) |
| ScanObjectNN | 3D Point Cloud | 15 | Classification | 15,000 | [ScanObjectNN](https://hkust-vgd.github.io/scanobjectnn/) |
| ShapeNet | 3D Point Cloud | 16 | Recognition, Classification | 16,880 | [ShapeNet](https://shapenet.org/) |
| KITTI360 | 3D Point Cloud | 80,256 | Detection, Segmentation | 14,999 | [KITTI360](https://www.cvlibs.net/datasets/kitti/) |
| UCF101 | Video | 101 | Action Recognition | 13,320 | [UCF101](https://www.crcv.ucf.edu/research/data-sets/ucf101/) |
| Kinetics400 | Video | 400 | Action Recognition | 260,000 | [Kinetics400](https://deepmind.google/) |
| Airfoil | Tabular | - | Regression | 1,503 | [Airfoil](https://archive.ics.uci.edu/dataset/291/airfoil+self+noise) |
| NO2 | Tabular | - | Regression | 500 | [NO2](https://drive.google.com/drive/folders/1pTRT7fA-hq6p1F7ZX5oJ0tg_I1RRG6OW) |
| Exchange-Rate | Timeseries | - | Regression | 7,409 | [Exchange-Rate](https://github.com/laiguokun/multivariate-time-series-data) |
| Electricity | Timeseries | - | Regression | 26,113 | [Electricity](https://github.com/laiguokun/multivariate-time-series-data) |## Contribution
Feel free to send [pull requests](https://github.com/Westlake-AI/openmixup/pulls) to add more links with the following Markdown format. Note that the abbreviation, the code link, and the figure link are optional attributes.
```markdown
* **TITLE**
*AUTHORS*
PUBLISH'YEAR [[Paper](link)] [[Code](link)]
ABBREVIATION Framework
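<!-- Example entry (illustrative only; it uses the original MixUp paper, and the optional abbreviation/figure line is omitted): -->
* **MixUp: Beyond Empirical Risk Minimization**
*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*
ICLR'2018 [[Paper](https://arxiv.org/abs/1710.09412)] [[Code](https://github.com/facebookresearch/mixup-cifar10)]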
```

## Citation

If our work has been helpful to your research, please consider citing it. Thanks! 🥰

```bibtex
@article{jin2024survey,
title={A Survey on Mixup Augmentations and Beyond},
author={Jin, Xin and Zhu, Hongyu and Li, Siyuan and Wang, Zedong and Liu, Zicheng and Yu, Chang and Qin, Huafeng and Li, Stan Z},
journal={arXiv preprint arXiv:2409.05202},
year={2024}
}
```

Current contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Xin Jin ([@JinXins](https://github.com/JinXins)), Zicheng Liu ([@pone7](https://github.com/pone7)), and Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)). We thank all contributors to `Awesome-Mixup`!
## License
This project is released under the [Apache 2.0 license](LICENSE).
## Acknowledgement
This repository is built using the [OpenMixup](https://github.com/Westlake-AI/openmixup) library and [Awesome README](https://github.com/matiassingers/awesome-readme) repository.
## Related Project
- [OpenMixup](https://github.com/Westlake-AI/openmixup): CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
- [Awesome-Mix](https://github.com/ChengtaiCao/Awesome-Mix): An awesome list of papers for `A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability`, categorized according to the survey's proposed taxonomy.
- [survery-image-mixing-and-deleting-for-data-augmentation](https://github.com/humza909/survery-image-mixing-and-deleting-for-data-augmentation): An awesome list of papers for `Survey: Image Mixing and Deleting for Data Augmentation`.
- [awesome-mixup](https://github.com/demoleiwang/awesome-mixup): A collection of awesome papers about mixup.
- [awesome-mixed-sample-data-augmentation](https://github.com/JasonZhang156/awesome-mixed-sample-data-augmentation): A collection of awesome things about mixed sample data augmentation.
- [data-augmentation-review](https://github.com/AgaMiko/data-augmentation-review): List of useful data augmentation resources.
- [Awesome-Mixup](https://arxiv.org/abs/2409.05202): An awesome list of papers for `A Survey on Mixup Augmentations and Beyond`.