# Awesome-Mixup

[![Awesome](https://awesome.re/badge.svg)](https://awesome.re) ![GitHub stars](https://img.shields.io/github/stars/Westlake-AI/Awesome-Mixup?color=green) ![GitHub forks](https://img.shields.io/github/forks/Westlake-AI/Awesome-Mixup?color=yellow&label=Fork)

## Introduction

**We summarize awesome mixup data augmentation methods for visual representation learning in various scenarios.**

The list of awesome mixup augmentation methods is summarized in chronological order and is continuously updated. The main branch is maintained based on [Awesome-Mixup](https://github.com/Westlake-AI/openmixup/docs/en/awesome_mixups) in [OpenMixup](https://github.com/Westlake-AI/openmixup) and [Awesome-Mix](https://github.com/ChengtaiCao/Awesome-Mix), and we are working on a comprehensive survey of mixup augmentations. We first summarize fundamental mixup methods from two aspects: *sample mixup policy* and *label mixup policy*. Then, we summarize mixup techniques for self- and semi-supervised learning and various downstream tasks.
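
As a primer, vanilla mixup combines a *sample mixup policy* (how inputs are blended) with a *label mixup policy* (how targets are blended). Below is a minimal NumPy sketch of the original interpolation rule, for orientation only; the function and variable names are ours and not taken from any listed codebase.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2):
    """Vanilla mixup: convex combination of paired samples and their labels."""
    lam = np.random.beta(alpha, alpha)        # mixing ratio ~ Beta(alpha, alpha)
    perm = np.random.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]                  # sample mixup policy
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]    # label mixup policy
    return x_mix, y_mix

# toy batch: 4 flattened images, 3 classes
x = np.random.rand(4, 3 * 32 * 32).astype(np.float32)
y = np.eye(3, dtype=np.float32)[[0, 1, 2, 1]]
x_mix, y_mix = mixup_batch(x, y)
```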

* To find related papers and their relationships, check out [Connected Papers](https://www.connectedpapers.com/), which visualizes the academic field in a graph representation.
* To export the BibTeX citation of a paper, check out its page on [arXiv](https://arxiv.org/) or [Semantic Scholar](https://www.semanticscholar.org/) for professional reference formats.

## Table of Contents

- [Awesome-Mixup](#awesome-mixup)
- [Introduction](#introduction)
- [Table of Contents](#table-of-contents)
- [Fundamental Methods](#fundamental-methods)
- [Sample Mixup Methods](#sample-mixup-methods)
- [**Pre-defined Policies**](#pre-defined-policies)
- [**Adaptive Policies**](#adaptive-policies)
- [Label Mixup Methods](#label-mixup-methods)
- [Mixup for Self-supervised Learning](#mixup-for-self-supervised-learning)
- [Mixup for Semi-supervised Learning](#mixup-for-semi-supervised-learning)
- [Mixup for Regression](#mixup-for-regression)
- [Mixup for Robustness](#mixup-for-robustness)
- [Low-level Vision](#low-level-vision)
- [Mixup for Multi-modality](#mixup-for-multi-modality)
- [Analysis of Mixup](#analysis-of-mixup)
- [Natural Language Processing](#natural-language-processing)
- [Graph Representation Learning](#graph-representation-learning)
- [Survey](#survey)
- [Benchmark](#benchmark)
- [Contribution](#contribution)
- [License](#license)
- [Acknowledgement](#acknowledgement)
- [Related Project](#related-project)

## Fundamental Methods

### Sample Mixup Methods

#### **Pre-defined Policies**

* **mixup: Beyond Empirical Risk Minimization**

*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*

ICLR'2018 [[Paper](https://arxiv.org/abs/1710.09412)]
[[Code](https://github.com/facebookresearch/mixup-cifar10)]

MixUp Framework


* **Between-class Learning for Image Classification**

*Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada*

CVPR'2018 [[Paper](https://arxiv.org/abs/1711.10284)]
[[Code](https://github.com/mil-tokyo/bc_learning_image)]

BC Framework


* **MixUp as Locally Linear Out-Of-Manifold Regularization**

*Hongyu Guo, Yongyi Mao, Richong Zhang*

AAAI'2019 [[Paper](https://arxiv.org/abs/1809.02499)]

AdaMixup Framework


* **CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features**

*Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo*

ICCV'2019 [[Paper](https://arxiv.org/abs/1905.04899)]
[[Code](https://github.com/clovaai/CutMix-PyTorch)]

CutMix Framework
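
For orientation, here is a hedged NumPy sketch of the CutMix operation described above: cut a random box from a paired image, paste it, and reweight the labels by the actual pasted area. It is a simplified illustration, not the authors' released code.

```python
import numpy as np

def cutmix_batch(x, y_onehot, alpha=1.0):
    """CutMix sketch: paste a random box from a paired image, reweight labels by area.

    x is assumed to be a batch of images shaped (N, C, H, W)."""
    n, _, h, w = x.shape
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(n)

    # Sample a box whose area is roughly (1 - lam) of the image.
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    x_mix = x.copy()
    x_mix[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]    # paste the box
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)          # actual area kept
    y_mix = lam_adj * y_onehot + (1.0 - lam_adj) * y_onehot[perm]
    return x_mix, y_mix
```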


* **Manifold Mixup: Better Representations by Interpolating Hidden States**

*Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio*

ICML'2019 [[Paper](https://arxiv.org/abs/1806.05236)]
[[Code](https://github.com/vikasverma1077/manifold_mixup)]

ManifoldMix Framework
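
A hedged PyTorch sketch of the Manifold Mixup idea: interpolate hidden states (and labels) at a randomly chosen layer instead of only at the input. The tiny two-stage network below is a placeholder of ours, purely for illustration.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy two-stage network so hidden states can be mixed (illustrative placeholder)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
        ])
        self.head = nn.Linear(32, num_classes)

    def forward(self, x, perm=None, lam=1.0, mix_layer=0):
        for k, stage in enumerate(self.stages):
            if perm is not None and k == mix_layer:
                # Manifold Mixup: interpolate the hidden state (the raw input when k == 0).
                x = lam * x + (1.0 - lam) * x[perm]
            x = stage(x)
        return self.head(x.flatten(1))

# one illustrative step: mix at a random depth and weight both label sets by lam
net, x = TinyNet(), torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
lam = float(torch.distributions.Beta(2.0, 2.0).sample())
perm = torch.randperm(x.size(0))
mix_layer = int(torch.randint(0, len(net.stages), (1,)))
logits = net(x, perm=perm, lam=lam, mix_layer=mix_layer)
loss = lam * nn.functional.cross_entropy(logits, y) \
       + (1.0 - lam) * nn.functional.cross_entropy(logits, y[perm])
```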


* **Improved Mixed-Example Data Augmentation**

*Cecilia Summers, Michael J. Dinneen*

WACV'2019 [[Paper](https://arxiv.org/abs/1805.11272)]
[[Code](https://github.com/ceciliaresearch/MixedExample)]

MixedExamples Framework


* **FMix: Enhancing Mixed Sample Data Augmentation**

*Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare*

arXiv'2020 [[Paper](https://arxiv.org/abs/2002.12047)]
[[Code](https://github.com/ecs-vlc/FMix)]

FMix Framework


* **SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers**

*Jin-Ha Lee, Muhammad Zaigham Zaheer, Marcella Astrid, Seung-Ik Lee*

CVPRW'2020 [[Paper](https://openaccess.thecvf.com/content_CVPRW_2020/html/w45/Lee_SmoothMix_A_Simple_Yet_Effective_Data_Augmentation_to_Train_Robust_CVPRW_2020_paper.html)]
[[Code](https://github.com/Westlake-AI/openmixup)]

SmoothMix Framework


* **PatchUp: A Regularization Technique for Convolutional Neural Networks**

*Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar*

Arxiv'2020 [[Paper](https://arxiv.org/abs/2006.07794)]
[[Code](https://github.com/chandar-lab/PatchUp)]

PatchUp Framework


* **GridMix: Strong regularization through local context mapping**

*Kyungjune Baek, Duhyeon Bang, Hyunjung Shim*

Pattern Recognition'2021 [[Paper](https://www.sciencedirect.com/science/article/pii/S0031320320303976)]
[[Code](https://github.com/IlyaDobrynin/GridMixup)]

GridMixup Framework


* **ResizeMix: Mixing Data with Preserved Object Information and True Labels**

*Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, Xinggang Wang*

arXiv'2020 [[Paper](https://arxiv.org/abs/2012.11101)]
[[Code](https://github.com/Westlake-AI/openmixup)]

ResizeMix Framework


* **Where to Cut and Paste: Data Regularization with Selective Features**

*Jiyeon Kim, Ik-Hee Shin, Jong-Ryul Lee, Yong-Ju Lee*

ICTC'2020 [[Paper](https://ieeexplore.ieee.org/abstract/document/9289404)]
[[Code](https://github.com/google-research/augmix)]

FocusMix Framework


* **AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty**

*Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan*

ICLR'2020 [[Paper](https://arxiv.org/abs/1912.02781)]
[[Code](https://github.com/google-research/augmix)]

AugMix Framework


* **DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness**

*Ryuichiro Hataya, Hideki Nakayama*

Arxiv'2021 [[Paper](https://openreview.net/pdf?id=0n3BaVlNsHI)]

DJMix Framework


* **PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures**

*Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt*

Arxiv'2021 [[Paper](https://arxiv.org/abs/2112.05135)]
[[Code](https://github.com/andyzoujm/pixmix)]

PixMix Framework


* **StyleMix: Separating Content and Style for Enhanced Data Augmentation**

*Minui Hong, Jinwoo Choi, Gunhee Kim*

CVPR'2021 [[Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Hong_StyleMix_Separating_Content_and_Style_for_Enhanced_Data_Augmentation_CVPR_2021_paper.pdf)]
[[Code](https://github.com/alsdml/StyleMix)]

StyleMix Framework


* **Domain Generalization with MixStyle**

*Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang*

ICLR'2021 [[Paper](https://openreview.net/forum?id=6xHJ37MVxxp)]
[[Code](https://github.com/KaiyangZhou/mixstyle-release)]

MixStyle Framework


* **On Feature Normalization and Data Augmentation**

*Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger*

CVPR'2021 [[Paper](https://arxiv.org/abs/2002.11102)]
[[Code](https://github.com/Boyiliee/MoEx)]

MoEx Framework


* **Guided Interpolation for Adversarial Training**

*Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama*

ArXiv'2021 [[Paper](https://arxiv.org/abs/2102.07327)]

GIF Framework


* **Observations on K-image Expansion of Image-Mixing Augmentation for Classification**

*Joonhyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi*

IEEE Access'2021 [[Paper](https://arxiv.org/abs/2110.04248)]
[[Code](https://github.com/yjyoo3312/DCutMix-PyTorch)]

DCutMix Framework


* **Noisy Feature Mixup**

*Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney*

ICLR'2022 [[Paper](https://arxiv.org/abs/2110.02180)]
[[Code](https://github.com/erichson/NFM)]

NFM Framework


* **Preventing Manifold Intrusion with Locality: Local Mixup**

*Raphael Baena, Lucas Drumetz, Vincent Gripon*

EUSIPCO'2022 [[Paper](https://arxiv.org/abs/2201.04368)]
[[Code](https://github.com/raphael-baena/Local-Mixup)]

LocalMix Framework


* **RandomMix: A mixed sample data augmentation method with multiple mixed modes**

*Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2205.08728)]

RandomMix Framework


* **SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation**

*Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2204.08458)]
[[Code](https://github.com/hammoudiproject/SuperpixelGridMasks)]

SuperpixelGridCut Framework


* **AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance**

*Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie*

ICME'2022 [[Paper](https://arxiv.org/abs/2207.10290)]

AugRmixAT Framework


* **A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective**

*Chanwoo Park, Sangdoo Yun, Sanghyuk Chun*

NIPS'2022 [[Paper](https://arxiv.org/abs/2208.09913)]
[[Code](https://github.com/naver-ai/hmix-gmix)]

MSDA Framework


* **RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness**

*Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania*

NIPS'2022 [[Paper](https://arxiv.org/abs/2206.14502)]
[[Code](https://github.com/FrancescoPinto/RegMixup)]

RegMixup Framework


* **ContextMix: A context-aware data augmentation method for industrial visual inspection systems**

*Hyungmin Kim, Donghun Kim, Pyunghwan Ahn, Sungho Suh, Hansang Cho, Junmo Kim*

EAAI'2024 [[Paper](https://arxiv.org/abs/2401.10050)]

ContextMix Framework


(back to top)

#### **Adaptive Policies**

* **SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization**

*A F M Shahab Uddin and Mst. Sirazam Monira and Wheemyung Shin and TaeChoong Chung and Sung-Ho Bae*

ICLR'2021 [[Paper](https://arxiv.org/abs/2006.01791)]
[[Code](https://github.com/SaliencyMix/SaliencyMix)]

SaliencyMix Framework


* **Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification**

*Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, Marios Savvides*

ICASSP'2020 [[Paper](https://arxiv.org/abs/2003.13048)]
[[Code](https://github.com/xden2331/attentive_cutmix)]

AttentiveMix Framework


* **SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data**

*Shaoli Huang, Xinchao Wang, Dacheng Tao*

AAAI'2021 [[Paper](https://arxiv.org/abs/2012.04846)]
[[Code](https://github.com/Shaoli-Huang/SnapMix)]

SnapMix Framework


* **Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition**

*Hao Li, Xiaopeng Zhang, Hongkai Xiong, Qi Tian*

VCIP'2020 [[Paper](https://arxiv.org/abs/2004.02684)]

AttributeMix Framework


* **On Adversarial Mixup Resynthesis**

*Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R Devon Hjelm, Yoshua Bengio, Christopher Pal*

NIPS'2019 [[Paper](https://arxiv.org/abs/1903.02709)]
[[Code](https://github.com/christopher-beckham/amr)]

AMR Framework


* **Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy**

*Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu*

ArXiv'2019 [[Paper](https://arxiv.org/abs/1911.09307)]

Pani VAT Framework


* **AutoMix: Mixup Networks for Sample Interpolation via Cooperative Barycenter Learning**

*Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha*

ECCV'2020 [[Paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123550630.pdf)]

AutoMix Framework


* **PuzzleMix: Exploiting Saliency and Local Statistics for Optimal Mixup**

*Jang-Hyun Kim, Wonho Choo, Hyun Oh Song*

ICML'2020 [[Paper](https://arxiv.org/abs/2009.06962)]
[[Code](https://github.com/snu-mllab/PuzzleMix)]

PuzzleMix Framework


* **Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity**

*Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song*

ICLR'2021 [[Paper](https://arxiv.org/abs/2102.03065)]
[[Code](https://github.com/snu-mllab/Co-Mixup)]

Co-Mixup Framework


* **SuperMix: Supervising the Mixing Data Augmentation**

*Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi*

CVPR'2021 [[Paper](https://arxiv.org/abs/2003.05034)]
[[Code](https://github.com/alldbi/SuperMix)]

SuperMix Framework


* **Evolving Image Compositions for Feature Representation Learning**

*Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez*

BMVC'2021 [[Paper](https://arxiv.org/abs/2106.09011)]

PatchMix Framework


* **StackMix: A complementary Mix algorithm**

*John Chen, Samarth Sinha, Anastasios Kyrillidis*

UAI'2022 [[Paper](https://arxiv.org/abs/2011.12618)]

StackMix Framework


* **SalfMix: A Novel Single Image-Based Data Augmentation Technique Using a Saliency Map**

*Jaehyeop Choi, Chaehyeon Lee, Donggyu Lee, Heechul Jung*

Sensors'2021 [[Paper](https://pdfs.semanticscholar.org/1db9/c80edeed50858783c69237aeba764750e8b7.pdf?_ga=2.182064935.1813772674.1674154381-1810295069.1625160008)]

SalfMix Framework


* **k-Mixup Regularization for Deep Learning via Optimal Transport**

*Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, Edward Chien*

ArXiv'2021 [[Paper](https://arxiv.org/abs/2106.02933)]

k-Mixup Framework


* **AlignMix: Improving representation by interpolating aligned features**

*Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis*

CVPR'2022 [[Paper](https://arxiv.org/abs/2103.15375)]
[[Code](https://github.com/shashankvkt/AlignMixup_CVPR22)]

AlignMix Framework


* **AutoMix: Unveiling the Power of Mixup for Stronger Classifiers**

*Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, Stan Z. Li*

ECCV'2022 [[Paper](https://arxiv.org/abs/2103.13027)]
[[Code](https://github.com/Westlake-AI/openmixup)]

AutoMix Framework


* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**

*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*

Arxiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
[[Code](https://github.com/Westlake-AI/openmixup)]

SAMix Framework


* **ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification**

*Thomas Stegmüller, Behzad Bozorgtabar, Antoine Spahr, Jean-Philippe Thiran*

Arxiv'2022 [[Paper](https://arxiv.org/abs/2202.07570)]

ScoreMix Framework


* **RecursiveMix: Mixed Learning with History**

*Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang*

NIPS'2022 [[Paper](https://arxiv.org/abs/2203.06844)]
[[Code](https://github.com/implus/RecursiveMix-pytorch)]

RecursiveMix Framework


* **Expeditious Saliency-guided Mix-up through Random Gradient Thresholding**

*Remy Sun, Clement Masson, Gilles Henaff, Nicolas Thome, Matthieu Cord*

ICPR'2022 [[Paper](https://arxiv.org/abs/2205.10158)]

SciMix Framework


* **TransformMix: Learning Transformation and Mixing Strategies for Sample-mixing Data Augmentation**

*Tsz-Him Cheung, Dit-Yan Yeung*

OpenReview'2023 [[Paper](https://openreview.net/forum?id=-1vpxBUtP0B)]

TransformMix Framework


* **GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps**

*Minsoo Kang, Suhyun Kim*

AAAI'2023 [[Paper](https://arxiv.org/abs/2306.16612)]

GuidedMixup Framework


* **MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer**

*Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu*

ICLR'2023 [[Paper](https://openreview.net/forum?id=dRjWsd3gwsm)]
[[Code](https://github.com/fistyee/MixPro)]

MixPro Framework


* **Expeditious Saliency-guided Mix-up through Random Gradient Thresholding**

*Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang*

2nd Practical-DL Workshop @ AAAI'23 [[Paper](https://arxiv.org/abs/2212.04875)]
[[Code](https://github.com/minhlong94/Random-Mixup)]

R-Mix and R-LMix Framework


* **SMMix: Self-Motivated Image Mixing for Vision Transformers**

*Mengzhao Chen, Mingbao Lin, ZhiHang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji*

ICCV'2023 [[Paper](https://arxiv.org/abs/2212.12977)]
[[Code](https://github.com/chenmnz/smmix)]

SMMix Framework


* **Embedding Space Interpolation Beyond Mini-Batch, Beyond Pairs and Beyond Examples**

*Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis*

NeurIPS'2023 [[Paper](https://arxiv.org/abs/2206.14868)]

MultiMix Framework


* **GradSalMix: Gradient Saliency-Based Mix for Image Data Augmentation**

*Tao Hong, Ya Wang, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Jinwen Ma*

ICME'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10219625)]

GradSalMix Framework


* **LGCOAMix: Local and Global Context-and-Object-Part-Aware Superpixel-Based Data Augmentation for Deep Visual Recognition**

*Fadi Dornaika, Danyang Sun*

TIP'2023 [[Paper](https://ieeexplore.ieee.org/document/10348509)]
[[Code](https://github.com/DanielaPlusPlus/LGCOAMix)]

LGCOAMix Framework


* **Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN**

*Minsoo Kang, Minkoo Kang, Suhyun Kim*

AAAI'2024 [[Paper](https://arxiv.org/abs/2401.13193)]

Catch-Up-Mix Framework


* **Adversarial AutoMixup**

*Huafeng Qin, Xin Jin, Yun Jiang, Mounim A. El-Yacoubi, Xinbo Gao*

ICLR'2024 [[Paper](https://arxiv.org/abs/2312.11954)]
[[Code](https://github.com/jinxins/adversarial-automixup)]

AdAutoMix Framework


(back to top)

### Label Mixup Methods

* **mixup: Beyond Empirical Risk Minimization**

*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*

ICLR'2018 [[Paper](https://arxiv.org/abs/1710.09412)]
[[Code](https://github.com/facebookresearch/mixup-cifar10)]

* **CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features**

*Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo*

ICCV'2019 [[Paper](https://arxiv.org/abs/1905.04899)]
[[Code](https://github.com/clovaai/CutMix-PyTorch)]
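
Most label mixup methods in this section refine a common baseline: weight the two targets by the mixing ratio, which for hard labels is equivalent to a lam-weighted sum of two cross-entropy terms. A minimal PyTorch sketch of that baseline, for reference only (the names are ours):

```python
import torch
import torch.nn.functional as F

def mixup_criterion(logits, y_a, y_b, lam):
    """Baseline label mixup policy: lam-weighted cross-entropy over both target sets."""
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

# usage with any sample mixup policy that yields (x_mix, y_a, y_b, lam):
#   logits = model(x_mix)
#   loss = mixup_criterion(logits, y_a, y_b, lam)
```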

* **Metamixup: Learning adaptive interpolation policy of mixup with metalearning**

*Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, Heng Tao Shen*

TNNLS'2021 [[Paper](https://arxiv.org/abs/1908.10059)]

MetaMixup Framework


* **Mixup Without Hesitation**

*Hao Yu, Huanyu Wang, Jianxin Wu*

ICIG'2022 [[Paper](https://arxiv.org/abs/2101.04342)]
[[Code](https://github.com/yuhao318/mwh)]

* **Combining Ensembles and Data Augmentation can Harm your Calibration**

*Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran*

ICLR'2021 [[Paper](https://arxiv.org/abs/2010.09875)]
[[Code](https://github.com/google/edward2/tree/main/experimental/marginalization_mixup)]

CAMixup Framework


* **All Tokens Matter: Token Labeling for Training Better Vision Transformers**

*Zihang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng*

NIPS'2021 [[Paper](https://arxiv.org/abs/2104.10858)]
[[Code](https://github.com/zihangJiang/TokenLabeling)]

TokenLabeling Framework


* **Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing**

*Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang*

AAAI'2022 [[Paper](https://arxiv.org/abs/2112.08796)]

Saliency Grafting Framework


* **TransMix: Attend to Mix for Vision Transformers**

*Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai*

CVPR'2022 [[Paper](https://arxiv.org/abs/2111.09833)]
[[Code](https://github.com/Beckschen/TransMix)]

TransMix Framework


* **GenLabel: Mixup Relabeling using Generative Models**

*Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2201.02354)]

GenLabel Framework


* **Harnessing Hard Mixed Samples with Decoupled Regularizer**

*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*

NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
[[Code](https://github.com/Westlake-AI/openmixup)]

DecoupleMix Framework


* **TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers**

*Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu*

ECCV'2022 [[Paper](https://arxiv.org/abs/2207.08409)]
[[Code](https://github.com/Sense-X/TokenMix)]

TokenMix Framework


* **Optimizing Random Mixup with Gaussian Differential Privacy**

*Donghao Li, Yang Cao, Yuan Yao*

arXiv'2022 [[Paper](https://arxiv.org/abs/2202.06467)]

* **TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers**

*Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim*

NIPS'2022 [[Paper](https://arxiv.org/abs/2210.07562)]
[[Code](https://github.com/mlvlab/TokenMixup)]

TokenMixup Framework


* **Token-Label Alignment for Vision Transformers**

*Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu*

arXiv'2022 [[Paper](https://arxiv.org/abs/2210.06455)]
[[Code](https://github.com/Euphoria16/TL-Align)]

TL-Align Framework


* **LUMix: Improving Mixup by Better Modelling Label Uncertainty**

*Shuyang Sun, Jie-Neng Chen, Ruifei He, Alan Yuille, Philip Torr, Song Bai*

arXiv'2022 [[Paper](https://arxiv.org/abs/2211.15846)]
[[Code](https://github.com/kevin-ssy/LUMix)]

LUMix Framework


* **MixupE: Understanding and Improving Mixup from Directional Derivative Perspective**

*Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi*

UAI'2023 [[Paper](https://arxiv.org/abs/2212.13381)]
[[Code](https://github.com/onehuster/mixupe)]

MixupE Framework


* **Infinite Class Mixup**

*Thomas Mensink, Pascal Mettes*

arXiv'2023 [[Paper](https://arxiv.org/abs/2305.10293)]

IC-Mixup Framework


* **Semantic Equivariant Mixup**

*Zongbo Han, Tianchi Xie, Bingzhe Wu, Qinghua Hu, Changqing Zhang*

arXiv'2023 [[Paper](https://arxiv.org/abs/2308.06451)]

SEM Framework


* **RankMixup: Ranking-Based Mixup Training for Network Calibration**

*Jongyoun Noh, Hyekang Park, Junghyup Lee, Bumsub Ham*

ICCV'2023 [[Paper](https://arxiv.org/abs/2308.11990)]
[[Code](https://cvlab.yonsei.ac.kr/projects/RankMixup)]

RankMixup Framework


* **G-Mix: A Generalized Mixup Learning Framework Towards Flat Minima**

*Xingyu Li, Bo Tang*

arXiv'2023 [[Paper](https://arxiv.org/abs/2308.03236)]

G-Mix Framework


(back to top)

## Mixup for Self-supervised Learning

* **MixCo: Mix-up Contrastive Learning for Visual Representation**

*Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun*

NIPSW'2020 [[Paper](https://arxiv.org/abs/2010.06300)]
[[Code](https://github.com/Lee-Gihun/MixCo-Mixup-Contrast)]

MixCo Framework


* **Hard Negative Mixing for Contrastive Learning**

*Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus*

NIPS'2020 [[Paper](https://arxiv.org/abs/2010.01028)]
[[Code](https://europe.naverlabs.com/mochi)]

MoCHi Framework


* **i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning**

*Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee*

ICLR'2021 [[Paper](https://arxiv.org/abs/2010.08887)]
[[Code](https://github.com/kibok90/imix)]

i-Mix Framework


* **Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation**

*Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing*

AAAI'2022 [[Paper](https://arxiv.org/abs/2003.05438)]
[[Code](https://github.com/szq0214/Un-Mix)]

Un-Mix Framework


* **Beyond Single Instance Multi-view Unsupervised Representation Learning**

*Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei*

BMVC'2022 [[Paper](https://arxiv.org/abs/2011.13356)]

BSIM Framework


* **Improving Contrastive Learning by Visualizing Feature Transformation**

*Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen*

ICCV'2021 [[Paper](https://arxiv.org/abs/2108.02982)]
[[Code](https://github.com/DTennant/CL-Visualizing-Feature-Transformation)]

FT Framework


* **Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning**

*Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng*

OpenReview'2021 [[Paper](https://openreview.net/forum?id=DnG8f7gweH4)]

PCEA Framework


* **Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing**

*Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das*

NIPS'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
[[Code](https://cvir.github.io/projects/comix)]

CoMix Framework


* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**

*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*

Arxiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
[[Code](https://github.com/Westlake-AI/openmixup)]

SAMix Framework


* **MixSiam: A Mixture-based Approach to Self-supervised Representation Learning**

*Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du*

OpenReview'2021 [[Paper](https://arxiv.org/abs/2111.02679)]

MixSiam Framework


* **Mix-up Self-Supervised Learning for Contrast-agnostic Applications**

*Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann*

ICME'2021 [[Paper](https://arxiv.org/abs/2204.00901)]

MixSSL Framework


* **Towards Domain-Agnostic Contrastive Learning**

*Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le*

ICML'2021 [[Paper](https://arxiv.org/abs/2011.04419)]

DACL Framework


* **Center-wise Local Image Mixture For Contrastive Representation Learning**

*Hao Li, Xiaopeng Zhang, Hongkai Xiong*

BMVC'2021 [[Paper](https://arxiv.org/abs/2011.02697)]

CLIM Framework


* **Contrastive-mixup Learning for Improved Speaker Verification**

*Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke*

ICASSP'2022 [[Paper](https://arxiv.org/abs/2202.10672)]

Mixup Framework


* **ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning**

*Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z.Li*

ICML'2022 [[Paper](https://arxiv.org/abs/2110.02027)]
[[Code](https://github.com/junxia97/ProGCL)]

ProGCL Framework


* **M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning**

*Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang*

KDD'2022 [[Paper](https://sherrylone.github.io/assets/KDD22_M-Mix.pdf)]
[[Code](https://github.com/Sherrylone/m-mix)]

M-Mix Framework


* **A Simple Data Mixing Prior for Improving Self-Supervised Learning**

*Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie*

CVPR'2022 [[Paper](https://arxiv.org/abs/2206.07692)]
[[Code](https://github.com/oliverrensu/sdmp)]

SDMP Framework


* **On the Importance of Asymmetry for Siamese Representation Learning**

*Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen*

CVPR'2022 [[Paper](https://arxiv.org/abs/2204.00613)]
[[Code](https://github.com/facebookresearch/asym-siam)]

ScaleMix Framework


* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**

*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*

ICML'2022 [[Paper](https://arxiv.org/abs/2206.08919)]

VLMixer Framework


* **CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping**

*Junlin Han, Lars Petersson, Hongdong Li, Ian Reid*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2205.15955)]
[[Code](https://github.com/JunlinHan/CropMix)]

CropMix Framework


* **i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable**

*Kevin Zhang, Zhiqiang Shen*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2210.11470)]
[[Code](https://github.com/vision-learning-acceleration-lab/i-mae)]

i-MAE Framework


* **MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers**

*Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li*

CVPR'2023 [[Paper](https://arxiv.org/abs/2205.13137)]
[[Code](https://github.com/Sense-X/MixMIM)]

MixMAE Framework


* **Mixed Autoencoder for Self-supervised Visual Representation Learning**

*Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung*

CVPR'2023 [[Paper](https://arxiv.org/abs/2303.17152)]

MixedAE Framework


* **Inter-Instance Similarity Modeling for Contrastive Learning**

*Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2306.12243)]
[[Code](https://github.com/visresearch/patchmix)]

PatchMix Framework


* **Guarding Barlow Twins Against Overfitting with Mixed Samples**

*Wele Gedara Chaminda Bandara, Celso M. De Melo, Vishal M. Patel*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2312.02151)]
[[Code](https://github.com/wgcban/mix-bt)]

Mixed Barlow Twins Framework


(back to top)

## Mixup for Semi-supervised Learning

* **MixMatch: A Holistic Approach to Semi-Supervised Learning**

*David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel*

NIPS'2019 [[Paper](https://arxiv.org/abs/1905.02249)]
[[Code](https://github.com/google-research/mixmatch)]

MixMatch Framework


* **Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy**

*Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu*

ArXiv'2019 [[Paper](https://arxiv.org/abs/1911.09307)]

Pani VAT Framework


* **ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring**

*David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel*

ICLR'2020 [[Paper](https://openreview.net/forum?id=HklkeR4KPB)]
[[Code](https://github.com/google-research/remixmatch)]

ReMixMatch Framework


* **DivideMix: Learning with Noisy Labels as Semi-supervised Learning**

*Junnan Li, Richard Socher, Steven C.H. Hoi*

ICLR'2020 [[Paper](https://arxiv.org/abs/2002.07394)]
[[Code](https://github.com/LiJunnan1992/DivideMix)]

DivideMix Framework


* **Epsilon Consistent Mixup: Structural Regularization with an Adaptive Consistency-Interpolation Tradeoff**

*Vincent Pisztora, Yanglan Ou, Xiaolei Huang, Francesca Chiaromonte, Jia Li*

ArXiv'2021 [[Paper](https://arxiv.org/abs/2104.09452)]

Epsilon Consistent Mixup (ϵmu) Framework


* **Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning**

*Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng*

NIPS'2021 [[Paper](https://arxiv.org/abs/2102.06605)]
[[Code](https://github.com/vanint/core-tuning)]

Core-Tuning Framework


* **MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection**

*JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak*

CVPR'2022 [[Paper](https://arxiv.org/abs/2111.10958)]
[[Code](https://github.com/jongmokkim/mix-unmix)]

MUM Framework


* **Harnessing Hard Mixed Samples with Decoupled Regularizer**

*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*

NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
[[Code](https://github.com/Westlake-AI/openmixup)]

DFixMatch Framework


* **Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise**

*Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi*

Arxiv'2023 [[Paper](https://arxiv.org/abs/2308.06861)]
[[Code](https://github.com/Fahim-F/ManifoldDivideMix)]

MixEMatch Framework


* **LaserMix for Semi-Supervised LiDAR Semantic Segmentation**

*Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu*

CVPR'2023 [[Paper](https://arxiv.org/abs/2207.00026)]
[[Code](https://github.com/ldkong1205/LaserMix)] [[project](https://ldkong.com/LaserMix)]

LaserMix Framework


* **Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation**

*Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2308.16573)]

DCPA Framework


* **Mixed Pseudo Labels for Semi-Supervised Object Detection**

*Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2312.07006)]
[[Code](https://github.com/czm369/mixpl)]

MixPL Framework


* **PCLMix: Weakly Supervised Medical Image Segmentation via Pixel-Level Contrastive Learning and Dynamic Mix Augmentation**

*Yu Lei, Haolun Luo, Lituan Wang, Zhenwei Zhang, Lei Zhang*

ArXiv'2024 [[Paper](https://arxiv.org/abs/2405.06288)]
[[Code](https://github.com/Torpedo2648/PCLMix)]

PCLMix Framework


## Mixup for Regression

* **RegMix: Data Mixing Augmentation for Regression**

*Seong-Hyeon Hwang, Steven Euijong Whang*

ArXiv'2021 [[Paper](https://arxiv.org/abs/2106.03374)]

* **C-Mixup: Improving Generalization in Regression**

*Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn*

NeurIPS'2022 [[Paper](https://arxiv.org/abs/2210.05775)]
[[Code](https://github.com/huaxiuyao/C-Mixup)]
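
To illustrate why regression calls for a tailored pairing distribution, here is a hedged NumPy sketch in the spirit of C-Mixup: mixing partners are sampled with a Gaussian kernel on label distance, then inputs and continuous targets are interpolated. It is our simplification of the idea, not the released implementation.

```python
import numpy as np

def cmixup_style_batch(x, y, alpha=2.0, sigma=1.0):
    """Sketch: sample mixing partners weighted by closeness of regression targets."""
    n = len(x)
    lam = np.random.beta(alpha, alpha)
    # Gaussian kernel on label distance -> per-sample distribution over partners.
    dist2 = (y[:, None] - y[None, :]) ** 2          # (n, n) squared label distances
    probs = np.exp(-dist2 / (2.0 * sigma ** 2))
    probs /= probs.sum(axis=1, keepdims=True)
    partners = np.array([np.random.choice(n, p=probs[i]) for i in range(n)])
    x_mix = lam * x + (1.0 - lam) * x[partners]
    y_mix = lam * y + (1.0 - lam) * y[partners]     # continuous targets are interpolated too
    return x_mix, y_mix

# toy regression batch: 16 samples, 5 features, scalar targets
x, y = np.random.rand(16, 5), np.random.rand(16)
x_mix, y_mix = cmixup_style_batch(x, y)
```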

* **ExtraMix: Extrapolatable Data Augmentation for Regression using Generative Models**

*Kisoo Kwon, Kuhwan Jeong, Sanghyun Park, Sangha Park, Hoshik Lee, Seung-Yeon Kwak, Sungmin Kim, Kyunghyun Cho*

OpenReview'2022 [[Paper](https://openreview.net/forum?id=NgEuFT-SIgI)]

* **Anchor Data Augmentation**

*Nora Schneider, Shirin Goshtasbpour, Fernando Perez-Cruz*

NeurIPS'2023 [[Paper](https://arxiv.org/abs/2311.06965)]

* **Rank-N-Contrast: Learning Continuous Representations for Regression**

*Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, Dina Katabi*

NeurIPS'2023 [[Paper](https://arxiv.org/abs/2210.01189)]
[[Code](https://github.com/kaiwenzha/Rank-N-Contrast)]

* **Mixup Your Own Pairs**

*Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2309.16633)]
[[Code](https://github.com/yilei-wu/supremix)]

SupReMix Framework


* **Tailoring Mixup to Data using Kernel Warping functions**

*Quentin Bouniot, Pavlo Mozharovskyi, Florence d'Alché-Buc*

ArXiv'2023 [[Paper](https://arxiv.org/abs/2311.01434)]
[[Code](https://github.com/ENSTA-U2IS/torch-uncertainty)]

Kernel Warping Mixup Framework


* **OmniMixup: Generalize Mixup with Mixing-Pair Sampling Distribution**

*Anonymous*

Openreview'2023 [[Paper](https://openreview.net/forum?id=6Uc7Fgwrsm)]

* **Augment on Manifold: Mixup Regularization with UMAP**

*Yousef El-Laham, Elizabeth Fons, Dillon Daudert, Svitlana Vyetrenko*

ICASSP'2024 [[Paper](https://arxiv.org/abs/2210.01189)]

## Mixup for Robustness

* **Mixup as directional adversarial training**

*Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang*

NeurIPS'2019 [[Paper](https://arxiv.org/abs/1906.06875)]
[[Code](https://github.com/mixupAsDirectionalAdversarial/mixup_as_dat)]

* **Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks**

*Tianyu Pang, Kun Xu, Jun Zhu*

ICLR'2020 [[Paper](https://arxiv.org/abs/1909.11515)]
[[Code](https://github.com/P2333/Mixup-Inference)]

* **Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training**

*Alfred Laugros, Alice Caplier, Matthieu Ospici*

ECCV'2020 [[Paper](https://arxiv.org/abs/2008.08384)]

* **Mixup Training as the Complexity Reduction**

*Masanari Kimura*

OpenReview'2021 [[Paper](https://openreview.net/forum?id=xvWZQtxI7qq)]

* **Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization**

*Saehyung Lee, Hyungyu Lee, Sungroh Yoon*

CVPR'2020 [[Paper](https://arxiv.org/abs/2003.02484)]
[[Code](https://github.com/Saehyung-Lee/cifar10_challenge)]

* **MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps**

*Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li*

NeurIPS'2021 [[Paper](https://arxiv.org/abs/2111.05073)]

* **On the benefits of defining vicinal distributions in latent space**

*Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian*

CVPRW'2021 [[Paper](https://arxiv.org/abs/2003.06566)]

## Low-level Vision

* **Robust Image Denoising through Adversarial Frequency Mixup**

*Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han*

CVPR'2024 [[Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ryou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.pdf)]
[[Code](https://github.com/dhryougit/AFM)]

(back to top)

## Mixup for Multi-modality

* **MixGen: A New Multi-Modal Data Augmentation**

*Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Bo Li, Mu Li*

arXiv'2023 [[Paper](https://arxiv.org/abs/2206.08358)]
[[Code](https://github.com/amazon-research/mix-generation)]
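
As a rough illustration of the image-text mixing idea behind MixGen (as we read it: interpolate the two images, concatenate the two captions), here is a hedged sketch; consult the paper and code for the exact procedure.

```python
import numpy as np

def mixgen_style_pair(img_a, img_b, txt_a, txt_b, lam=0.5):
    """Sketch of image-text mixing: blend pixels, concatenate captions (our reading of MixGen)."""
    mixed_img = lam * img_a + (1.0 - lam) * img_b   # linear interpolation in pixel space
    mixed_txt = txt_a + " " + txt_b                 # concatenate the two captions
    return mixed_img, mixed_txt

img_a, img_b = np.random.rand(3, 224, 224), np.random.rand(3, 224, 224)
mixed_img, mixed_txt = mixgen_style_pair(img_a, img_b, "a dog on grass", "a red bicycle")
```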

* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**

*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*

arXiv'2022 [[Paper](https://arxiv.org/abs/2206.08919)]

* **Geodesic Multi-Modal Mixup for Robust Fine-Tuning**

*Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song*

NeurIPS'2023 [[Paper](https://arxiv.org/abs/2203.03897)]
[[Code](https://github.com/changdaeoh/multimodal-mixup)]

* **PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis**

*Efthymios Georgiou, Yannis Avrithis, Alexandros Potamianos*

arXiv'2023 [[Paper](https://arxiv.org/abs/2312.12334)]

PowMix Framework


## Analysis of Mixup

* **On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks**

*Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak*

NeurIPS'2019 [[Paper](https://arxiv.org/abs/1905.11001)]
[[Code](https://github.com/paganpasta/onmixup)]

Framework


* **On Mixup Regularization**

*Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert*

ArXiv'2020 [[Paper](https://arxiv.org/abs/2006.06049)]

Framework


* **How Does Mixup Help With Robustness and Generalization?**

*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou*

ICLR'2021 [[Paper](https://arxiv.org/abs/2010.04819)]

Framework


* **Towards Understanding the Data Dependency of Mixup-style Training**

*Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge*

ICLR'2022 [[Paper](https://openreview.net/pdf?id=ieNJYujcGDO)]
[[Code](https://github.com/2014mchidamb/Mixup-Data-Dependency)]

Framework


* **When and How Mixup Improves Calibration**

*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou*

ICML'2022 [[Paper](https://arxiv.org/abs/2102.06289)]

Framework


* **Over-Training with Mixup May Hurt Generalization**

*Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao*

ICLR'2023 [[Paper](https://openreview.net/forum?id=JmkjrlVE-DG)]

Framework


* **Provable Benefit of Mixup for Finding Optimal Decision Boundaries**

*Junsoo Oh, Chulhee Yun*

ICML'2023 [[Paper](https://chulheeyun.github.io/publication/oh2023provable/)]

* **On the Pitfall of Mixup for Uncertainty Calibration**

*Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang*

CVPR'2023 [[Paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023_paper.pdf)]

* **Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study**

*Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga*

WACV'2023 [[Paper](https://arxiv.org/abs/2211.03946)]
[[Code](https://github.com/hchoi71/mix-kd)]

* **Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability**

*Soyoun Won, Sung-Ho Bae, Seong Tae Kim*

arXiv'2023 [[Paper](https://arxiv.org/abs/2303.14608)]

* **Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup**

*Damien Teney, Jindong Wang, Ehsan Abbasnejad*

arXiv'2023 [[Paper](https://arxiv.org/abs/2305.16817)]

(back to top)

## Natural Language Processing

* **Augmenting Data with Mixup for Sentence Classification: An Empirical Study**

*Hongyu Guo, Yongyi Mao, Richong Zhang*

arXiv'2019 [[Paper](https://arxiv.org/abs/1905.08941)]
[[Code](https://github.com/dsfsi/textaugment)]

* **Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks**

*Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S. Yu, Lifang He*

COLING'2020 [[Paper](https://arxiv.org/abs/2010.02394)]

* **Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data**

*Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang*

EMNLP'2020 [[Paper](https://arxiv.org/abs/2010.11506)]
[[Code](https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning)]

* **Augmenting NLP Models using Latent Feature Interpolations**

*Amit Jindal, Arijit Ghosh Chowdhury, Aniket Didolkar, Di Jin, Ramit Sawhney, Rajiv Ratn Shah*

COLING'2020 [[Paper](https://aclanthology.org/2020.coling-main.611/)]

* **MixText: Linguistically-informed Interpolation of Hidden Space for Semi-Supervised Text Classification**

*Jiaao Chen, Zichao Yang, Diyi Yang*

ACL'2020 [[Paper](https://arxiv.org/abs/2004.12239)]
[[Code](https://github.com/GT-SALT/MixText)]

* **TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding**

*Le Zhang, Zichao Yang, Diyi Yang*

NAACL'2022 [[Paper](https://arxiv.org/abs/2205.06153)]
[[Code](https://github.com/magiccircuit/treemix)]

* **STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation**

*Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang*

ACL'2022 [[Paper](https://arxiv.org/abs/2010.02394)]
[[Code](https://github.com/ictnlp/STEMM)]

* **Enhancing Cross-lingual Transfer by Manifold Mixup**

*Huiyun Yang, Huadong Chen, Hao Zhou, Lei Li*

ICLR'2022 [[Paper](https://arxiv.org/abs/2205.04182)]
[[Code](https://github.com/yhy1117/x-mixup)]

## Graph Representation Learning

* **Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications**

*Xinyu Ma, Xu Chu, Yasha Wang, Yang Lin, Junfeng Zhao, Liantao Ma, Wenwu Zhu*

NeurIPS'2023 [[Paper](https://arxiv.org/abs/2306.15963)]
[[code](https://github.com/ArthurLeoM/FGWMixup)]

* **G-Mixup: Graph Data Augmentation for Graph Classification**

*Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu*

ICML'2022 [[Paper](https://arxiv.org/abs/2202.07179)]

(back to top)

## Survey

* **A survey on Image Data Augmentation for Deep Learning**

*Connor Shorten and Taghi Khoshgoftaar*

Journal of Big Data'2019 [[Paper](https://www.researchgate.net/publication/334279066_A_survey_on_Image_Data_Augmentation_for_Deep_Learning)]

* **An overview of mixing augmentation methods and augmentation strategies**

*Dominik Lewy and Jacek Mańdziuk*

Artificial Intelligence Review'2022 [[Paper](https://link.springer.com/article/10.1007/s10462-022-10227-z)]

* **Image Data Augmentation for Deep Learning: A Survey**

*Suorong Yang, Weikang Xiao, Mengcheng Zhang, Suhan Guo, Jian Zhao, Furao Shen*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2204.08610)]

* **A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability**

*Chengtai Cao, Fan Zhou, Yurou Dai, Jianping Wang*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2212.10888)]
[[Code](https://github.com/ChengtaiCao/Awesome-Mix)]

* **A Survey of Automated Data Augmentation for Image Classification: Learning to Compose, Mix, and Generate**

*Tsz-Him Cheung, Dit-Yan Yeung*

TNNLS'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10158722)]

* **Survey: Image Mixing and Deleting for Data Augmentation**

*Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian*

Engineering Applications of Artificial Intelligence'2024 [[Paper](https://arxiv.org/abs/2106.07085)]

## Benchmark

* **OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification**

*Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li*

ArXiv'2022 [[Paper](https://arxiv.org/abs/2209.04851)]
[[Code](https://github.com/Westlake-AI/openmixup)]

(back to top)

## Contribution

Feel free to send [pull requests](https://github.com/Westlake-AI/openmixup/pulls) to add more links with the following Markdown format. Note that the abbreviation, the code link, and the figure link are optional attributes.

```markdown
* **TITLE**

*AUTHOR*

PUBLISH'YEAR [[Paper](link)] [[Code](link)]

ABBREVIATION Framework



```

Current contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zicheng Liu ([@pone7](https://github.com/pone7)), and Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)). We thank all contributors for `Awesome-Mixup`!

(back to top)

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Acknowledgement

This repository is built using the [OpenMixup](https://github.com/Westlake-AI/openmixup) library and [Awesome README](https://github.com/matiassingers/awesome-readme) repository.

## Related Project

- [OpenMixup](https://github.com/Westlake-AI/openmixup): CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
- [Awesome-Mix](https://github.com/ChengtaiCao/Awesome-Mix): An awesome list of papers for `A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability`, categorized according to the survey's proposed taxonomy.
- [survery-image-mixing-and-deleting-for-data-augmentation](https://github.com/humza909/survery-image-mixing-and-deleting-for-data-augmentation): An awesome list of papers for `Survey: Image Mixing and Deleting for Data Augmentation`.
- [awesome-mixup](https://github.com/demoleiwang/awesome-mixup): A collection of awesome papers about mixup.
- [awesome-mixed-sample-data-augmentation](https://github.com/JasonZhang156/awesome-mixed-sample-data-augmentation): A collection of awesome things about mixed sample data augmentation.
- [data-augmentation-review](https://github.com/AgaMiko/data-augmentation-review): List of useful data augmentation resources.