Must-read papers on Adversarial training for neural networks!
- Host: GitHub
- URL: https://github.com/nishiwen1214/at_papers
- Owner: nishiwen1214
- Created: 2021-10-08T17:24:18.000Z (about 4 years ago)
- Default Branch: main
- Last Pushed: 2021-10-16T12:51:15.000Z (about 4 years ago)
- Last Synced: 2025-01-20T16:27:05.266Z (9 months ago)
- Topics: adversarial-training, generalization, nlp, robustness
- Size: 93.8 KB
- Stars: 12
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Adversarial training papers

Must-read papers on Adversarial training for neural network models. The paper list is maintained by [Shiwen Ni](https://github.com/nishiwen1214/).
## Contents
- [Adversarial training papers](#adversarial-training-papers)
- [Contents](#contents)
- [Introduction](#introduction)
- [Papers](#papers)
- [Recommended Papers](#recommended-papers)
  - [General Papers](#general-papers)

## Introduction
This is a paper list about **Adversarial training** for neural network models. Note that the [recommended papers](#recommended-papers) are those that I have read and found to be good.
⭐️ This list is constantly being updated!
## Papers
### Recommended Papers
1. **Explaining and Harnessing Adversarial Examples**, ICLR 2015.   
*Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy* [[pdf](https://arxiv.org/pdf/1412.6572.pdf)], [[code](https://github.com/facebookarchive/adversarial_image_defenses)], **(FGSM)**.
2. **Adversarial Training Methods for Semi-Supervised Text Classification**, ICLR 2017.  
*Takeru Miyato, Andrew M. Dai, Ian Goodfellow* [[pdf](https://arxiv.org/pdf/1605.07725.pdf)], [[code](https://github.com/tensorflow/models)], **(FGM)** (see the sketch after this list).
3. **Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples**, ICML 2018.  
*Anish Athalye, Nicholas Carlini, David Wagner* [[pdf](https://arxiv.org/pdf/1802.00420.pdf)], [[code](https://github.com/anishathalye/obfuscated-gradients)], **(BPDA)**.
4. **Adversarial Training for Free!**, NeurIPS 2019.  
*Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein* [[pdf](https://arxiv.org/pdf/1904.12843.pdf)], [[code](https://github.com/mahyarnajibi/FreeAdversarialTraining)], **(FreeAT)**.
5. **You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle**, NeurIPS 2019.  
*Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong* [[pdf](https://arxiv.org/pdf/1905.00877.pdf)], [[code](https://github.com/a1600012888/YOPO-You-Only-Propagate-Once)], **(YOPO)**.
6. **FreeLB: Enhanced Adversarial Training for Natural Language Understanding**, ICLR 2020.  
*Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu* [[pdf](https://arxiv.org/pdf/1909.11764.pdf)], [[code](https://github.com/zhuchen03/FreeLB)], **(FreeLB)**.
7. **DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks**, ArXiv 2021.   
*Shiwen Ni, Jiawen Li, Hung-Yu Kao* [[pdf](https://arxiv.org/pdf/2108.12805.pdf)], [[code](https://github.com/nishiwen1214/dropattack)], **(DropAttack)**.
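
The progression above starts from single-step gradient perturbations: FGSM perturbs image pixels along the sign of the loss gradient, and FGM (entry 2) adapts the idea to text by perturbing word embeddings along the L2-normalized gradient. As a concrete illustration of the FGM recipe, here is a minimal training step in PyTorch; the tiny classifier, fake batch, and `epsilon` are illustrative placeholders of mine, not code from the paper:

```python
# Minimal FGM-style adversarial training sketch (after Miyato et al., 2017).
# The model, data, and epsilon below are illustrative only.
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, token_ids=None, embeds=None):
        # Accept precomputed embeddings so a perturbation can be injected.
        if embeds is None:
            embeds = self.embedding(token_ids)
        _, h = self.encoder(embeds)
        return self.head(h[-1]), embeds

def fgm_step(model, optimizer, token_ids, labels, epsilon=1.0):
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()

    # 1) Clean pass; retain the embedding gradient (embeds is a non-leaf tensor).
    logits, embeds = model(token_ids=token_ids)
    embeds.retain_grad()
    clean_loss = criterion(logits, labels)
    clean_loss.backward()

    # 2) FGM perturbation: epsilon * g / ||g||_2, normalized per example.
    g = embeds.grad.detach()
    r_adv = epsilon * g / (g.norm(dim=(1, 2), keepdim=True) + 1e-12)

    # 3) Adversarial pass; its gradient accumulates onto the clean gradient,
    #    so the update minimizes clean loss + adversarial loss.
    adv_logits, _ = model(embeds=embeds.detach() + r_adv)
    adv_loss = criterion(adv_logits, labels)
    adv_loss.backward()

    optimizer.step()
    return clean_loss.item(), adv_loss.item()

model = TinyTextClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
token_ids = torch.randint(0, 1000, (8, 16))  # fake batch: 8 texts, 16 tokens each
labels = torch.randint(0, 2, (8,))
print(fgm_step(model, optimizer, token_ids, labels))
```

FreeAT, YOPO, and FreeLB can be read as ways to make this clean-plus-adversarial recipe cheaper (by reusing gradients) or stronger (by taking multiple perturbation steps).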
### General Papers
1. [**What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors**](https://arxiv.org/pdf/2102.13624.pdf)  
*Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein*, 2021.
2. [**Attacks Which Do Not Kill Training Make Adversarial Learning Stronger**](https://arxiv.org/pdf/2002.11242.pdf)  
*Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli*, ICML 2020.
3. [**On the Convergence and Robustness of Adversarial Training**](http://proceedings.mlr.press/v97/wang19i/wang19i.pdf)  
*Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, Quanquan Gu*, ICML 2019.
4. [**Curriculum Adversarial Training**](https://arxiv.org/pdf/1805.04807.pdf)  
*Qi-Zhi Cai, Min Du, Chang Liu, Dawn Song*, IJCAI 2018.
5. [**Rademacher Complexity for Adversarially Robust Generalization**](http://proceedings.mlr.press/v97/yin19b/yin19b.pdf)  
*Dong Yin, Kannan Ramchandran, Peter Bartlett*, ICML 2019.
6. [**Deep Defense: Training DNNs with Improved Adversarial Robustness**](https://arxiv.org/pdf/1803.00404.pdf)  
*Ziang Yan, Yiwen Guo, Changshui Zhang*, NeurIPS 2018.
7. [**Single-Step Adversarial Training With Dropout Scheduling**](https://ieeexplore.ieee.org/abstract/document/9157154/authors#authors)
*B. S. Vivek, R. Venkatesh Babu*, CVPR 2020.
8. [**Adversarial Training and Provable Defenses: Bridging the Gap**](https://openreview.net/pdf?id=SJxSDxrKDr)  
*Mislav Balunovic, Martin Vechev*, ICLR 2020.
9. [**Adversarial Examples: Attacks and Defenses for Deep Learning**](https://ieeexplore.ieee.org/abstract/document/8611298)
*Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li*, TNNLS 2019.
10. [**Reliably Fast Adversarial Training via Latent Adversarial Perturbation**](https://arxiv.org/pdf/2104.01575.pdf)  
*Geon Yeong Park, Sang Wan Lee*, ICLR 2021.
11. [**Rumor Detection on Social Media with Hierarchical Adversarial Training**](https://arxiv.org/pdf/2110.00425.pdf)  
*Shiwen Ni, Jiawen Li, Hung-Yu Kao*, 2022.
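
Most of the papers in this section analyze, accelerate, or defend variants of the standard min-max adversarial training objective: an inner loop searches for a worst-case perturbation within a norm ball (typically via projected gradient descent, PGD, as popularized by Madry et al., 2018), and an outer loop trains on the perturbed inputs. A minimal PGD-style sketch in PyTorch follows; the linear model, fake batch, and hyperparameters are my own illustrative choices, not taken from any listed paper:

```python
# Minimal PGD-style adversarial training sketch (the min-max formulation
# that several of the papers above study or speed up). Illustrative only.
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=10):
    """Inner maximization: projected gradient ascent inside an
    L-infinity ball of radius epsilon around the clean input x."""
    criterion = nn.CrossEntropyLoss()
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = criterion(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                # ascent step
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project back to ball
        x_adv = x_adv.clamp(0.0, 1.0)                      # keep valid pixel range
    return x_adv.detach()

# Outer minimization: one training step on the adversarial batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.rand(8, 1, 28, 28)            # fake MNIST-like images in [0, 1]
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(pgd_attack(model, x, y)), y)
loss.backward()
optimizer.step()
print(f"adversarial loss: {loss.item():.4f}")
```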