# Awesome Adversarial Examples for Deep Learning
A curated list of awesome resources for adversarial examples in deep learning.

## Adversarial Examples for Machine Learning
- [The security of machine learning](https://link.springer.com/article/10.1007%2Fs10994-010-5188-5?LI=true) Barreno, Marco, et al. Machine Learning 81.2 (2010): 121-148.
- [Adversarial classification](https://dl.acm.org/citation.cfm?id=1014066) Dalvi, Nilesh, et al. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2004.
- [Adversarial learning](https://dl.acm.org/citation.cfm?id=1081950) Lowd, Daniel, and Christopher Meek. Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. ACM, 2005.
- [Multiple classifier systems for robust classifier design in adversarial environments](https://link.springer.com/article/10.1007/s13042-010-0007-7) Biggio, Battista, Giorgio Fumera, and Fabio Roli. International Journal of Machine Learning and Cybernetics 1.1-4 (2010): 27-41.
- [Evasion Attacks against Machine Learning at Test Time](https://link.springer.com/chapter/10.1007/978-3-642-40994-3_25) Biggio, Battista, et al. Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg, 2013.
- [Can machine learning be secure?](https://dl.acm.org/citation.cfm?id=1128824) Barreno, Marco, et al. Proceedings of the 2006 ACM Symposium on Information, computer and communications security. ACM, 2006.
- [Towards the science of security and privacy in machine learning](https://arxiv.org/abs/1611.03814) Papernot, Nicolas, et al. arXiv preprint arXiv:1611.03814 (2016).
- [Pattern recognition systems under attack](https://link.springer.com/chapter/10.1007/978-3-642-41822-8_1) Roli, Fabio, Battista Biggio, and Giorgio Fumera. Iberoamerican Congress on Pattern Recognition. Springer, Berlin, Heidelberg, 2013.

## Approaches for Generating Adversarial Examples in Deep Learning
- [Intriguing properties of neural networks](https://arxiv.org/pdf/1312.6199.pdf) Szegedy, Christian, et al. arXiv preprint arXiv:1312.6199 (2013).
- [Explaining and harnessing adversarial examples](https://arxiv.org/abs/1412.6572) Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. arXiv preprint arXiv:1412.6572 (2014).
- [Deep neural networks are easily fooled: High confidence predictions for unrecognizable images](https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.html) Nguyen, Anh, Jason Yosinski, and Jeff Clune. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
- [Adversarial examples in the physical world](https://arxiv.org/abs/1607.02533) Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. arXiv preprint arXiv:1607.02533 (2016).
- [Adversarial diversity and hard positive generation](https://www.cv-foundation.org/openaccess/content_cvpr_2016_workshops/w12/html/Rozsa_Adversarial_Diversity_and_CVPR_2016_paper.html) Rozsa, Andras, Ethan M. Rudd, and Terrance E. Boult. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2016.
- [The limitations of deep learning in adversarial settings](http://ieeexplore.ieee.org/abstract/document/7467366/) Papernot, Nicolas, et al. Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 2016.
- [Adversarial manipulation of deep representations](https://arxiv.org/abs/1511.05122) Sabour, Sara, et al. ICLR. 2016.
- [Deepfool: a simple and accurate method to fool deep neural networks](https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Moosavi-Dezfooli_DeepFool_A_Simple_CVPR_2016_paper.html) Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
- [Universal adversarial perturbations](https://arxiv.org/abs/1610.08401) Moosavi-Dezfooli, Seyed-Mohsen, et al. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
- [Towards evaluating the robustness of neural networks](https://arxiv.org/abs/1608.04644) Carlini, Nicholas, and David Wagner. Security and Privacy (S&P). 2017.
- [Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples](https://arxiv.org/abs/1708.05207) Hayes, Jamie, and George Danezis. arXiv preprint arXiv:1708.05207 (2017).
- [Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models](https://arxiv.org/abs/1708.03999) Chen, Pin-Yu, et al. 10th ACM Workshop on Artificial Intelligence and Security (AISEC) with the 24th ACM Conference on Computer and Communications Security (CCS). 2017.
- [Ground-Truth Adversarial Examples](https://arxiv.org/abs/1709.10207) Carlini, Nicholas, et al. arXiv preprint arXiv:1709.10207. 2017.
- [Generating Natural Adversarial Examples](https://arxiv.org/abs/1710.11342) Zhao, Zhengli, Dheeru Dua, and Sameer Singh. arXiv preprint arXiv:1710.11342. 2017.
- [Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples](https://arxiv.org/abs/1802.00420) Anish Athalye, Nicholas Carlini, David Wagner. arXiv preprint arXiv:1802.00420. 2018.
- [Adversarial Attacks and Defences Competition](https://arxiv.org/abs/1804.00097) Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, Sangxia Huang, Yao Zhao, Yuzhe Zhao, Zhonglin Han, Junjiajia Long, Yerkebulan Berdibekov, Takuya Akiba, Seiya Tokui, Motoki Abe. arXiv preprint arXiv:1804.00097. 2018.
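
Several of the gradient-based attacks above iterate on the fast gradient sign method (FGSM) from "Explaining and harnessing adversarial examples". A minimal PyTorch sketch, assuming a hypothetical pretrained classifier `model` and a labelled batch `(x, y)` with pixels in `[0, 1]`:

~~~~python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Step in the direction that increases the loss, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
~~~~

Iterating this step with a smaller step size and re-clipping after each iteration gives the basic iterative attack used in "Adversarial examples in the physical world".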

## Defenses for Adversarial Examples
### Network Distillation
- [Distillation as a defense to adversarial perturbations against deep neural networks](http://ieeexplore.ieee.org/abstract/document/7546524/) Papernot, Nicolas, et al. Security and Privacy (SP), 2016 IEEE Symposium on. IEEE, 2016.
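
Defensive distillation trains a second network on the temperature-softened class probabilities of a teacher trained at the same temperature T, then predicts at temperature 1. A minimal sketch of the distillation loss, where `student_logits` and `teacher_logits` are hypothetical logit tensors (T = 20 is only illustrative; the paper evaluates a range of temperatures):

~~~~python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy between the student and the teacher's temperature-softened probabilities."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)   # soft labels from the teacher
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
~~~~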

### Adversarial (Re)Training
- [Learning with a strong adversary](https://arxiv.org/abs/1511.03034) Huang, Ruitong, et al. arXiv preprint arXiv:1511.03034 (2015).
- [Adversarial machine learning at scale](https://arxiv.org/abs/1611.01236) Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. ICLR. 2017.
- [Ensemble Adversarial Training: Attacks and Defenses](https://arxiv.org/abs/1705.07204) Tramèr, Florian, et al. arXiv preprint arXiv:1705.07204 (2017).
- [Adversarial training for relation extraction](http://www.aclweb.org/anthology/D17-1187) Wu, Yi, David Bamman, and Stuart Russell. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017.
- [Adversarial Logit Pairing](https://arxiv.org/abs/1803.06373) Harini Kannan, Alexey Kurakin, Ian Goodfellow. arXiv preprint arXiv:1803.06373 (2018).
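
The (re)training defenses above share a min-max structure: generate adversarial examples on the fly and minimize the training loss on them. A minimal sketch of one FGSM-based adversarial training step, assuming a hypothetical `model`, `optimizer`, and labelled batch `(x, y)` with pixels in `[0, 1]` (stronger inner attacks and mixing in clean examples are common variants):

~~~~python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """Inner maximization with single-step FGSM, then a normal training step on the perturbed batch."""
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # outer minimization on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()
~~~~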

### Adversarial Detecting
- [Detecting Adversarial Samples from Artifacts](https://arxiv.org/abs/1703.00410) Feinman, Reuben, et al. arXiv preprint arXiv:1703.00410 (2017).
- [Adversarial and Clean Data Are Not Twins](https://arxiv.org/abs/1704.04960) Gong, Zhitao, Wenlu Wang, and Wei-Shinn Ku. arXiv preprint arXiv:1704.04960 (2017).
- [Safetynet: Detecting and rejecting adversarial examples robustly](https://arxiv.org/abs/1704.00103) Lu, Jiajun, Theerasit Issaranon, and David Forsyth. ICCV (2017).
- [On the (statistical) detection of adversarial examples](https://arxiv.org/abs/1702.06280) Grosse, Kathrin, et al. arXiv preprint arXiv:1702.06280 (2017).
- [On detecting adversarial perturbations](https://arxiv.org/abs/1702.04267) Metzen, Jan Hendrik, et al. ICLR Poster. 2017.
- [Early Methods for Detecting Adversarial Images](https://openreview.net/forum?id=B1dexpDug&noteId=B1dexpDug) Hendrycks, Dan, and Kevin Gimpel. ICLR Workshop (2017).
- [Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers](https://arxiv.org/abs/1704.02654) Bhagoji, Arjun Nitin, Daniel Cullina, and Prateek Mittal. arXiv preprint arXiv:1704.02654 (2017).
- [Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight](https://arxiv.org/abs/1710.00814) Lin, Yen-Chen, et al. arXiv preprint arXiv:1710.00814 (2017).
- [PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples](https://arxiv.org/abs/1710.10766) Song, Yang, et al. arXiv preprint arXiv:1710.10766 (2017).
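
These detectors typically fit a statistic on clean data and threshold it at inference time. As a generic illustration only (not the method of any specific paper above), a detector that flags inputs whose maximum softmax probability falls below a threshold:

~~~~python
import torch
import torch.nn.functional as F

def flag_low_confidence(model, x, threshold=0.5):
    """Return a boolean mask marking inputs whose top softmax probability is below the threshold."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values < threshold  # True = treat the input as suspicious
~~~~

Carlini and Wagner's "Adversarial Examples Are Not Easily Detected" (listed under Others below) shows that detectors of this kind can be bypassed by an adaptive attacker.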

### Input Reconstruction
- [PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples](https://arxiv.org/abs/1710.10766) Song, Yang, et al. arXiv preprint arXiv:1710.10766 (2017).
- [MagNet: a Two-Pronged Defense against Adversarial Examples](https://arxiv.org/abs/1705.09064) Meng, Dongyu, and Hao Chen. CCS (2017).
- [Towards deep neural network architectures robust to adversarial examples](https://arxiv.org/abs/1412.5068) Gu, Shixiang, and Luca Rigazio. arXiv preprint arXiv:1412.5068 (2014).
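
Input-reconstruction defenses pass the input through a generative model or autoencoder before classification and can reuse the reconstruction error as a detection signal. A rough sketch in the spirit of MagNet, assuming a hypothetical `autoencoder` trained on clean data and an arbitrary error threshold:

~~~~python
import torch

def reform_and_detect(autoencoder, classifier, x, error_threshold=0.05):
    """Classify the reconstructed ("reformed") input and flag examples with large reconstruction error."""
    with torch.no_grad():
        x_reformed = autoencoder(x)                               # pull the input toward the clean-data manifold
        error = (x_reformed - x).flatten(1).pow(2).mean(dim=1)    # per-example reconstruction error
        preds = classifier(x_reformed).argmax(dim=1)
    return preds, error > error_threshold
~~~~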

### Classifier Robustifying
- [Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks](https://arxiv.org/abs/1707.02476) Bradshaw, John, Alexander G. de G. Matthews, and Zoubin Ghahramani. arXiv preprint arXiv:1707.02476 (2017).
- [Robustness to Adversarial Examples through an Ensemble of Specialists](https://arxiv.org/abs/1702.06856) Abbasi, Mahdieh, and Christian Gagné. arXiv preprint arXiv:1702.06856 (2017).

### Network Verification
- [Reluplex: An efficient SMT solver for verifying deep neural networks](https://arxiv.org/abs/1702.01135) Katz, Guy, et al. CAV 2017.
- [Safety verification of deep neural networks](https://link.springer.com/chapter/10.1007/978-3-319-63387-9_1) Huang, Xiaowei, et al. International Conference on Computer Aided Verification. Springer, Cham, 2017.
- [Towards proving the adversarial robustness of deep neural networks](https://arxiv.org/abs/1709.02802) Katz, Guy, et al. arXiv preprint arXiv:1709.02802 (2017).
- [Deepsafe: A data-driven approach for checking adversarial robustness in neural networks](https://arxiv.org/abs/1710.00486) Gopinath, Divya, et al. arXiv preprint arXiv:1710.00486 (2017).
- [DeepXplore: Automated Whitebox Testing of Deep Learning Systems](https://arxiv.org/abs/1705.06640) Pei, Kexin, et al. arXiv preprint arXiv:1705.06640 (2017).

### Others
- [Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong](https://arxiv.org/abs/1706.04701) He, Warren, et al. 11th USENIX Workshop on Offensive Technologies (WOOT 17). (2017).
- [Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods](https://arxiv.org/abs/1705.07263) Carlini, Nicholas, and David Wagner. AISec. 2017.

## Applications for Adversarial Examples
### Reinforcement Learning
- [Adversarial attacks on neural network policies](https://arxiv.org/abs/1702.02284) Huang, Sandy, et al. arXiv preprint arXiv:1702.02284 (2017).
- [Delving into adversarial attacks on deep policies](https://arxiv.org/abs/1705.06452) Kos, Jernej, and Dawn Song. ICLR Workshop. 2017.

### Generative Modelling
- [Adversarial examples for generative models](https://arxiv.org/abs/1702.06832) Kos, Jernej, Ian Fischer, and Dawn Song. arXiv preprint arXiv:1702.06832 (2017).
- [Adversarial images for variational autoencoders](https://arxiv.org/abs/1612.00155) Tabacof, Pedro, Julia Tavares, and Eduardo Valle. Workshop on Adversarial Training, NIPS. 2016.

### Semantic Segmentation
- [Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition](https://dl.acm.org/citation.cfm?id=2978392) Sharif, Mahmood, et al. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
- [Adversarial Examples for Semantic Segmentation and Object Detection](https://arxiv.org/abs/1703.08603) Xie, Cihang, et al. arXiv preprint arXiv:1703.08603 (2017).
- [Adversarial Examples for Semantic Image Segmentation](https://arxiv.org/abs/1703.01101) Fischer, Volker, et al. ICLR workshop. 2017.
- [Universal Adversarial Perturbations Against Semantic Image Segmentation](http://openaccess.thecvf.com/content_iccv_2017/html/Metzen_Universal_Adversarial_Perturbations_ICCV_2017_paper.html) Metzen, Jan Hendrik, et al. Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2017.
- [Semantic Image Synthesis via Adversarial Learning](https://arxiv.org/abs/1707.06873) Dong, Hao, et al. ICCV (2017).

### Object Detection
- [Adversarial Examples for Semantic Segmentation and Object Detection](https://arxiv.org/abs/1703.08603) Xie, Cihang, et al. arXiv preprint arXiv:1703.08603 (2017).
- [Physical Adversarial Examples for Object Detectors](https://arxiv.org/abs/1807.07769) Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song. arXiv preprint arXiv:1807.07769 (2018).

### Scene Text Recognition
- [Adaptive Adversarial Attack on Scene Text Recognition](https://arxiv.org/abs/1807.03326) Xiaoyong Yuan, Pan He, Xiaolin Andy Li. arXiv preprint arXiv:1807.03326 (2018).

### Reading Comprehension
- [Adversarial examples for evaluating reading comprehension systems](https://arxiv.org/abs/1707.07328) Jia, Robin, and Percy Liang. EMNLP. 2017.
- [Understanding Neural Networks through Representation Erasure](https://arxiv.org/abs/1612.08220) Li, Jiwei, Will Monroe, and Dan Jurafsky. arXiv preprint arXiv:1612.08220 (2016).

### Malware Detection
- [Adversarial examples for malware detection](https://link.springer.com/chapter/10.1007/978-3-319-66399-9_4) Grosse, Kathrin, et al. European Symposium on Research in Computer Security. Springer, Cham, 2017.
- [Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN](https://arxiv.org/abs/1702.05983) Hu, Weiwei, and Ying Tan. arXiv preprint arXiv:1702.05983 (2017).
- [Evading Machine Learning Malware Detection](https://www.blackhat.com/docs/us-17/thursday/us-17-Anderson-Bot-Vs-Bot-Evading-Machine-Learning-Malware-Detection-wp.pdf) Anderson, Hyrum S., et al. Black Hat. 2017.
- [DeepDGA: Adversarially-Tuned Domain Generation and Detection](https://dl.acm.org/citation.cfm?id=2996767) Anderson, Hyrum S., Jonathan Woodbridge, and Bobby Filar. Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security. ACM, 2016.
- [Automatically evading classifiers](http://www.cs.virginia.edu/yanjun/paperA14/2016-evade_classifier.pdf) Xu, Weilin, Yanjun Qi, and David Evans. Proceedings of the 2016 Network and Distributed Systems Symposium. 2016.

### Speech Recognition
- [Targeted Adversarial Examples for Black Box Audio Systems](https://arxiv.org/abs/1805.07820) Rohan Taori, Amog Kamsetty, Brenton Chu, Nikita Vemuri. arXiv preprint arXiv:1805.07820 (2018).
- [CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition](https://arxiv.org/abs/1801.08535) Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, Carl A. Gunter. USENIX Security. 2018.
- [Audio Adversarial Examples: Targeted Attacks on Speech-to-Text](https://arxiv.org/abs/1801.01944) Nicholas Carlini, David Wagner. Deep Learning and Security Workshop, 2018.

## Transferability of Adversarial Examples
- [Transferability in machine learning: from phenomena to black-box attacks using adversarial samples](https://arxiv.org/abs/1605.07277) Papernot, Nicolas, Patrick McDaniel, and Ian Goodfellow. arXiv preprint arXiv:1605.07277 (2016).
- [Delving into transferable adversarial examples and black-box attacks](https://arxiv.org/abs/1611.02770) Liu, Yanpei, et al. ICLR 2017.
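
Transfer-based black-box attacks craft adversarial examples on a substitute model the attacker can access and replay them against the unseen target. A minimal sketch, assuming two hypothetical pretrained classifiers `substitute` and `target` that share an input format:

~~~~python
import torch
import torch.nn.functional as F

def transfer_success_rate(substitute, target, x, y, epsilon=8 / 255):
    """Craft FGSM examples on the substitute and measure how often they also fool the target."""
    x_sub = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x_sub), y).backward()            # white-box gradient on the substitute
    x_adv = (x_sub + epsilon * x_sub.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        preds = target(x_adv).argmax(dim=1)                     # black-box evaluation on the target
    return (preds != y).float().mean().item()
~~~~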

## Analysis of Adversarial Examples
- [Fundamental limits on adversarial robustness](https://lts4.epfl.ch/files/content/sites/lts4/files/frossard/publications/pdfs/icml2015a.pdf) Fawzi, Alhussein, Omar Fawzi, and Pascal Frossard. Proc. ICML, Workshop on Deep Learning. 2015.
- [Exploring the space of adversarial images](http://ieeexplore.ieee.org/abstract/document/7727230/) Tabacof, Pedro, and Eduardo Valle. Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016.
- [A boundary tilting perspective on the phenomenon of adversarial examples](https://arxiv.org/abs/1608.07690) Tanay, Thomas, and Lewis Griffin. arXiv preprint arXiv:1608.07690 (2016).
- [Measuring neural net robustness with constraints](http://papers.nips.cc/paper/6339-measuring-neural-net-robustness-with-constraints) Bastani, Osbert, et al. Advances in Neural Information Processing Systems. 2016.
- [Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples](https://arxiv.org/abs/1708.05493) Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- [Adversarially Robust Generalization Requires More Data](https://arxiv.org/abs/1804.11285) Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry. arXiv preprint arXiv:1804.11285. 2018.
- [Adversarial vulnerability for any classifier](https://arxiv.org/abs/1802.08686) Alhussein Fawzi, Hamza Fawzi, Omar Fawzi. arXiv preprint arXiv:1802.08686. 2018.
- [Adversarial Spheres](https://arxiv.org/abs/1801.02774) Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S. Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow. ICLR. 2018.

## Tools
- [cleverhans v2.0.0: an adversarial machine learning library](https://arxiv.org/abs/1610.00768) Papernot, Nicolas, et al. arXiv preprint arXiv:1610.00768 (2017).
- [Foolbox: A Python toolbox to benchmark the robustness of machine learning models](https://arxiv.org/abs/1707.04131) Jonas Rauber, Wieland Brendel, Matthias Bethge. arXiv preprint arXiv:1707.04131 (2017). [Documentation](http://foolbox.readthedocs.io) [Code](https://github.com/bethgelab/foolbox)
- [advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch](https://arxiv.org/abs/1902.07623) Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin. arXiv preprint arXiv:1902.07623 (2019). [Code](https://github.com/BorealisAI/advertorch)
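
A quick way to try several of the attacks listed above is through one of these libraries. A short sketch of what I understand to be the Foolbox 3.x PyTorch API (which differs from the v1 interface described in the 2017 paper; the linked documentation is authoritative):

~~~~python
import torchvision
import foolbox as fb

# Wrap a pretrained ImageNet classifier for Foolbox, with preprocessing folded into the model.
model = torchvision.models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("attack success rate:", is_adv.float().mean().item())
~~~~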

## Cite this work
If you find this list useful for academic research, we would appreciate citations:
~~~~
@article{yuan2017adversarial,
title={Adversarial Examples: Attacks and Defenses for Deep Learning},
author={Yuan, Xiaoyong and He, Pan and Zhu, Qile and Li, Xiaolin},
journal={arXiv preprint arXiv:1712.07107},
year={2017}
}
~~~~

We will keep updating this list with recent studies.