Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-adversarial-examples-dl
A curated list of awesome resources for adversarial examples in deep learning
https://github.com/chbrian/awesome-adversarial-examples-dl
Last synced: 5 days ago
Adversarial Examples for Machine Learning
- Pattern recognition systems under attack
- Multiple classifier systems for robust classifier design in adversarial environments - Biggio, Battista, Giorgio Fumera, and Fabio Roli. International Journal of Machine Learning and Cybernetics 1.1-4 (2010): 27-41.
- Evasion Attacks against Machine Learning at Test Time
- The security of machine learning - Barreno, Marco, et al. Machine Learning 81.2 (2010): 121-148.
- Adversarial classification
- Adversarial learning
- Towards the science of security and privacy in machine learning
- Can machine learning be secure?
Defenses for Adversarial Examples
Network Verification
- Safety verification of deep neural networks (an interval-bound sketch of the basic idea follows this list)
- Reluplex: An efficient SMT solver for verifying deep neural networks
- Towards proving the adversarial robustness of deep neural networks
- Deepsafe: A data-driven approach for checking adversarial robustness in neural networks
- DeepXplore: Automated Whitebox Testing of Deep Learning Systems
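The tools above differ widely in precision and cost. As a concrete, hedged illustration of what verification means here, the sketch below certifies a small ReLU network over an L-infinity ball using interval bound propagation, the simplest sound technique; the two-layer architecture and all numbers are illustrative assumptions, not any listed paper's method.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Sound elementwise bounds on W @ x + b given lo <= x <= hi."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    out_c = W @ center + b
    out_r = np.abs(W) @ radius  # worst-case growth of the radius
    return out_c - out_r, out_c + out_r

def certify_linf_ball(layers, x, eps, label):
    """True if the certified lower bound of the true logit beats the
    certified upper bound of every other logit over the eps-ball."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    others = np.arange(lo.size) != label
    return lo[label] > hi[others].max()

# Tiny random two-layer net, purely illustrative.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(3, 8)), rng.normal(size=3))]
x = rng.normal(size=4)
print(certify_linf_ball(layers, x, eps=0.01, label=0))  # True means certified
```

Interval bounds are loose compared to SMT-based tools like Reluplex, but they are cheap and sound: a True answer is a real robustness certificate, while False is merely inconclusive.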
Adversarial (Re)Training
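A minimal sketch of the generic recipe this category covers, in the spirit of Goodfellow et al.'s "Explaining and harnessing adversarial examples": mix clean and FGSM-perturbed examples in every minibatch so the model trains against its own current attacks. The PyTorch model, data loader, and hyperparameters here are placeholder assumptions, not any specific paper's setup.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step L-inf perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, opt, eps=0.03, adv_weight=0.5):
    """Mix clean and adversarial losses, re-crafting the attack
    against the current weights on every batch."""
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)  # attack the model as it is now
        opt.zero_grad()                 # clear grads left by the attack
        loss = ((1 - adv_weight) * F.cross_entropy(model(x), y)
                + adv_weight * F.cross_entropy(model(x_adv), y))
        loss.backward()
        opt.step()
```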
Adversarial Detecting
- Detecting Adversarial Samples from Artifacts (a density-based sketch follows this list)
- Adversarial and Clean Data Are Not Twins - Gong, Zhitao, Wenlu Wang, and Wei-Shinn Ku. arXiv preprint arXiv:1704.04960 (2017).
- Safetynet: Detecting and rejecting adversarial examples robustly
- On the (statistical) detection of adversarial examples
- On detecting adversarial perturbations
- Early Methods for Detecting Adversarial Images
- Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers
- Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight - Lin, Yen-Chen, et al. arXiv preprint arXiv:1710.00814 (2017).
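A hedged sketch of the density-based idea in "Detecting Adversarial Samples from Artifacts": fit a kernel density model on clean examples' deep features and flag low-density inputs as suspect. The feature matrices and threshold choice below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_density(clean_feats, bandwidth=1.0):
    """Fit a KDE on features of known-clean inputs (e.g., the
    classifier's last hidden layer activations)."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(clean_feats)

def looks_adversarial(kde, feats, threshold):
    """Flag inputs whose log-density falls below a threshold chosen
    on held-out clean data (e.g., its 1st percentile)."""
    return kde.score_samples(feats) < threshold

# Illustrative numbers only: clean features cluster, a far-away point is flagged.
rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 32))
kde = fit_density(clean)
threshold = np.percentile(kde.score_samples(clean), 1)
print(looks_adversarial(kde, clean[:3], threshold))         # mostly False
print(looks_adversarial(kde, clean[:1] + 10.0, threshold))  # True
```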
Input Reconstruction
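A minimal sketch of the reconstruction idea behind this category, assuming a MagNet-style setup: route each input through an autoencoder trained on clean data, so off-manifold perturbations are partly projected away before classification. The flat input dimension and the pre-trained classifier are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ReconstructionDefense(nn.Module):
    """Classify the reconstruction of an input rather than the input."""
    def __init__(self, classifier, dim=784, hidden=128):
        super().__init__()
        self.classifier = classifier          # assumed pre-trained
        self.autoencoder = nn.Sequential(     # train on clean data only
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.autoencoder(x))

# usage sketch: defended = ReconstructionDefense(pretrained_clf)
#               logits = defended(flattened_images)
```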
Classifier Robustifying
Others
Network Distillation
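This category presumably refers to defensive distillation (Papernot et al.): train a student network on a teacher's temperature-softened outputs to smooth the decision surface. A minimal sketch under assumed PyTorch models, optimizer, and temperature:

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, opt, T=20.0):
    """One student update on the teacher's temperature-T soft labels."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    opt.zero_grad()
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    loss.backward()
    opt.step()
    return loss.item()
```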
Applications for Adversarial Examples
Malware Detection
- Adversarial examples for malware detection
- DeepDGA: Adversarially-Tuned Domain Generation and Detection
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
- Evading Machine Learning Malware Detection
- Automatically evading classifiers
Object Detection
Reinforcement Learning
Generative Modelling
Semantic Segmentation
- Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition
- Adversarial Examples for Semantic Image Segmentation
- Universal Adversarial Perturbations Against Semantic Image Segmentation
- Semantic Image Synthesis via Adversarial Learning
- Adversarial Examples for Semantic Segmentation and Object Detection
Scene Text Recognition
Reading Comprehension
Speech Recognition
Approaches for Generating Adversarial Examples in Deep Learning
- Intriguing properties of neural networks
- Explaining and harnessing adversarial examples
- Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
- Adversarial examples in the physical world (its iterative attack is sketched after this list)
- Adversarial diversity and hard positive generation
- Adversarial manipulation of deep representations
- Deepfool: a simple and accurate method to fool deep neural networks - Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
- Universal adversarial perturbations - Moosavi-Dezfooli, Seyed-Mohsen, et al. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
- Towards evaluating the robustness of neural networks
- Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples
- Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models - Chen, Pin-Yu, et al. 10th ACM Workshop on Artificial Intelligence and Security (AISEC) with the 24th ACM Conference on Computer and Communications Security (CCS). 2017.
- Ground-Truth Adversarial Examples
- Generating Natural Adversarial Examples
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- Adversarial Attacks and Defences Competition
- The limitations of deep learning in adversarial settings
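As a concrete example of the attacks above, here is a minimal sketch of the basic iterative method described in "Adversarial examples in the physical world": repeat small fast-gradient-sign steps and clip the result back into the epsilon L-infinity ball. The PyTorch model and step sizes are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Repeat small FGSM steps; keep the total perturbation inside
    the eps L-inf ball around x and inside the valid pixel range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Note: this also accumulates parameter grads; zero them
        # before any training step that follows.
        F.cross_entropy(model(x_adv), y).backward()
        step = alpha * x_adv.grad.sign()
        x_adv = torch.max(torch.min(x_adv + step, x + eps), x - eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```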
Transferability of Adversarial Examples
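A minimal sketch of the standard transferability experiment this category covers: craft adversarial examples against an accessible substitute model, then measure how often they also fool a held-out target model. Both models and the one-step attack here are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def transfer_rate(substitute, target, x, y, eps=0.03):
    """Fraction of FGSM examples crafted on `substitute` that also
    flip `target`'s prediction away from the true label."""
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(substitute(x_req), y).backward()
    x_adv = (x + eps * x_req.grad.sign()).clamp(0, 1)
    with torch.no_grad():
        fooled = target(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```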
Analysis of Adversarial Examples
- Fundamental limits on adversarial robustness
- Exploring the space of adversarial images
- Measuring neural net robustness with constraints
- Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
- Adversarially Robust Generalization Requires More Data
- Adversarial vulnerability for any classifier
- Adversarial Spheres
- A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples
Sub Categories
- Network Verification (51)
- Malware Detection (49)
- Speech Recognition (16)
- Adversarial Detecting (8)
- Semantic Segmentation (5)
- Adversarial (Re)Training (5)
- Input Reconstruction (3)
- Network Distillation (2)
- Reading Comprehension (2)
- Reinforcement Learning (2)
- Generative Modelling (2)
- Classifier Robustifying (2)
- Others (2)
- Scene Text Recognition (1)
- Object Detection (1)