awesome-AML
A curated list of awesome adversarial attack and defense papers
https://github.com/wangjksjtu/awesome-AML
Attack
White-Box (Gradient-based)
- The Limitations of Deep Learning in Adversarial Settings
- Towards Evaluating the Robustness of Neural Networks
- Adversarial examples in the physical world
- Intriguing properties of neural networks - Szegedy et al., 2013. (L-BFGS)
- DeepFool: a simple and accurate method to fool deep neural networks - Moosavi-Dezfooli et al., 2015. (DeepFool)
- Boosting Adversarial Attacks with Momentum
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
- Generating Adversarial Examples with Adversarial Networks
- Explaining and Harnessing Adversarial Examples
- Towards Deep Learning Models Resistant to Adversarial Attacks
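The entries above range from one-step gradient attacks to iterated, optimization-based ones. As a concrete reference point, here is a minimal PyTorch sketch of FGSM from "Explaining and Harnessing Adversarial Examples"; the model, inputs, and epsilon are placeholders, not code from any of the papers:

```python
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """One-step L-infinity attack (Goodfellow et al., 2014): perturb x
    by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Every pixel moves by exactly +/- epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

PGD ("Towards Deep Learning Models Resistant to Adversarial Attacks") iterates this step with a projection back into the epsilon-ball; a sketch appears in the Defense section below.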
Black-Box (Gradient-free)
- Practical Black-Box Attacks against Machine Learning
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
- Delving into Transferable Adversarial Examples and Black-box Attacks
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
- Practical Black-box Attacks on Deep Neural Networks using Efficient Query Mechanisms
- Prior convictions: Black-box adversarial attacks with bandits and priors - Ilyas et al., 2018. (Bandits-TD)
- Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
- AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
- GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
- Simple Black-box Adversarial Attacks
- There are No Bit Parts for Sign Bits in Black-Box Attacks - Al-Dujaili et al., 2019. (SignHunter)
- Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization
- Improving Black-box Adversarial Attacks with a Transfer-based Prior - Cheng et al., 2019. (P-RGF)
- NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
- BayesOpt Adversarial Attack
- Black-box Adversarial Attacks with Bayesian Optimization
- Query-efficient Meta Attack to Deep Neural Networks
- Projection & Probability-Driven Black-Box Attack
- Square Attack: a query-efficient black-box adversarial attack via random search
- Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks
- Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
- Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach
- Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
- Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks
- HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
- QEBA: Query-Efficient Boundary-Based Blackbox Attack
- Black-box Adversarial Attacks with Limited Queries and Information - Ilyas et al., 2018.
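Many of the score-based attacks above (ZOO, AutoZOOM, NATTACK, the bandits method) share one skeleton: estimate a gradient from loss queries alone, then take a PGD-style step. A hedged sketch of an antithetic-sampling estimator in PyTorch, where `loss_fn` is an assumed query oracle returning a scalar loss:

```python
import torch

def nes_gradient(loss_fn, x, sigma=1e-3, n_samples=50):
    """Estimate the gradient of loss_fn at x from function values only,
    via antithetic Gaussian sampling (the NES-style estimator shared by
    several of the attacks listed above)."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        # Two queries per sample: the loss along +u and along -u.
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)
```

The attacker then plugs this estimate into an ordinary gradient-sign step, touching the model only through `loss_fn` queries. Decision-based attacks (Boundary Attack, HopSkipJump, QEBA) go further and work from the predicted label alone.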
Robust physical attack
- Robust Physical-World Attacks on Deep Learning Models
- ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
- Physical Adversarial Examples for Object Detectors
- SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
- Adversarial Objects Against LiDAR-Based Autonomous Driving Systems
Attack across domains
- Universal adversarial perturbations - Moosavi-Dezfooli et al., 2016
- CAAD 2018: Iterative Ensemble Adversarial Attack (PGD, CAAD 2018 5th)
- Synthesizing Robust Adversarial Examples
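"Synthesizing Robust Adversarial Examples" underlies much of the physical-attack work above via Expectation Over Transformation (EOT): optimize the perturbation against a distribution of transformations rather than a single rendering of the image. A sketch, where `sample_transform` is an assumed callable returning a random differentiable transform:

```python
import torch.nn.functional as F

def eot_gradient(model, x_adv, y, sample_transform, n_samples=30):
    """Expectation Over Transformation (Athalye et al.): average the
    loss gradient over random transformations so the perturbation
    survives rotation, scaling, lighting changes, printing, etc."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    total = 0.0
    for _ in range(n_samples):
        t = sample_transform()  # e.g., a random rotation + rescale
        total = total + F.cross_entropy(model(t(x_adv)), y)
    (total / n_samples).backward()
    return x_adv.grad
```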
Defense
Modifying the adversarial examples
- A study of the effect of JPG compression on adversarial images
- Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
- Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression
- Countering Adversarial Images using Input Transformations
- Defending against Adversarial Images using Basis Functions Transformations
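All of these defenses preprocess the input before classification; JPEG re-compression is the simplest instance. A minimal sketch with Pillow (the quality setting is illustrative, not a value from the papers):

```python
import io
from PIL import Image

def jpeg_defense(image: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the input through lossy JPEG to wash out small,
    high-frequency adversarial perturbations before classification."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).copy()
```

Note that "Obfuscated Gradients Give a False Sense of Security" (listed below) shows such transformations can be circumvented by attacks that approximate or bypass the non-differentiable step.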
Modifying the training schemes or models
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
- Extending Defensive Distillation
- Mitigating Adversarial Effects Through Randomization
- Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
- Towards Robust Neural Networks via Random Self-ensemble
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- Adversarial Logit Pairing
- Curriculum Adversarial Training
- Improved robustness to adversarial examples using Lipschitz regularization of the loss
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
- Feature Denoising for Improving Adversarial Robustness
- Theoretically Principled Trade-off between Robustness and Accuracy
- Defensive Quantization: When Efficiency Meets Robustness
- Unlabeled Data Improves Adversarial Robustness
- Are Labels Required for Improving Adversarial Robustness?
- Adversarially Robust Generalization Just Requires More Unlabeled Data
- Ensemble Adversarial Training: Attacks and Defenses
- Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense
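Most of the training-scheme defenses above elaborate on the min-max formulation of "Towards Deep Learning Models Resistant to Adversarial Attacks": an inner PGD maximization crafts worst-case examples, an outer minimization trains on them. A minimal PyTorch sketch (model, optimizer, and hyperparameters are placeholders):

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: iterated gradient-sign steps, projected back
    into the L-infinity eps-ball around the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        # Project onto the eps-ball, then onto the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: an ordinary SGD step on the loss of the
    worst-case examples instead of the clean ones."""
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```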
Using other auxiliary tools
- MagNet: a Two-Pronged Defense against Adversarial Examples
- Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
- Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
- ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
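These defenses attach an auxiliary module to an otherwise unmodified classifier. MagNet's detector, for example, flags inputs whose reconstruction error under an autoencoder trained on clean data is unusually high. A hedged sketch, assuming the autoencoder and threshold have been trained and calibrated on clean data:

```python
import torch

def reconstruction_error(autoencoder, x):
    """Per-example L1 reconstruction error of an autoencoder trained
    only on clean data (MagNet-style detection)."""
    with torch.no_grad():
        return (autoencoder(x) - x).abs().flatten(1).mean(dim=1)

def magnet_detect(autoencoder, x, threshold):
    """Flag inputs whose error exceeds a threshold calibrated so that
    only a small fraction of clean validation data is rejected."""
    return reconstruction_error(autoencoder, x) > threshold
```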