awesome-ml-privacy-attacks
An awesome list of papers on privacy attacks against machine learning
https://github.com/stratosphereips/awesome-ml-privacy-attacks
Model extraction
- **Model Extraction Oriented Data Publishing with k-anonymity**
- **Stealing machine learning models via prediction APIs** - ML))
- **Stealing hyperparameters in machine learning**
- **Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data** (Correia-Silva et al., 2018) ([code](https://github.com/jeiks/Stealing_DL_Models))
- **Towards reverse-engineering black-box neural networks.**
- **Knockoff nets: Stealing functionality of black-box models**
- **PRADA: protecting against DNN model stealing attacks** - protecting-against-dnn-model-stealing-attacks))
- **Exploring connections between active learning and model extraction**
- **High Accuracy and High Fidelity Extraction of Neural Networks**
- **Thieves on Sesame Street! Model Extraction of BERT-based APIs** - research/language/tree/master/language/bert_extraction))
- **Cryptanalytic Extraction of Neural Network Models**
- **CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples**
- **ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data**
- **Efficiently Stealing your Machine Learning Models**
- **Extraction of Complex DNN Models: Real Threat or Boogeyman?**
- **Stealing Neural Networks via Timing Side Channels**
- **CSI NN: Reverse Engineering of Neural Network Architectures Through Electromagnetic Side Channel**
- **Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures**
- **How to 0wn NAS in Your Spare Time** - Hong/How-to-0wn-NAS-in-Your-Spare-Time))
- **Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks**
- **Reverse-Engineering Deep ReLU Networks**
- **Hermes Attack: Steal DNN Models with Lossless Inference Accuracy**
- **Model extraction from counterfactual explanations**
- **MetaSimulator: Simulating Unknown Target Models for Query-Efficient Black-box Attacks**
- **Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks** - poisoning))
- **IReEn: Iterative Reverse-Engineering of Black-Box Functions via Neural Program Synthesis**
- **ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles**
- **Black-Box Ripper: Copying black-box models using generative evolutionary algorithms** - box-ripper))
- **Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization**
- **Model Extraction Attacks and Defenses on Cloud-Based Machine Learning Models**
- **Leveraging Extracted Model Adversaries for Improved Black Box Attacks**
- **Differentially Private Machine Learning Model against Model Extraction Attack**
- **Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware**
- **Model Extraction and Defenses on Generative Adversarial Networks**
- **Protecting Decision Boundary of Machine Learning Model With Differentially Private Perturbation**
- **Special-Purpose Model Extraction Attacks: Stealing Coarse Model with Fewer Queries**
- **Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!**
- **Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack**
- **Model Weight Theft With Just Noise Inputs: The Curious Case of the Petulant Attacker**
- **Protecting DNNs from Theft using an Ensemble of Diverse Models**
- **Information Laundering for Model Privacy**
- **Deep Neural Network Fingerprinting by Conferrable Adversarial Examples**
- **BODAME: Bilevel Optimization for Defense Against Model Extraction**
- **Dataset Inference: Ownership Resolution in Machine Learning**
- **Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks**
- **Towards Characterizing Model Extraction Queries and How to Detect Them**
- **Hardness of Samples Is All You Need: Protecting Deep Learning Models Using Hardness of Samples**
- **Stateful Detection of Model Extraction Attacks**
- **MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI**
- **INVERSENET: Augmenting Model Extraction Attacks with Training Data Inversion**
- **Increasing the Cost of Model Extraction with Calibrated Proof of Work** - lab/model-extraction-iclr)
- **On the Difficulty of Defending Self-Supervised Learning against Model Extraction** - lab/ssl-attacks-defenses)
- **Dataset Inference for Self-Supervised Models** - lab/DatasetInferenceForSelfSupervisedModels)
- **Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders**
- **StolenEncoder: Stealing Pre-trained Encoders**
- **Model Extraction Attacks Revisited**
- **Prompts Should not be Seen as Secrets: Systematically Measuring Prompt Extraction Attack Success**
- **Amnesiac Machine Learning**
- **Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy**
- **Analyzing Information Leakage of Updates to Natural Language Models**
- **Estimating g-Leakage via Machine Learning**
- **Information Leakage in Embedding Models**
- **Hide-and-Seek Privacy Challenge**
- **Synthetic Data -- Anonymisation Groundhog Day** - epfl/synthetic_data_release))
- **Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning**
- **Quantifying Privacy Leakage in Graph Embedding**
- **Quantifying and Mitigating Privacy Risks of Contrastive Learning**
- **Coded Machine Unlearning**
- **Unlearnable Examples: Making Personal Data Unexploitable**
- **Measuring Data Leakage in Machine-Learning Models with Fisher Information**
- **Teacher Model Fingerprinting Attacks Against Transfer Learning**
- **Bounding Information Leakage in Machine Learning**
- **RoFL: Attestable Robustness for Secure Federated Learning**
- **Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash**
- **The Privacy Onion Effect: Memorization is Relative**
- **Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets**
- **LCANets++: Robust Audio Classification using Multi-layer Neural Networks with Lateral Competition**
- **Model Reconstruction from Model Explanations**
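The common thread in the attacks above is query-and-copy: send inputs to a prediction API, record its answers, and train a substitute model on them. A minimal sketch of that loop follows; the dataset, victim and surrogate architectures, and query budget are illustrative assumptions, not taken from any particular paper.

```python
# Minimal black-box model extraction sketch: the attacker only sees predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Victim: a model exposed through a prediction API.
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
victim.fit(X_train, y_train)

# Attacker: query the API on unlabeled inputs and learn from its answers.
query_budget = 1000
queries = np.random.default_rng(0).normal(size=(query_budget, X.shape[1]))
stolen_labels = victim.predict(queries)        # only the outputs are observed

surrogate = DecisionTreeClassifier(random_state=0)
surrogate.fit(queries, stolen_labels)

# Fidelity: how often the copy agrees with the victim on held-out data.
fidelity = accuracy_score(victim.predict(X_test), surrogate.predict(X_test))
print(f"surrogate/victim agreement: {fidelity:.2f}")
```

Agreement with the victim (fidelity) rather than raw test accuracy is the usual success metric in the extraction papers above.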
Membership inference
- **Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?**
- **Understanding membership inferences on well-generalized learning models**
- **Revisiting Membership Inference Under Realistic Assumptions**
- **Label-Only Membership Inference Attacks**
- **Label-Leaks: Membership Inference Attack with Label**
- **Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning.**
- **Logan: Membership inference attacks against generative models.**
- **White-box vs Black-box: Bayes Optimal Strategies for Membership Inference**
- **Systematic Evaluation of Privacy Risks of Machine Learning Models** - group/membership-inference-evaluation))
- **Towards the Infeasibility of Membership Inference on Deep Models** - Attack))
- **Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference**
- **Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries**
- **Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation**
- **Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics**
- **Differentially Private Learning Does Not Bound Membership Inference**
- **Quantifying Membership Privacy via Information Leakage**
- **An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks**
- **Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning**
- **Node-Level Membership Inference Attacks Against Graph Neural Networks**
- **Enhanced Membership Inference Attacks against Machine Learning Models**
- **Do Not Trust Prediction Scores for Membership Inference Attacks**
- **Membership Inference via Backdooring**
- **Membership inference attacks against machine learning models** - inference))
- **Privacy risk in machine learning: Analyzing the connection to overfitting** - yeom/ml-privacy-csf18))
- **Membership inference attack against differentially private deep learning model**
- **Evaluating differentially private machine learning in practice**
- **Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models** - Leaks))
- **Privacy risks of explaining machine learning models**
- **Demystifying membership inference attacks in machine learning as a service**
- **Monte carlo and reconstruction membership inference attacks against generative models**
- **MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples**
- **Gan-leaks: A taxonomy of membership inference attacks against gans**
- **When Machine Unlearning Jeopardizes Privacy**
- **Modelling and Quantifying Membership Information Leakage in Machine Learning**
- **Alleviating Privacy Attacks via Causal Learning**
- **On the Effectiveness of Regularization Against Membership Inference Attacks**
- **Differential Privacy Defenses and Sampling Attacks for Membership Inference**
- **privGAN: Protecting GANs from membership inference attacks at low cost**
- **Sharing Models or Coresets: A Study based on Membership Inference Attack**
- **Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning**
- **MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models**
- **On Primes, Log-Loss Scores and (No) Privacy**
- **MCMIA: Model Compression Against Membership Inference Attack in Deep Neural Networks**
- **Bootstrap Aggregation for Point-based Generalized Membership Inference Attacks**
- **Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning**
- **Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks**
- **Unexpected Information Leakage of Differential Privacy Due to Linear Property of Queries**
- **TransMIA: Membership Inference Attacks Using Transfer Shadow Training**
- **Membership Inference Attack with Multi-Grade Service Models in Edge Intelligence**
- **Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems**
- **Membership Inference Attacks on Deep Regression Models for Neuroimaging**
- **Practical Blind Membership Inference Attack via Differential Comparisons**
- **ADePT: Auto-encoder based Differentially Private Text Transformation**
- **Source Inference Attacks in Federated Learning** - inference-FL))
- **The Influence of Dropout on Membership Inference in Differentially Private Models**
- **Membership Inference Attack Susceptibility of Clinical Language Models**
- **Membership Inference Attacks on Knowledge Graphs**
- **When Does Data Augmentation Help With Membership Inference Attacks?**
- **The Influence of Training Parameters and Architectural Choices on the Vulnerability of Neural Networks to Membership Inference Attacks**
- **Membership Inference on Word Embedding and Beyond**
- **TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing**
- **Privacy risks of securing machine learning models against adversarial examples** - group/privacy-vs-robustness))
- **Towards Realistic Membership Inferences: The Case of Survey Data**
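Most of the attacks in this section reduce to scoring how confidently the target model handles a point and thresholding or ranking that score. A minimal sketch of a confidence-based attack in the spirit of the threshold attacks above; the models, data, and AUC-based evaluation are illustrative assumptions.

```python
# Minimal confidence-based membership inference sketch (Yeom/Salem-style idea).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# Target model trained (and deliberately overfit) on the "member" half.
target = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_in, y_in)

def confidence(model, X, y):
    # Probability the model assigns to the true class of each example.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Attack score: members tend to receive higher confidence than non-members.
scores = np.concatenate([confidence(target, X_in, y_in),
                         confidence(target, X_out, y_out)])
membership = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
print(f"membership inference AUC: {roc_auc_score(membership, scores):.2f}")
```

The gap between member and non-member confidence grows with overfitting, which is why the generalization-gap papers above sit alongside the attacks themselves.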
Reconstruction
- **iDLG: Improved Deep Leakage from Gradients** - Deep-Leakage-from-Gradients))
- **Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing**
- **A methodology for formalizing model-inversion attacks**
- **Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes**
- **The secret sharer: Evaluating and testing unintended memorization in neural networks**
- **Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**
- **Privacy Risks of General-Purpose Language Models**
- **Inverting Gradients - How easy is it to break privacy in federated learning?**
- **I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators**
- **Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning**
- **Evaluation Indicator for Model Inversion Attack**
- **Understanding Unintended Memorization in Federated Learning**
- **An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack**
- **The secret revealer: generative model-inversion attacks against deep neural networks**
- **GAMIN: An Adversarial Approach to Black-Box Model Inversion**
- **Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation**
- **Reconstruction of training samples from loss functions**
- **A Framework for Evaluating Gradient Leakage Attacks in Federated Learning**
- **Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning**
- **Illuminating the Dark or how to recover what should not be seen in FE-based classifiers**
- **Robust Transparency Against Model Inversion Attacks**
- **Reducing Risk of Model Inversion Using Privacy-Guided Training**
- **Does AI Remember? Neural Networks and the Right to be Forgotten**
- **Improving Robustness to Model Inversion Attacks via Mutual Information Regularization**
- **SAPAG: A Self-Adaptive Privacy Attack From Gradients**
- **Theory-Oriented Deep Leakage from Gradients via Linear Equation Solver**
- **Improved Techniques for Model Inversion Attacks**
- **Black-box Model Inversion Attribute Inference Attacks on Classification Models**
- **Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator**
- **MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery**
- **Evaluation of Inference Attack Models for Deep Learning on Medical Data**
- **FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries**
- **Extracting Training Data from Large Language Models**
- **MIDAS: Model Inversion Defenses Using an Approximate Memory System**
- **KART: Privacy Leakage Framework of Language Models Pre-trained with Clinical Records**
- **Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy**
- **On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models**
- **Practical Defences Against Model Inversion Attacks for Split Neural Networks**
- **R-GAP: Recursive Gradient Attack on Privacy**
- **Exploiting Explanations for Model Inversion Attacks**
- **SAFELearn: Secure Aggregation for private FEderated Learning**
- **Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?**
- **Training Data Leakage Analysis in Language Models**
- **Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning**
- **PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage**
- **On the Importance of Encrypting Deep Features**
- **Defending Against Model Inversion Attack by Adversarial Examples**
- **See through Gradients: Image Batch Recovery via GradInversion**
- **Variational Model Inversion Attacks**
- **Reconstructing Training Data with Informed Adversaries**
- **Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks**
- **Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks**
- **A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data**
- **Analysis and Utilization of Hidden Information in Model Inversion Attacks** - MIA))
- **Text Embeddings Reveal (Almost) As Much As Text**
- **On the Inadequacy of Similarity-based Privacy Metrics: Reconstruction Attacks against "Truly Anonymous Synthetic Data"**
- **Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability**
- **Model inversion attacks against collaborative inference**
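Many of the gradient-leakage papers above exploit the fact that per-example gradients can encode the example itself. A minimal numerical sketch for a single logistic-regression layer, where the weight gradient is a scalar multiple of the input and the bias gradient reveals the scalar; the model and data are illustrative assumptions, not a reproduction of any specific attack.

```python
# Why a shared single-example gradient can leak the input exactly.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0            # current model broadcast to a client
x_true, y_true = rng.normal(size=8), 1.0  # the client's private example

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Client computes the gradient of the loss on its example and shares it
# (as in federated learning).
p = sigmoid(w @ x_true + b)
grad_w = (p - y_true) * x_true            # dL/dw for binary cross-entropy
grad_b = (p - y_true)                     # dL/db

# Server-side reconstruction: grad_w / grad_b recovers x_true exactly.
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x_true))   # True
```

Deeper networks need the iterative, optimization-based inversions studied in the papers above, but the leakage mechanism is the same.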
Uncategorized
- **An Overview of Privacy in Machine Learning**
- **Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks**
- **Privacy and Security Issues in Deep Learning: A Survey**
- **ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models**
- **Membership Inference Attacks on Machine Learning: A Survey**
- **Survey: Leakage and Privacy at Inference Time**
- **A Review of Confidentiality Threats Against Embedded Neural Network Models**
- **Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups**
- **I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences**
- **TensorFlow Privacy**
- **PrivacyRaven**
- **Machine Learning Privacy Meter**
- **CypherCat (archive-only)**
- **Adversarial Robustness Toolbox (ART)**
- **Awesome Attacks on Machine Learning Privacy**
- **SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap**
- **A Survey of Privacy Attacks in Machine Learning**
Property inference / Distribution inference
- **Exploiting unintended feature leakage in collaborative learning** - inference-collaborative-ml))
- **Overlearning Reveals Sensitive Attributes** - vQX2zJ/view?usp=sharing))
- **Subject Property Inference Attack in Collaborative Learning**
- **Property Inference From Poisoning**
- **Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity**
- **Honest-but-Curious Nets: Sensitive Attributes of Private Inputs can be Secretly Coded into the Entropy of Classifiers' Outputs** - but-curious-nets))
- **Property Inference Attacks Against GANs** - Junhao/PIA_GAN))
- **Formalizing and Estimating Distribution Inference Risks**
- **Dissecting Distribution Inference**
- **SNAP: Efficient Extraction of Private Properties with Poisoning** - sp23))
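The recurring recipe in this section is shadow models plus a meta-classifier: train many models on data that does or does not satisfy a global property, then learn to tell the two groups apart from the models' parameters or outputs. A minimal sketch of that recipe; the inferred property (class balance), shadow-model count, and architectures are illustrative assumptions.

```python
# Minimal property inference sketch: meta-classifier over shadow-model weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def shadow_model_weights(prop):
    # Global property of the *training distribution*: class balance 50/50 vs 90/10.
    weights = [0.5, 0.5] if prop == 0 else [0.9, 0.1]
    X, y = make_classification(n_samples=500, n_features=10, weights=weights,
                               random_state=rng.integers(1_000_000))
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([model.coef_.ravel(), model.intercept_])

# Meta-training set: shadow-model parameters labelled with the hidden property.
props = rng.integers(0, 2, size=200)
features = np.stack([shadow_model_weights(p) for p in props])

meta = LogisticRegression(max_iter=1000).fit(features[:150], props[:150])
print(f"property inference accuracy: {meta.score(features[150:], props[150:]):.2f}")
```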