Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.

Awesome-ML-SP-Papers
A curated list of machine learning security & privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security, and NDSS).
https://github.com/gnipping/Awesome-ML-SP-Papers
Last synced: 2 days ago

2. Privacy Papers
- 2.1 Training Data
- 2.2 Model
- 2.3 User Related Privacy
- 2.4 Private ML Protocols
- 2.5 Platform
- 2.6 Differential Privacy

2.1 Training Data

Contributing

Licenses

1. Security Papers
1.1 Adversarial Attack & Defense

1.2 Distributed Machine Learning

1.3 Data Poisoning

1.4 Backdoor

1.6 AI4Security

1.5 ML Library Security

1.7 AutoML Security

1.8 Hardware Related Security

1.9 Security Related Interpreting Method

1.10 Face Security

1.10 AI Generation Security

1.11 LLM Security
Sub Categories
- 1.1 Adversarial Attack & Defense (134)
- 2.1 Training Data (96)
- 1.4 Backdoor (44)
- 1.6 AI4Security (40)
- 2.2 Model (35)
- 1.2 Distributed Machine Learning (28)
- 1.3 Data Poisoning (18)
- 2.4 Private ML Protocols (16)
- 2.6 Differential Privacy (12)
- 1.11 LLM Security (12)
- 1.9 Security Related Interpreting Method (6)
- 2.3 User Related Privacy (4)
- 1.10 Face Security (4)
- 1.8 Hardware Related Security (4)
- 2.5 Platform (3)
- 1.5 ML Library Security (2)
- 1.10 AI Generation Security (2)
- 1.7 AutoML Security (1)
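The per-category counts above can be tallied with a short script, for example to see the overall size of the list and the security/privacy split. This is just a sketch: the dictionary below restates the numbers from the list and is not part of the indexed repository.

```python
# Paper counts per sub-category, copied from the Sub Categories list above.
counts = {
    "1.1 Adversarial Attack & Defense": 134,
    "2.1 Training Data": 96,
    "1.4 Backdoor": 44,
    "1.6 AI4Security": 40,
    "2.2 Model": 35,
    "1.2 Distributed Machine Learning": 28,
    "1.3 Data Poisoning": 18,
    "2.4 Private ML Protocols": 16,
    "2.6 Differential Privacy": 12,
    "1.11 LLM Security": 12,
    "1.9 Security Related Interpreting Method": 6,
    "2.3 User Related Privacy": 4,
    "1.10 Face Security": 4,
    "1.8 Hardware Related Security": 4,
    "2.5 Platform": 3,
    "1.5 ML Library Security": 2,
    "1.10 AI Generation Security": 2,
    "1.7 AutoML Security": 1,
}

# Total indexed entries, plus the split between security (1.x)
# and privacy (2.x) categories, keyed off the numbering prefix.
total = sum(counts.values())
security = sum(v for k, v in counts.items() if k.startswith("1."))
privacy = total - security
print(total, security, privacy)  # → 461 295 166
```

Note that "1.10" is used for both Face Security and AI Generation Security in the list itself, so the full category name (not the number) is used as the key.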