awesome-MLSecOps
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
https://github.com/RiccardoBiosas/awesome-MLSecOps
Threat Modeling
- AI Village: LLM threat modeling
- JSOTIRO/ThreatModels
MLSecOps pipeline
Open Source Security Tools
- ModelScan
- Garak
- TensorFlow Privacy - Library of privacy-preserving machine learning algorithms and tools
- Foolbox
- Advertorch
- Adversarial ML Threat Matrix
- CleverHans
- AdvBox
- Audit AI
- Deep Pwning - Lightweight framework for experimenting with machine learning models, with the goal of evaluating their robustness against a motivated adversary
- Privacy Meter - Open-source library to audit data privacy in statistical and machine learning algorithms
- TensorFlow Model Analysis
- PromptInject
- TextAttack
- OpenAttack - Open-source package for textual adversarial attacks
- TextFooler
- Flawed Machine Learning Security
- Adversarial Machine Learning CTF
- Damn Vulnerable LLM Project
- Vigil
- PALLMs (Payloads for Attacking Large Language Models)
- AI-exploits
- AnonLLM
- AI Goat
- Pyrit
- Raze to the Ground - Source code for the paper "Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors", accepted at AISec '23
- Giskard - Open-source testing tool for LLM applications
- Safetensors
- Model-Inversion-Attack-ToolBox
- NeMo Guardrails - Open-source toolkit that lets LLM-based applications easily add programmable guardrails between the application code and the LLM
- AugLy
- Knockoffnets
- VGER
- AIShield Watchtower
- NB Defense
- PS-fuzz
- Mindgard-cli
- PurpleLLama3
- Model transparency
- ARTkit - Automated prompt-based testing and evaluation of Gen AI applications
- LangBiTe
- OpenDP
- TF-encrypted
- Adversarial Robustness Toolbox (see the evasion-attack sketch after this list)
- MLSploit - Cloud framework for interactive experimentation with adversarial machine learning research
- Artificial Intelligence Threat Matrix
- Gandalf by Lakera - Prompt injection game
- Offensive ML Playbook
- Citadel Lens
- Robust Intelligence Continuous Validation - Tool for continuous model validation for compliance with standards
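Most of the adversarial-robustness toolkits above (ART, Foolbox, CleverHans, AdvBox) share the same three-step workflow: wrap the model, instantiate an attack, and generate perturbed inputs. A minimal sketch using the Adversarial Robustness Toolbox; the untrained toy network and random images are stand-ins, and `eps` bounds the per-pixel perturbation:

```python
import numpy as np
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy stand-in for a real image classifier.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=net,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in test images
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)  # perturbed copies of x within an L-inf ball

clean = classifier.predict(x).argmax(axis=1)
adv = classifier.predict(x_adv).argmax(axis=1)
print(f"{(clean != adv).sum()} of {len(x)} predictions flipped")
```

The same `classifier` wrapper also plugs into ART's poisoning, extraction, and inference attack modules, which is what makes one harness usable across most of the attack classes these tools cover.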
ML Code Security
- lintML - Security linter for ML, by Nvidia
- differential-privacy-library - Library designed for differential privacy and machine learning, by IBM (see the sketch after this list)
- HiddenLayer: Model as Code - Research on attack vectors in ML libraries
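As a taste of what differential-privacy-library provides: it mirrors the scikit-learn API, so a DP model is close to a drop-in replacement. A minimal sketch with synthetic stand-in data; the `epsilon` and `data_norm` values are illustrative only:

```python
import numpy as np
from diffprivlib.models import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # stand-in feature matrix
y = (X[:, 0] > 0).astype(int)   # stand-in labels

# epsilon is the privacy budget; data_norm bounds each sample's L2 norm,
# which the mechanism needs in order to calibrate its noise.
clf = LogisticRegression(epsilon=1.0, data_norm=5.0)
clf.fit(X, y)
print("accuracy under differential privacy:", clf.score(X, y))
```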
101 Resources
Attack Vectors
Community Resources
- Awesome LLM Security
- Awesome AI Security
- MLSecOps Reference Repository
- MLSecOps community
- OWASP AI Exchange
- OWASP Periodic Table of AI Security
- OWASP Slack
- Awesome LVLM Attack
- Awesome MLLM Safety
- MLSecOps
- MLSecOps Podcast
- OWASP Machine Learning Security Top Ten
- OWASP Top 10 for Large Language Model Applications
- OWASP LLMSVS
- Hackstery
- PWNAI
- AiSec_X_Feed
- HUNTR Discord community
- AI Vulnerability Database
- Incident AI Database
- Defcon AI Village CTF
- MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems)
Commercial Tools
DATA
- Tool for image anonymization (for the general detect-then-blur pattern these tools follow, see the sketch after this list)
- Tool for data anonymization
- BMW-Anonymization-Api - Anonymize sensitive information in images/videos; compatible with BMW's DL-based training/inference solutions for Object Detection and Semantic Segmentation
- DeepPrivacy2
- PPAP - Latent-space-level image anonymization with adversarial protector networks
- ARX - Data Anonymization Tool
- Data-Veil
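Most of the image anonymizers above reduce to the same detect-then-redact pattern. A minimal illustration of that pattern (not the internals of any specific tool above) using OpenCV's bundled Haar face detector; file names and blur parameters are assumptions:

```python
import cv2

img = cv2.imread("input.jpg")  # hypothetical input path
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Redact each detected face region with a heavy Gaussian blur.
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("anonymized.jpg", img)
```

The tools above differ mainly in the detector (faces, plates, PII text) and the redaction (blur, pixelation, or GAN-generated replacements as in DeepPrivacy2), not in this overall flow.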
Blogs and Publications
- DreadNode Paper Stack
- Red-Teaming Large Language Models
- Google's AI red-team
- The MLSecOps Top 10 vulnerabilities
- Token Smuggling Jailbreak via Adversarial Prompt
- We need a new way to measure AI security
- PrivacyRaven: Implementing a proof of concept for model inversion
- Adversarial Prompt Engineering
- TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
- Trail Of Bits' audit of Hugging Face's safetensors library
- OWASP Top 10 for Large Language Model Applications
- LLM Security
- Is your MLOps infrastructure leaking secrets?
- Embrace The Red - Blog showing how LLMs can be hacked
- Audio-jacking: Using generative AI to distort live audio transactions
- HADESS - Web LLM Attacks
- WTF-blog - MLSecOps frameworks: which ones are available and how do they differ?
- What is MLSecOps
Repositories
- AgentPoison
- DeepPayload
- backdoor
- datafree-model-extraction
- LLMmap
- GoogleCloud-Federated-ML-Pipeline
- Class_Activation_Mapping_Ensemble_Attack
- COLD-Attack
- pal
- ZeroShotKnowledgeTransfer
- GMI-Attack
- Knowledge-Enriched-DMI
- vmi
- Plug-and-Play-Attacks
- snap-sp23
- privacy-vs-robustness
- ML-Leaks (membership inference; see the sketch after this list)
- BlindMI
- python-DP-DL
- MMD-mixup-Defense
- MemGuard
- unsplit
- face_attribute_attack
- FVB
- Malware-GAN
- Generative_Adversarial_Perturbations
- Adversarial-Attacks-with-Relativistic-AdvGAN
- llm-attacks
- LLMs-Finetuning-Safety
- DecodingTrust
- promptbench
- rome
- llmprivacy
- Stealing_DL_Models
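Several of the repositories above (ML-Leaks, snap-sp23, privacy-vs-robustness) study membership inference. The simplest baseline they compare against is a confidence-threshold attack: overfit models assign higher confidence to training members than to unseen points. A self-contained sketch on synthetic data; the model and threshold are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, :2].sum(axis=1) > 0).astype(int)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: training members get near-perfect confidence.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each sample's true class.
    return model.predict_proba(X)[np.arange(len(y)), y]

conf_members = true_label_confidence(model, X_in, y_in)
conf_nonmembers = true_label_confidence(model, X_out, y_out)

threshold = 0.9  # attacker calls anything above this a training member
tpr = (conf_members > threshold).mean()    # members correctly flagged
fpr = (conf_nonmembers > threshold).mean() # non-members wrongly flagged
print(f"TPR {tpr:.2f} vs FPR {fpr:.2f} -> a gap indicates membership leakage")
```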
Books
- Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps
- Privacy-Preserving Machine Learning
- Generative AI Security: Theories and Practices (Future of Business and Finance)
Infographics
- AI Security Market Map (infographic)
MLOps Infrastructure Vulnerabilities
- SILENT SABOTAGE - Study on bot compromise for converting Pickle models to SafeTensors (see the pickle sketch after this list)
- NOT SO CLEAR: HOW MLOPS SOLUTIONS CAN MUDDY THE WATERS OF YOUR SUPPLY CHAIN - Study on vulnerabilities for the ClearML platform
- Uncovering Azure's Silent Threats: A Journey into Cloud Vulnerabilities - Study on security issues of Azure MLAAS
- The MLOps Security Landscape
- Confused Learning: Supply Chain Attacks through Machine Learning Models
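The pickle findings above come down to one Python behavior: unpickling can execute arbitrary code, because `__reduce__` lets an object specify a callable to run at load time. A harmless demonstration (the payload is just an `echo`); formats like safetensors avoid the issue by storing raw tensor bytes with no code path:

```python
import os
import pickle

class MaliciousModel:
    """Stand-in for a 'model' file an attacker controls."""
    def __reduce__(self):
        # Whatever __reduce__ returns is called during unpickling.
        return (os.system, ("echo attacker code ran at load time",))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # executes the shell command: never unpickle untrusted files
```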
Contributors ✨
- riccardobiosas
- badarahmed
- deadbits
- wearetyomsmnv
- anmorgan24
- mik0w