awesome-security-for-ai
Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.
https://github.com/zmre/awesome-security-for-ai
Last synced: 3 days ago
JSON representation
-
Encryption and Data Protection
- Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
- IronCore Labs' Cloaked AI - Encrypt vector embeddings before sending to a vector database to secure the data in RAG workflows and other AI workflows. [](https://github.com/ironcorelabs/ironcore-alloy/)
- Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
- IronCore Labs' Cloaked AI - Encrypt vector embeddings before sending to a vector database to secure the data in RAG workflows and other AI workflows. [](https://github.com/ironcorelabs/ironcore-alloy/)
- Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
-
Prompt Firewall and Redaction
- Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
- Protect AI LLM Guard - Suite of tools to protect LLM applications by helping you detect, redact, and sanitize LLM prompts and responses. [](https://github.com/protectai/llm-guard/)
- Protect AI Rebuff - A LLM prompt injection detector. [](https://github.com/protectai/rebuff/)
- HiddenLayer AI Detection and Response - Proactively defend against threats to your LLMs.
- Robust Intelligence AI Firewall - Real-time protection, automatically configured to address the vulnerabilities of each model.
- Vigil LLM - Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. 
- Lakera Guard - Protection from prompt injections, data loss, and toxic content.
- Arthur Shield - Built-in, real-time firewall protection against the biggest LLM risks.
- Prompt Security - SDK and proxy for protection against common prompt attacks.
- Protect AI Rebuff - A LLM prompt injection detector. [](https://github.com/protectai/rebuff/)
- Protect AI LLM Guard - Suite of tools to protect LLM applications by helping you detect, redact, and sanitize LLM prompts and responses. [](https://github.com/protectai/llm-guard/)
- HiddenLayer AI Detection and Response - Proactively defend against threats to your LLMs.
- Robust Intelligence AI Firewall - Real-time protection, automatically configured to address the vulnerabilities of each model.
- Vigil LLM - Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. 
- Lakera Guard - Protection from prompt injections, data loss, and toxic content.
- Arthur Shield - Built-in, real-time firewall protection against the biggest LLM risks.
- Prompt Security - SDK and proxy for protection against common prompt attacks.
- Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
- DynamoGuard - Identify / defend against any type of non-compliance as defined by your specific AI policies and catch attacks.
- Skyflow LLM Privacy Vault - Redacts PII from prompts flowing to LLMs.
- Skyflow LLM Privacy Vault - Redacts PII from prompts flowing to LLMs.
- Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
- DynamoGuard - Identify / defend against any type of non-compliance as defined by your specific AI policies and catch attacks.
-
Related Awesome Lists
- ottosulin/awesome-ai-security - AI security related frameworks, attacks, tools and papers.
- ottosulin/awesome-ai-security - AI security related frameworks, attacks, tools and papers.
- deepspaceharbor/awesome-ai-security - AI security resources including attacks, examples, and code.
- deepspaceharbor/awesome-ai-security - AI security resources including attacks, examples, and code.
- awesome-ai-for-cybersecurity - Research roundup on AI's use in classic security tools.
- awesome-ml-privacy-attacks - An awesome list of papers on privacy attacks against machine learning.
- awesome-llm-security - A curation of awesome tools, documents and projects about LLM Security.
- awesome-ml-security - Trail of Bits' machine learning security references, guidance, and tools.
- awesome-ai-for-cybersecurity - Research roundup on AI's use in classic security tools.
- awesome-llm-security - A curation of awesome tools, documents and projects about LLM Security.
- awesome-ml-privacy-attacks - An awesome list of papers on privacy attacks against machine learning.
- awesome-ml-security - Trail of Bits' machine learning security references, guidance, and tools.
-
Confidential Computing
- Fortanix Confidential AI - Run AI models inside Intel SGX and other enclave technologies.
- Fortanix Confidential AI - Run AI models inside Intel SGX and other enclave technologies.
-
Governance
- OneTrust AI Governance - Track projects and apply frameworks to them.
- Cranium AI Exposure Management Solution - Provide visibility into an AI system, characterize attack surfaces, and assess vulnerabilities in an organization.
- DynamoEval - Provides automated stress testing of AI systems and autogenerates documentation needed for regulatory audits.
- DynamoEval - Provides automated stress testing of AI systems and autogenerates documentation needed for regulatory audits.
- OneTrust AI Governance - Track projects and apply frameworks to them.
- Cranium AI Exposure Management Solution - Provide visibility into an AI system, characterize attack surfaces, and assess vulnerabilities in an organization.
-
Model Testing
- HiddenLayer Model Scanner - Scan models for vulnerabilities and supply chain issues.
- Plexiglass - A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). 
- Garak - A LLM vulnerability scanner. [](https://github.com/leondz/garak/)
- CalypsoAI Platform - Platform for testing and launching LLM applications securely.
- jailbreak-evaluation - Python package for language model jailbreak evaluation. 
- Adversa Red Teaming - Continuous AI red teaming for LLMs.
- HiddenLayer Model Scanner - Scan models for vulnerabilities and supply chain issues.
- Plexiglass - A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). 
- Garak - A LLM vulnerability scanner. [](https://github.com/leondz/garak/)
- CalypsoAI Platform - Platform for testing and launching LLM applications securely.
- jailbreak-evaluation - Python package for language model jailbreak evaluation. 
- Adversa Red Teaming - Continuous AI red teaming for LLMs.
- Advai - Automates the tasks of stress-testing, red-teaming, and evaluating your AI systems for critical failure.
- Mindgard AI - Identifies and remediates risks across AI models, GenAI, LLMs along with AI-powered apps and chatbots.
- Protect AI ModelScan - Scan models for serialization attacks. [](https://github.com/protectai/modelscan)
- Protect AI Guardian - Scan models for security issues or policy violations with auditing and reporting.
- Advai - Automates the tasks of stress-testing, red-teaming, and evaluating your AI systems for critical failure.
- Mindgard AI - Identifies and remediates risks across AI models, GenAI, LLMs along with AI-powered apps and chatbots.
- Protect AI ModelScan - Scan models for serialization attacks. [](https://github.com/protectai/modelscan)
- Protect AI Guardian - Scan models for security issues or policy violations with auditing and reporting.
-
QA
- Prompt Security Fuzzer - Open-source tool to help you harden your GenAI applications. [](https://github.com/prompt-security/ps-fuzz/)
- LLMFuzzer - Open-source fuzzing framework specifically designed for LLMs, especially for their integrations in applications via APIs. 
- Prompt Security Fuzzer - Open-source tool to help you harden your GenAI applications. [](https://github.com/prompt-security/ps-fuzz/)
- LLMFuzzer - Open-source fuzzing framework specifically designed for LLMs, especially for their integrations in applications via APIs. 
-
Training Data Protection
- Protopia AI - "Stained glass transforms" of text and image data when training preserves privacy in model and inferences.
- Protopia AI - "Stained glass transforms" of text and image data when training preserves privacy in model and inferences.
Programming Languages
Categories
Sub Categories
Keywords
machine-learning
8
awesome
4
awesome-list
4
llm
4
security
4
cybersecurity
4
adversarial-machine-learning
4
adversarial-attacks
4
anti-malware
2
anti-spam
2
firewall
2
intrusion-detection
2
privacy
2
deep-learning
2
deep-neural-networks
2
large-language-models
2
llm-security
2
llmops
2
prompt-injection
2
security-tools
2
yara-scanner
2
ai
2
llmsecurity
2