An open API service indexing awesome lists of open source software.

awesome-security-for-ai

Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.
https://github.com/zmre/awesome-security-for-ai

Last synced: 3 days ago
JSON representation

  • Encryption and Data Protection

    • Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
    • IronCore Labs' Cloaked AI - Encrypt vector embeddings before sending to a vector database to secure the data in RAG workflows and other AI workflows. [![code](https://img.shields.io/github/license/ironcorelabs/ironcore-alloy)](https://github.com/ironcorelabs/ironcore-alloy/)
    • Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
    • IronCore Labs' Cloaked AI - Encrypt vector embeddings before sending to a vector database to secure the data in RAG workflows and other AI workflows. [![code](https://img.shields.io/github/license/ironcorelabs/ironcore-alloy)](https://github.com/ironcorelabs/ironcore-alloy/)
    • Enveil Secure AI - Train encrypted models and do encrypted inferences over them.
  • Prompt Firewall and Redaction

    • Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
    • Protect AI LLM Guard - Suite of tools to protect LLM applications by helping you detect, redact, and sanitize LLM prompts and responses. [![code](https://img.shields.io/github/license/protectai/llm-guard)](https://github.com/protectai/llm-guard/)
    • Protect AI Rebuff - A LLM prompt injection detector. [![code](https://img.shields.io/github/license/protectai/rebuff)](https://github.com/protectai/rebuff/)
    • HiddenLayer AI Detection and Response - Proactively defend against threats to your LLMs.
    • Robust Intelligence AI Firewall - Real-time protection, automatically configured to address the vulnerabilities of each model.
    • Vigil LLM - Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. ![code](https://img.shields.io/github/license/deadbits/vigil-llm)
    • Lakera Guard - Protection from prompt injections, data loss, and toxic content.
    • Arthur Shield - Built-in, real-time firewall protection against the biggest LLM risks.
    • Prompt Security - SDK and proxy for protection against common prompt attacks.
    • Protect AI Rebuff - A LLM prompt injection detector. [![code](https://img.shields.io/github/license/protectai/rebuff)](https://github.com/protectai/rebuff/)
    • Protect AI LLM Guard - Suite of tools to protect LLM applications by helping you detect, redact, and sanitize LLM prompts and responses. [![code](https://img.shields.io/github/license/protectai/llm-guard)](https://github.com/protectai/llm-guard/)
    • HiddenLayer AI Detection and Response - Proactively defend against threats to your LLMs.
    • Robust Intelligence AI Firewall - Real-time protection, automatically configured to address the vulnerabilities of each model.
    • Vigil LLM - Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs. ![code](https://img.shields.io/github/license/deadbits/vigil-llm)
    • Lakera Guard - Protection from prompt injections, data loss, and toxic content.
    • Arthur Shield - Built-in, real-time firewall protection against the biggest LLM risks.
    • Prompt Security - SDK and proxy for protection against common prompt attacks.
    • Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
    • DynamoGuard - Identify / defend against any type of non-compliance as defined by your specific AI policies and catch attacks.
    • Skyflow LLM Privacy Vault - Redacts PII from prompts flowing to LLMs.
    • Skyflow LLM Privacy Vault - Redacts PII from prompts flowing to LLMs.
    • Private AI - Detect, anonymize, and replace PII with less than half the error rate of alternatives.
    • DynamoGuard - Identify / defend against any type of non-compliance as defined by your specific AI policies and catch attacks.
  • Confidential Computing

  • Governance

  • Model Testing

    • HiddenLayer Model Scanner - Scan models for vulnerabilities and supply chain issues.
    • Plexiglass - A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ![code](https://img.shields.io/github/license/kortex-labs/plexiglass)
    • Garak - A LLM vulnerability scanner. [![code](https://img.shields.io/github/license/leondz/garak)](https://github.com/leondz/garak/)
    • CalypsoAI Platform - Platform for testing and launching LLM applications securely.
    • jailbreak-evaluation - Python package for language model jailbreak evaluation. ![code](https://img.shields.io/github/license/controllability/jailbreak-evaluation)
    • Adversa Red Teaming - Continuous AI red teaming for LLMs.
    • HiddenLayer Model Scanner - Scan models for vulnerabilities and supply chain issues.
    • Plexiglass - A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ![code](https://img.shields.io/github/license/kortex-labs/plexiglass)
    • Garak - A LLM vulnerability scanner. [![code](https://img.shields.io/github/license/leondz/garak)](https://github.com/leondz/garak/)
    • CalypsoAI Platform - Platform for testing and launching LLM applications securely.
    • jailbreak-evaluation - Python package for language model jailbreak evaluation. ![code](https://img.shields.io/github/license/controllability/jailbreak-evaluation)
    • Adversa Red Teaming - Continuous AI red teaming for LLMs.
    • Advai - Automates the tasks of stress-testing, red-teaming, and evaluating your AI systems for critical failure.
    • Mindgard AI - Identifies and remediates risks across AI models, GenAI, LLMs along with AI-powered apps and chatbots.
    • Protect AI ModelScan - Scan models for serialization attacks. [![code](https://img.shields.io/github/license/protectai/modelscan)](https://github.com/protectai/modelscan)
    • Protect AI Guardian - Scan models for security issues or policy violations with auditing and reporting.
    • Advai - Automates the tasks of stress-testing, red-teaming, and evaluating your AI systems for critical failure.
    • Mindgard AI - Identifies and remediates risks across AI models, GenAI, LLMs along with AI-powered apps and chatbots.
    • Protect AI ModelScan - Scan models for serialization attacks. [![code](https://img.shields.io/github/license/protectai/modelscan)](https://github.com/protectai/modelscan)
    • Protect AI Guardian - Scan models for security issues or policy violations with auditing and reporting.
  • QA

    • Prompt Security Fuzzer - Open-source tool to help you harden your GenAI applications. [![code](https://img.shields.io/github/license/prompt-security/ps-fuzz)](https://github.com/prompt-security/ps-fuzz/)
    • LLMFuzzer - Open-source fuzzing framework specifically designed for LLMs, especially for their integrations in applications via APIs. ![code](https://img.shields.io/github/license/mnns/LLMFuzzer)
    • Prompt Security Fuzzer - Open-source tool to help you harden your GenAI applications. [![code](https://img.shields.io/github/license/prompt-security/ps-fuzz)](https://github.com/prompt-security/ps-fuzz/)
    • LLMFuzzer - Open-source fuzzing framework specifically designed for LLMs, especially for their integrations in applications via APIs. ![code](https://img.shields.io/github/license/mnns/LLMFuzzer)
  • Training Data Protection

    • Protopia AI - "Stained glass transforms" of text and image data when training preserves privacy in model and inferences.
    • Protopia AI - "Stained glass transforms" of text and image data when training preserves privacy in model and inferences.