Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Projects in Awesome Lists tagged with llm-security
A curated list of projects in awesome lists tagged with llm-security .
https://github.com/pathwaycom/llm-app
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
chatbot hugging-face llm llm-local llm-prompting llm-security llmops machine-learning open-ai pathway rag real-time retrieval-augmented-generation vector-database vector-index
Last synced: 05 Dec 2024
https://github.com/Giskard-AI/giskard
🐢 Open-Source Evaluation & Testing for ML & LLM systems
ai-red-team ai-safety ai-security ai-testing ethical-artificial-intelligence evaluation-framework fairness-ai llm llm-eval llm-evaluation llm-security llmops ml-safety ml-testing ml-validation mlops rag-evaluation red-team-tools responsible-ai trustworthy-ai
Last synced: 08 Nov 2024
https://github.com/giskard-ai/giskard
🐢 Open-Source Evaluation & Testing for ML & LLM systems
ai-red-team ai-safety ai-security ai-testing ethical-artificial-intelligence evaluation-framework fairness-ai llm llm-eval llm-evaluation llm-security llmops ml-safety ml-testing ml-validation mlops rag-evaluation red-team-tools responsible-ai trustworthy-ai
Last synced: 17 Dec 2024
https://github.com/nvidia/garak
the LLM vulnerability scanner
ai llm-evaluation llm-security security-scanners vulnerability-assessment
Last synced: 15 Dec 2024
https://github.com/verazuo/jailbreak_llms
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
chatgpt jailbreak large-language-model llm llm-security prompt
Last synced: 29 Oct 2024
https://github.com/NVIDIA/garak
the LLM vulnerability scanner
ai llm-evaluation llm-security security-scanners vulnerability-assessment
Last synced: 18 Nov 2024
https://github.com/protectai/llm-guard
The Security Toolkit for LLM Interactions
adversarial-machine-learning chatgpt large-language-models llm llm-security llmops prompt-engineering prompt-injection security-tools transformers
Last synced: 21 Dec 2024
https://github.com/mariocandela/beelzebub
A secure low code honeypot framework, leveraging AI for System Virtualization.
cloudnative cloudsecurity cybersecurity framework go golang honeypot kubernetes llama3 llm llm-honeypot llm-security low-code ollama openai research research-project security whitehat
Last synced: 20 Dec 2024
https://github.com/deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
adversarial-attacks adversarial-machine-learning large-language-models llm-security llmops prompt-injection security-tools yara-scanner
Last synced: 13 Dec 2024
https://github.com/r3drun3/sploitcraft
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
ai aws cloud container-security cybersecurity docker hacking hacking-tutorials linux llm-security network-security offensive-security proof-of-concept python redteam tutorials web-vulnerabilities windows
Last synced: 21 Dec 2024
https://github.com/raga-ai-hub/raga-llm-hub
Framework for LLM evaluation, guardrails and security
guardrails llm-evaluation llm-security llmops
Last synced: 10 Nov 2024
https://github.com/arekusandr/last_layer
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
chatgpt-prompts jailbreak large-language-models llm-guard llm-guardrails llm-local llm-security prompt-engineering security-tools
Last synced: 21 Sep 2024
https://github.com/lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
benchmark llm llm-benchmarking llm-security prompt-injection
Last synced: 17 Dec 2024
https://github.com/yevh/taac-ai
AI-driven Threat modeling-as-a-Code (TaaC-AI)
ai application-security claude-3 devsecops gpt gpt-3 gpt-4 llm-security mistral-7b secure-development taac threat threat-modeling threat-modeling-from-code threat-modeling-tool threat-models
Last synced: 17 Dec 2024
https://github.com/llm-platform-security/SecGPT
SecGPT: An execution isolation architecture for LLM-based systems
chatgpt gpt isolation langchain llm llm-agent llm-based-systems llm-framework llm-platform llm-privacy llm-security multi-agent-systems openai-api sandbox
Last synced: 28 Oct 2024
https://github.com/pdparchitect/llm-hacking-database
This repository contains various attack against Large Language Models.
hacking llm llm-security security
Last synced: 05 Sep 2024
https://github.com/sinanw/llm-security-prompt-injection
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
cybersecurity llm-prompting llm-security prompt-injection transformers-models
Last synced: 26 Nov 2024
https://github.com/leondz/lm_risk_cards
Risks and targets for assessing LLMs & LLM vulnerabilities
llm llm-security red-teaming security vulnerability
Last synced: 24 Nov 2024
https://github.com/msoedov/agentic_security
Agentic LLM Vulnerability Scanner
llm-guardrails llm-jailbreaks llm-scanner llm-security llm-vulnerabilities owasp-llm-top-10
Last synced: 13 Dec 2024
https://github.com/lakeraai/chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
langchain langchain-python llm llm-security prompt-injection
Last synced: 08 Nov 2024
https://github.com/levitation-opensource/manipulative-expression-recognition
MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.
benchmarking conversation-analysis conversation-analytics expression-recognition fraud-detection fraud-prevention human-computer-interaction human-robot-interaction llm llm-security llm-test llm-training manipulation misinformation prompt-engineering prompt-injection psychometrics sentiment-analysis sentiment-classification transparency
Last synced: 22 Nov 2024
https://github.com/azminewasi/Awesome-LLMs-ICLR-24
It is a comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) in 2024.
large-language-model large-language-models large-language-models-and-translation-systems large-language-models-for-graph-learning llm llm-agent llm-evaluation llm-framework llm-inference llm-privacy llm-prompting llm-security llm-serving llm-training llmops llms pretrained-language-model pretrained-models pretrained-weights
Last synced: 26 Sep 2024
https://github.com/balavenkatesh3322/guardrails-demo
LLM Security Project with Llama Guard
aisecurity attack-defense generative-ai llama-2 llama-guard llm llm-security llmops prompt-injection-tool security
Last synced: 10 Nov 2024
https://github.com/aintrust-ai/aixploit
Engineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.
adversarial-attacks adversarial-machine-learning chatgpt hacking large-language-models llm-security prompt-injection redteaming
Last synced: 19 Nov 2024
https://github.com/nagababumo/red-teaming-llm-applications
giskard jailbreak llm-security prompt-injection red-teaming
Last synced: 14 Nov 2024