Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

Awesome Lists | Featured Topics | Projects

Projects in Awesome Lists tagged with llm-security

A curated list of projects in awesome lists tagged with llm-security .

https://github.com/pathwaycom/llm-app

Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.

chatbot hugging-face llm llm-local llm-prompting llm-security llmops machine-learning open-ai pathway rag real-time retrieval-augmented-generation vector-database vector-index

Last synced: 05 Dec 2024

https://github.com/verazuo/jailbreak_llms

[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).

chatgpt jailbreak large-language-model llm llm-security prompt

Last synced: 29 Oct 2024

https://github.com/deadbits/vigil-llm

⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs

adversarial-attacks adversarial-machine-learning large-language-models llm-security llmops prompt-injection security-tools yara-scanner

Last synced: 13 Dec 2024

https://github.com/raga-ai-hub/raga-llm-hub

Framework for LLM evaluation, guardrails and security

guardrails llm-evaluation llm-security llmops

Last synced: 10 Nov 2024

https://github.com/lakeraai/pint-benchmark

A benchmark for prompt injection detection systems.

benchmark llm llm-benchmarking llm-security prompt-injection

Last synced: 17 Dec 2024

https://github.com/pdparchitect/llm-hacking-database

This repository contains various attack against Large Language Models.

hacking llm llm-security security

Last synced: 05 Sep 2024

https://github.com/sinanw/llm-security-prompt-injection

This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.

cybersecurity llm-prompting llm-security prompt-injection transformers-models

Last synced: 26 Nov 2024

https://github.com/leondz/lm_risk_cards

Risks and targets for assessing LLMs & LLM vulnerabilities

llm llm-security red-teaming security vulnerability

Last synced: 24 Nov 2024

https://github.com/lakeraai/chainguard

Guard your LangChain applications against prompt injection with Lakera ChainGuard.

langchain langchain-python llm llm-security prompt-injection

Last synced: 08 Nov 2024

https://github.com/levitation-opensource/manipulative-expression-recognition

MER is a software that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. MER benchmarks language models for manipulative expressions, fostering development of transparency and safety in AI. It also supports manipulation victims by detecting manipulative patterns in human communication.

benchmarking conversation-analysis conversation-analytics expression-recognition fraud-detection fraud-prevention human-computer-interaction human-robot-interaction llm llm-security llm-test llm-training manipulation misinformation prompt-engineering prompt-injection psychometrics sentiment-analysis sentiment-classification transparency

Last synced: 22 Nov 2024

https://github.com/aintrust-ai/aixploit

Engineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.

adversarial-attacks adversarial-machine-learning chatgpt hacking large-language-models llm-security prompt-injection redteaming

Last synced: 19 Nov 2024