Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-LLMSecOps
LLM | Security | Operations in one github repo with good links and pictures.
https://github.com/wearetyomsmnv/Awesome-LLMSecOps
Last synced: 1 day ago
JSON representation
-
3 types of LLM architecture
-
LLMSecOps Life Cycle
-
Threat Modeling
-
PINT Benchmark scores (by lakera)
- image
- image
- image
- image
- fmops/distilbert-prompt-injection - 06-12 |
- protectai/deberta-v3-base-prompt-injection-v2 - 06-12 |
- Azure AI Prompt Shield for User Prompts - 04-05 |
- Epivolis/Hyperion - 06-12 |
- Myadav/setfit-prompt-injection-MiniLM-L3-v2 - 06-12 |
- stanford university
- Meta Prompt Guard - 07-26 |
- protectai/deberta-v3-base-prompt-injection - 06-12 |
- WhyLabs LangKit - 06-12 |
- deepset/deberta-v3-base-injection - 06-12 |
- Lakera Guard - 06-12 |
- this
- image
- image
- image
- image
-
RAG Security
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- Awesome Jailbreak on LLMs - RAG Attacks - based LLM attack techniques |
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- Security Risks in RAG - Augmented Generation (RAG) |
- Adversarial AI - RAG Attacks and Mitigations
- PoisonedRAG
- ConfusedPilot: Compromising Enterprise Information Integrity and Confidentiality with Copilot for Microsoft 365
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
-
Study resource
- image
- image
- image
- image
- Salt Security Blog: ChatGPT Extensions Vulnerabilities
- Gandalf
- Prompt Airlines
- Invariant Labs CTF 2024
- DeepLearning.AI Red Teaming Course
- AI Battle
- Application Security LLM Testing
- safeguarding-llms
- Damn Vulnerable LLM Agent
- GPT Agents Arena
- Learn Prompting: Offensive Measures
- image
- image
- image
- image
-
OPS
-
LLM Intrpretability
-
3 types of models
-
Jailbreaks
- EasyJailbreak - to-use Python framework to generate adversarial jailbreak prompts | ![GitHub stars](https://img.shields.io/github/stars/EasyJailbreak/EasyJailbreak?style=social) |
- Lakera PINT Benchmark
- HaizeLabs jailbreak Database
- JailbreakBench
- llm-hacking-database
- L1B3RT45
-
PoC
- LLaMator
- OWASP Agentic AI - Pre-release version | ![GitHub stars](https://img.shields.io/github/stars/precize/OWASP-Agentic-AI?style=social) |
- BrokenHill
- Visual Adversarial Examples - Adversarial-Examples-Jailbreak-Large-Language-Models?style=social) |
- Weak-to-Strong Generalization - to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision | ![GitHub stars](https://img.shields.io/github/stars/XuandongZhao/weak-to-strong?style=social) |
- Image Hijacks - based hijacks of large language models | ![GitHub stars](https://img.shields.io/github/stars/euanong/image-hijacks?style=social) |
- CipherChat
- LLMs Finetuning Safety - tuning large language models | ![GitHub stars](https://img.shields.io/github/stars/LLM-Tuning-Safety/LLMs-Finetuning-Safety?style=social) |
- Virtual Prompt Injection - prompt-injection?style=social) |
- FigStep - language Models via Typographic Visual Prompts | ![GitHub stars](https://img.shields.io/github/stars/ThuCCSLab/FigStep?style=social) |
- stealing-part-lm-supplementary - part-lm-supplementary?style=social) |
- Hallucination-Attack - YuanGroup/Hallucination-Attack?style=social) |
- llm-hallucination-survey - hallucination-survey?style=social) |
- LMSanitator - wenlong/LMSanitator?style=social) |
- Imperio - TASR/Imperio?style=social) |
- Backdoor Attacks on Fine-tuned LLaMA - tuned LLaMA Models | ![GitHub stars](https://img.shields.io/github/stars/naimul011/backdoor_attacks_on_fine-tuned_llama?style=social) |
- CBA - Based Authentication for LLM Security | ![GitHub stars](https://img.shields.io/github/stars/MiracleHH/CBA?style=social) |
- MuScleLoRA - scenario Backdoor Fine-tuning of LLMs | ![GitHub stars](https://img.shields.io/github/stars/ZrW00/MuScleLoRA?style=social) |
- BadActs
- TrojText - ML-Research/TrojText?style=social) |
- AnyDoor - sg/AnyDoor?style=social) |
- PromptWare - powered Applications are Vulnerable to PromptWares | ![GitHub stars](https://img.shields.io/github/stars/StavC/PromptWares?style=social) |
-
📊 Community research articles
- 📄 Security ProbLLMs in xAI's Grok
- 📄 Persistent Pre-Training Poisoning of LLMs
- 📄 Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents
- 📄 Detecting Prompt Injection: BERT-based Classifier
- 📄 Practical LLM Security: Takeaways From a Year in the Trenches
- 📄 Bypassing Meta's LLaMA Classifier: A Simple Jailbreak
- 📄 Vulnerabilities in LangChain Gen AI
-
💡 Best Practices
-
🌐 Community
- LLMbotomy: Shutting the Trojan Backdoors
- Mind the Data Gap: Privacy Challenges in Autonomous AI Agents - agent AI systems |
- OWASP SLACK - top10-for-llm<br>• #ml-risk-top5<br>• #project-ai-community<br>• #project-mlsec-top10<br>• #team-llm_ai-secgov<br>• #team-llm-redteam<br>• #team-llm-v2-brainstorm |
- Awesome LLM Security
- PWNAI
- AiSec_X_Feed
- LVE_Project
- Lakera AI Security resource hub
- llm-testing-findings
- llm-testing-findings
-
Benchmarks
- LLM Security Guidance Benchmarks - source LLMs for security guidance effectiveness using SECURE dataset | ![GitHub stars](https://img.shields.io/github/stars/davisconsultingservices/llm_security_guidance_benchmarks?style=social) |
- SECURE
- NIST AI TEVV
- Taming the Beast: Inside the Llama 3 Red Teaming Process
-
Architecture risks
-
Monitoring
-
Watermarking
- MarkLLM - Source Toolkit for LLM Watermarking. |
-
Agentic security
-
🎓 Tutorials
-
DATA
-
📚 Books
- The Developer's Playbook for Large Language Model Security
- Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps
- Generative AI Security: Theories and Practices (Future of Business and Finance) - depth exploration of security theories, laws, terms and practices in Generative AI |
-
🏗 Frameworks
Programming Languages
Categories
RAG Security
36
Threat Modeling
34
LLMSecOps Life Cycle
25
PoC
22
PINT Benchmark scores (by lakera)
20
3 types of LLM architecture
20
Study resource
19
🌐 Community
10
OPS
8
📊 Community research articles
7
DATA
7
Jailbreaks
6
Agentic security
5
Benchmarks
4
3 types of models
3
💡 Best Practices
3
📚 Books
3
🎓 Tutorials
2
🏗 Frameworks
2
Architecture risks
1
LLM Intrpretability
1
Watermarking
1
Monitoring
1
Sub Categories
Keywords
llm
16
security
7
ai
6
jailbreak
4
llm-security
4
prompt-injection
3
nlp
3
machine-learning
3
large-language-models
3
hallucinations
3
chatgpt
2
gpt-4
2
rag
2
alignment
2
awesome-list
2
benchmark
2
cybersecurity
2
ai-security
2
genai
2
vlm
2
safety
2
vulnerability-assessment
1
security-tools
1
red-teaming
1
secure
1
llm-benchmarking
1
red-team-framework
1
red-team
1
rag-evaluation
1
python
1
misinformation
1
llm-testing
1
llm-red-team
1
jailbreaks
1
framework
1
attack
1
vlms
1
privacy
1
transformers-models
1
llm-prompting
1
templates
1
reporting
1
penetration-testing
1
gpt
1
awesome
1
research-paper
1
research
1
aisecurity
1
backdoor-attacks
1
reading-list
1