Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-LLMSecOps
LLM | Security | Operations in one github repo with good links and pictures.
https://github.com/wearetyomsmnv/Awesome-LLMSecOps
Last synced: 1 day ago
JSON representation
-
3 types of LLM architecture
-
LLMSecOps Life Cycle
-
Threat Modeling
-
PINT Benchmark scores (by lakera)
- image
- image
- image
- image
- fmops/distilbert-prompt-injection - 06-12 |
- protectai/deberta-v3-base-prompt-injection-v2 - 06-12 |
- Azure AI Prompt Shield for User Prompts - 04-05 |
- Epivolis/Hyperion - 06-12 |
- Myadav/setfit-prompt-injection-MiniLM-L3-v2 - 06-12 |
- stanford university
- Meta Prompt Guard - 07-26 |
- protectai/deberta-v3-base-prompt-injection - 06-12 |
- WhyLabs LangKit - 06-12 |
- deepset/deberta-v3-base-injection - 06-12 |
- Lakera Guard - 06-12 |
- this
- image
- image
- image
- image
-
RAG Security
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- Security Risks in RAG - Augmented Generation (RAG) |
- Adversarial AI - RAG Attacks and Mitigations
- PoisonedRAG
- ConfusedPilot: Compromising Enterprise Information Integrity and Confidentiality with Copilot for Microsoft 365
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
- How RAG Poisoning Made LLaMA3 Racist
-
Study resource
- image
- image
- image
- image
- Salt Security Blog: ChatGPT Extensions Vulnerabilities
- Gandalf
- Prompt Airlines
- Invariant Labs CTF 2024
- DeepLearning.AI Red Teaming Course
- AI Battle
- Application Security LLM Testing
- safeguarding-llms
- Damn Vulnerable LLM Agent
- GPT Agents Arena
- Learn Prompting: Offensive Measures
- image
- image
- image
- image
-
OPS
-
LLM Intrpretability
-
Architecture risks
-
Jailbreaks
-
Monitoring
-
Watermarking
- MarkLLM - Source Toolkit for LLM Watermarking. |
-
Agentic security
-
PoC
- Visual Adversarial Examples - Adversarial-Examples-Jailbreak-Large-Language-Models?style=social) |
- Weak-to-Strong Generalization - to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision | ![GitHub stars](https://img.shields.io/github/stars/XuandongZhao/weak-to-strong?style=social) |
- Image Hijacks - based hijacks of large language models | ![GitHub stars](https://img.shields.io/github/stars/euanong/image-hijacks?style=social) |
- CipherChat
- LLMs Finetuning Safety - tuning large language models | ![GitHub stars](https://img.shields.io/github/stars/LLM-Tuning-Safety/LLMs-Finetuning-Safety?style=social) |
- Virtual Prompt Injection - prompt-injection?style=social) |
- FigStep - language Models via Typographic Visual Prompts | ![GitHub stars](https://img.shields.io/github/stars/ThuCCSLab/FigStep?style=social) |
- stealing-part-lm-supplementary - part-lm-supplementary?style=social) |
- Hallucination-Attack - YuanGroup/Hallucination-Attack?style=social) |
- llm-hallucination-survey - hallucination-survey?style=social) |
- LMSanitator - wenlong/LMSanitator?style=social) |
- Imperio - TASR/Imperio?style=social) |
- Backdoor Attacks on Fine-tuned LLaMA - tuned LLaMA Models | ![GitHub stars](https://img.shields.io/github/stars/naimul011/backdoor_attacks_on_fine-tuned_llama?style=social) |
- CBA - Based Authentication for LLM Security | ![GitHub stars](https://img.shields.io/github/stars/MiracleHH/CBA?style=social) |
- MuScleLoRA - scenario Backdoor Fine-tuning of LLMs | ![GitHub stars](https://img.shields.io/github/stars/ZrW00/MuScleLoRA?style=social) |
- BadActs
- TrojText - ML-Research/TrojText?style=social) |
- AnyDoor - sg/AnyDoor?style=social) |
- PromptWare - powered Applications are Vulnerable to PromptWares | ![GitHub stars](https://img.shields.io/github/stars/StavC/PromptWares?style=social) |
-
📊 Community research articles
-
🎓 Tutorials
-
DATA
-
🌐 Community
- OWASP SLACK - top10-for-llm<br>• #ml-risk-top5<br>• #project-ai-community<br>• #project-mlsec-top10<br>• #team-llm_ai-secgov<br>• #team-llm-redteam<br>• #team-llm-v2-brainstorm |
- Awesome LLM Security
- PWNAI
- AiSec_X_Feed
- LVE_Project
- Lakera AI Security resource hub
- llm-testing-findings
- llm-testing-findings
-
📚 Books
- The Developer's Playbook for Large Language Model Security
- Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps
- Generative AI Security: Theories and Practices (Future of Business and Finance) - depth exploration of security theories, laws, terms and practices in Generative AI |
-
🏗 Frameworks
Programming Languages
Categories
RAG Security
30
Threat Modeling
29
LLMSecOps Life Cycle
20
PINT Benchmark scores (by lakera)
20
PoC
19
Study resource
19
3 types of LLM architecture
18
OPS
8
🌐 Community
8
DATA
7
Agentic security
5
Jailbreaks
5
📊 Community research articles
4
🎓 Tutorials
3
📚 Books
3
🏗 Frameworks
2
Architecture risks
1
LLM Intrpretability
1
Watermarking
1
Monitoring
1
Sub Categories
Keywords
llm
13
security
6
jailbreak
3
llm-security
3
prompt-injection
3
large-language-models
3
ai
3
machine-learning
3
alignment
2
gpt-4
2
chatgpt
2
hallucinations
2
awesome-list
2
nlp
2
genai
2
observability
1
nlg
1
prompt-engineering
1
rag
1
retrieval-augmented-generation
1
hacking
1
trustworthy-ai
1
agents
1
watermark
1
toolkit
1
llm-agent
1
llm-benchmarking
1
benchmark
1
transformers-models
1
llm-prompting
1
cybersecurity
1
templates
1
reporting
1
penetration-testing
1
gpt
1
awesome
1
research-paper
1
research
1
aisecurity
1
backdoor-attacks
1
ai-security
1
reading-list
1
llm-safety
1
deep-learning
1
ai-safety
1
adversarial-attacks
1
generative-ai
1
vlm
1
safety
1
multi-modal
1