Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-LLMSecOps
LLM | Security | Operations in one github repo with good links and pictures.
https://github.com/wearetyomsmnv/Awesome-LLMSecOps
Last synced: 4 days ago
JSON representation
-
3 types of LLM architecture
-
LLMSecOps Life Cycle
-
PINT Benchmark scores (by lakera)
- image
- image
- fmops/distilbert-prompt-injection - 06-12 |
- protectai/deberta-v3-base-prompt-injection-v2 - 06-12 |
- Azure AI Prompt Shield for User Prompts - 04-05 |
- Epivolis/Hyperion - 06-12 |
- Myadav/setfit-prompt-injection-MiniLM-L3-v2 - 06-12 |
- stanford university
- Meta Prompt Guard - 07-26 |
- protectai/deberta-v3-base-prompt-injection - 06-12 |
- WhyLabs LangKit - 06-12 |
- deepset/deberta-v3-base-injection - 06-12 |
- Lakera Guard - 06-12 |
- this
- image
-
RAG Security
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- image
- How RAG Poisoning Made LLaMA3 Racist
- Security Risks in RAG - Augmented Generation (RAG) |
- Adversarial AI - RAG Attacks and Mitigations
- PoisonedRAG
- ConfusedPilot: Compromising Enterprise Information Integrity and Confidentiality with Copilot for Microsoft 365
- image
-
Threat Modeling
-
Study resource
-
OPS
-
Architecture risks
-
Jailbreaks
-
Monitoring
-
Watermarking
- MarkLLM - Source Toolkit for LLM Watermarking. |
-
Agentic security
- invariant - ai/invariant?style=social) |
- AgentBench
-
PoC
- Visual Adversarial Examples - Adversarial-Examples-Jailbreak-Large-Language-Models?style=social) |
- Weak-to-Strong Generalization - to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision | ![GitHub stars](https://img.shields.io/github/stars/XuandongZhao/weak-to-strong?style=social) |
- Image Hijacks - based hijacks of large language models | ![GitHub stars](https://img.shields.io/github/stars/euanong/image-hijacks?style=social) |
- CipherChat
- LLMs Finetuning Safety - tuning large language models | ![GitHub stars](https://img.shields.io/github/stars/LLM-Tuning-Safety/LLMs-Finetuning-Safety?style=social) |
- Virtual Prompt Injection - prompt-injection?style=social) |
- FigStep - language Models via Typographic Visual Prompts | ![GitHub stars](https://img.shields.io/github/stars/ThuCCSLab/FigStep?style=social) |
- stealing-part-lm-supplementary - part-lm-supplementary?style=social) |
- Hallucination-Attack - YuanGroup/Hallucination-Attack?style=social) |
- llm-hallucination-survey - hallucination-survey?style=social) |
- LMSanitator - wenlong/LMSanitator?style=social) |
- Imperio - TASR/Imperio?style=social) |
- Backdoor Attacks on Fine-tuned LLaMA - tuned LLaMA Models | ![GitHub stars](https://img.shields.io/github/stars/naimul011/backdoor_attacks_on_fine-tuned_llama?style=social) |
- CBA - Based Authentication for LLM Security | ![GitHub stars](https://img.shields.io/github/stars/MiracleHH/CBA?style=social) |
- MuScleLoRA - scenario Backdoor Fine-tuning of LLMs | ![GitHub stars](https://img.shields.io/github/stars/ZrW00/MuScleLoRA?style=social) |
- BadActs
- TrojText - ML-Research/TrojText?style=social) |
- AnyDoor - sg/AnyDoor?style=social) |
- PromptWare - powered Applications are Vulnerable to PromptWares | ![GitHub stars](https://img.shields.io/github/stars/StavC/PromptWares?style=social) |
-
📊 Community research articles
-
🎓 Tutorials
-
DATA
-
🌐 Community
- OWASP SLACK - top10-for-llm<br>• #ml-risk-top5<br>• #project-ai-community<br>• #project-mlsec-top10<br>• #team-llm_ai-secgov<br>• #team-llm-redteam<br>• #team-llm-v2-brainstorm |
- Awesome LLM Security
- PWNAI
- AiSec_X_Feed
- LVE_Project
- Lakera AI Security resource hub
- llm-testing-findings
-
📚 Books
- The Developer's Playbook for Large Language Model Security
- Generative AI Security: Theories and Practices (Future of Business and Finance) - depth exploration of security theories, laws, terms and practices in Generative AI |
- Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional's guide to AI attacks, threat modeling, and securing AI with MLSecOps
Programming Languages
Categories
PoC
19
PINT Benchmark scores (by lakera)
15
Study resource
14
RAG Security
10
Threat Modeling
9
DATA
7
🌐 Community
7
LLMSecOps Life Cycle
5
Jailbreaks
4
📊 Community research articles
4
📚 Books
3
OPS
3
3 types of LLM architecture
3
🎓 Tutorials
3
Agentic security
2
Watermarking
1
Architecture risks
1
Monitoring
1
Sub Categories
Keywords
llm
13
security
6
jailbreak
3
llm-security
3
prompt-injection
3
ai
3
machine-learning
3
alignment
2
gpt-4
2
chatgpt
2
hallucinations
2
nlp
2
genai
2
large-language-models
2
nlg
1
observability
1
prompt-engineering
1
rag
1
retrieval-augmented-generation
1
hacking
1
trustworthy-ai
1
agents
1
watermark
1
toolkit
1
llm-agent
1
llm-benchmarking
1
benchmark
1
transformers-models
1
llm-prompting
1
cybersecurity
1
templates
1
reporting
1
penetration-testing
1
gpt
1
awesome-list
1
awesome
1
research-paper
1
research
1
aisecurity
1
backdoor-attacks
1
ai-security
1
llm-safety
1
deep-learning
1
ai-safety
1
adversarial-attacks
1
generative-ai
1
vlm
1
safety
1
multi-modal
1
llm-finetuning
1