Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/N372unn32/AI-ML-LLM-security-resources
list of resources for AI/ML/LLM security
- Host: GitHub
- URL: https://github.com/N372unn32/AI-ML-LLM-security-resources
- Owner: N372unn32
- License: mit
- Created: 2024-03-07T18:24:53.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-03-17T18:18:02.000Z (10 months ago)
- Last Synced: 2024-03-17T19:30:14.550Z (10 months ago)
- Topics: ai, aisecurity, llm, ml, mlsecurity, resources, security
- Homepage:
- Size: 19.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-LLM4Security - AI/ML/LLM-security-resources
README
# AI/ML/LLM-security-resources
Bookmarks: a list of resources for AI/ML/LLM security.

## Table of Contents

- [Blogs / PPTs / Sites](#blogs--ppts--sites)
- [Courses / Videos](#courses--videos)
- [Books / Papers](#books--papers)
- [Tools](#tools)

## Blogs / PPTs / Sites
| Title | Author | Link |
| ----- | ------ | ---- |
| Blogs at DEFCON AI Village| DEFCON AI Village | [aivillage.org](https://aivillage.org/blog/) |
| Zen and the Art of Adversarial Machine Learning | Will Pearce, Giorgio Severi | [blackhat.com](https://i.blackhat.com/EU-21/Thursday/EU-21-Pearce-Zen-And-The-Art-Of-Adversarial-ML.pdf) |
| AI Red Team: Machine Learning Security Training | Will Pearce, Joseph Lucas, Rich Harang and John Irwin | [developer.nvidia.com](https://developer.nvidia.com/blog/ai-red-team-machine-learning-security-training/) |
| NVIDIA AI Red Team: An Introduction | Will Pearce and Joseph Lucas | [developer.nvidia.com](https://developer.nvidia.com/blog/nvidia-ai-red-team-an-introduction/) |
| Increasing transparency in AI security | Mihai Maruseac, Sarah Meiklejohn, Mark Lodato, Google Open Source Security Team (GOSST) | [security.googleblog.com](https://security.googleblog.com/2023/10/increasing-transparency-in-ai-security.html) |
| PIPE - Prompt Injection Primer for Engineers | jthack | [github.com](https://github.com/jthack/PIPE) |
| AI-Powered Fuzzing: Breaking the Bug Hunting Barrier | Dongge Liu, Jonathan Metzman, Oliver Chang, Google Open Source Security Team | [security.googleblog.com](https://security.googleblog.com/2023/08/ai-powered-fuzzing-breaking-bug-hunting.html) |
| Secure AI Framework Approach | Google | [services.google.com](https://services.google.com/fh/files/blogs/google_secure_ai_framework_approach.pdf) |
| Securing the AI Pipeline | Dan Browne, Muhammad Muneer | [mandiant.com](https://www.mandiant.com/resources/blog/securing-ai-pipeline) |
| Microsoft’s open automation framework to red team generative AI Systems | Ram Shankar Siva Kumar | [microsoft.com](https://www.microsoft.com/en-us/security/blog/2024/02/22/announcing-microsofts-open-automation-framework-to-red-team-generative-ai-systems/) |
| Microsoft AI Red Team | Microsoft Learn | [learn.microsoft.com](https://learn.microsoft.com/en-us/security/ai-red-team/) |
| OWASP Machine Learning Security Top Ten | OWASP | [owasp.org](https://owasp.org/www-project-machine-learning-security-top-10/) |
| OWASP AI Top Ten | OWASP | [owasp.org](https://owasp.org/www-project-ai-top-ten/) |
| OWASP Top 10 for Large Language Model Applications | OWASP | [owasp.org](https://owasp.org/www-project-top-10-for-large-language-model-applications/) |
| Adversarial ML Threat Matrix | MITRE | [github.com](https://github.com/mitre/advmlthreatmatrix) |
| Welcome to the Offensive ML Playbook | @whitehacksec | [wiki.offsecml.com](https://wiki.offsecml.com/Welcome+to+the+Offensive+ML+Playbook) |

## Courses / Videos
| Title | Author | Link |
| ----------- | -------- | ---- |
| AI Application Security: Understanding Prompt Injection Attacks and Mitigations | rez0 | [youtube.com](https://www.youtube.com/watch?v=MxxPbN9GGYE) |
| Red Teaming LLMs with Jupyter Notebooks: A Practical Guide | Pete Bryan | [youtube.com (Timestamp - 2:12)](https://www.youtube.com/watch?v=5CK-hpSYOkQ) |
| Learn from Microsoft’s AI Red Team on how to make your organization safer | Gary Lopez | [brighttalk.com](https://www.brighttalk.com/webcast/10415/607319) |

## Books / Papers
| Title | Author | Link |
| ----------- | -------- | ---- |
| Jailbreaking Black Box Large Language Models in Twenty Queries | Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong | [arxiv.org](https://arxiv.org/abs/2310.08419) |
| Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | Apostol Vassilev, Alina Oprea, Alie Fordyce, Hyrum Anderson | [nvlpubs.nist.gov](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf) |
| Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them | Ram Shankar Siva Kumar, Hyrum Anderson | [amazon.com](https://www.amazon.com/Not-Bug-But-Sticker-Learning/dp/1119883989) |

## Tools
| Title | Author | Link |
| ----------- | ------ | ---- |
| Python Risk Identification Tool for generative AI (PyRIT) | Azure | [github.com](https://github.com/Azure/PyRIT) |
| Counterfit | Azure | [github.com](https://github.com/Azure/counterfit) |
| garak, LLM vulnerability scanner | leondz | [github.com](https://github.com/leondz/garak) |
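
As a quick illustration of how one of the scanners above might be run, here is a minimal sketch that shells out to garak's command-line interface. The flag names (`--model_type`, `--model_name`, `--probes`) and the probe module shown are assumptions based on garak's published README and may change between releases; check `python -m garak --help` before relying on them.

```python
# Minimal sketch: run a single garak probe against a small Hugging Face model.
# Assumes garak is installed (`pip install garak`) and that the CLI flags below
# still match garak's documented interface.
import subprocess
import sys

def run_garak_scan(model_name: str = "gpt2", probe: str = "dan") -> int:
    """Invoke the garak CLI as a subprocess and return its exit code."""
    cmd = [
        sys.executable, "-m", "garak",
        "--model_type", "huggingface",  # generator family to load
        "--model_name", model_name,     # model identifier on the Hugging Face Hub
        "--probes", probe,              # probe module to run (here, DAN-style jailbreaks)
    ]
    completed = subprocess.run(cmd)
    return completed.returncode

if __name__ == "__main__":
    raise SystemExit(run_garak_scan())
```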