LLM security and privacy
https://github.com/briland/LLM-security-and-privacy
News Articles, Blog Posts, and Talks
- Free AI Programs Prone to Security Risks, Researchers Say
- Is Generative AI Dangerous?
- Adversarial examples in the age of ChatGPT
- LLMs in Security: Demos vs Deployment?
- Why 'Good AI' Is Likely The Antidote To The New Era Of AI Cybercrime
- Meet PassGPT, the AI Trained on Millions of Leaked Passwords
Papers
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection - Pre-print | 2023 | Prompt Injection | [![Code](https://img.shields.io/badge/GitHub-black?logo=github&logoColor=white)](https://github.com/greshake/llm-security) | TBD |
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins - Pre-print | 2023 | General | N/A | TBD |
- InjectAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents - Pre-print | 2024 | Prompt Injection | N/A | TBD |
- LLM Agents can Autonomously Hack Websites - Pre-print | 2024 | Applications | N/A | TBD |
- An Overview of Catastrophic AI Risks - Pre-print | 2023 | General | N/A | TBD |
- Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities - Pre-print | 2023 | General | N/A | TBD |
- LLM Censorship: A Machine Learning Challenge or a Computer Security Problem? - Pre-print | 2023 | General | N/A | TBD |
- Beyond the Safeguards: Exploring the Security Risks of ChatGPT - Pre-print | 2023 | General | N/A | TBD |
- Prompt Injection attack against LLM-integrated Applications - Pre-print | 2023 | Prompt Injection | N/A | TBD |
- Identifying and Mitigating the Security Risks of Generative AI - Pre-print | 2023 | General | N/A | TBD |
- PassGPT: Password Modeling and (Guided) Generation with Large Language Models - 2023 | [![Code](https://img.shields.io/badge/GitHub-black?logo=github&logoColor=white)](https://github.com/javirandor/passgpt) | TBD |
- Harnessing GPT-4 for generation of cybersecurity GRC policies: A focus on ransomware attack mitigation - Computers & Security | 2023 | Applications | N/A | TBD |
- Examining Zero-Shot Vulnerability Repair with Large Language Models - [IEEE S&P](https://www.ieee-security.org/TC/SP2023/program-papers.html) | 2023 | Applications | N/A | TBD |
- Chain-of-Verification Reduces Hallucination in Large Language Models - Pre-print | 2023 | Hallucinations | N/A | TBD |
- Pop Quiz! Can a Large Language Model Help With Reverse Engineering? - Pre-print | 2022 | Applications | N/A | TBD |
- Extracting Training Data from Large Language Models - USENIX Security | 2021 | Data Extraction | [![Code](https://img.shields.io/badge/GitHub-black?logo=github&logoColor=white)](https://github.com/ftramer/LM_Memorization) | TBD |
- Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications - Pre-print | 2024 | Prompt Injection | [![Code](https://img.shields.io/badge/GitHub-black?logo=github&logoColor=white)](https://github.com/StavC/ComPromptMized) | TBD |
- CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization - EMNLP | 2021 | Hallucinations | [![Code](https://img.shields.io/badge/GitHub-black?logo=github&logoColor=white)](https://github.com/ShuyangCao/cliff_summ) | TBD |
Tools
Frameworks & Taxonomies