https://github.com/promptslab/llm-prompt-vulnerabilities
Prompts Methods to find the vulnerabilities in Generative Models
https://github.com/promptslab/llm-prompt-vulnerabilities
chatgpt gpt-3 gpt-3-prompts large-language-models openai openai-api prompt prompt-engineering prompt-learning prompt-tuning prompt-vulnerability prompting prompts
Last synced: 4 months ago
JSON representation
Prompts Methods to find the vulnerabilities in Generative Models
- Host: GitHub
- URL: https://github.com/promptslab/llm-prompt-vulnerabilities
- Owner: promptslab
- Created: 2023-02-12T07:53:41.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-02-23T22:43:45.000Z (over 2 years ago)
- Last Synced: 2025-04-08T23:55:56.294Z (6 months ago)
- Topics: chatgpt, gpt-3, gpt-3-prompts, large-language-models, openai, openai-api, prompt, prompt-engineering, prompt-learning, prompt-tuning, prompt-vulnerability, prompting, prompts
- Homepage:
- Size: 6.84 KB
- Stars: 18
- Watchers: 2
- Forks: 3
- Open Issues: 1
-
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
![]()
LLM & Prompt Vulnerabilities
Finding and documentating vulnerabilities in Generative Models based on prompt-engineering
![]()
![]()
| Name | Description | proof |
| :-------------------- | :----------: | :----------: |
| **Prompt In the Middle (PITM)?** | Injecting prompt to access other's output | [[Proof]](https://sharegpt.com/c/nrCPDzJ) |
| **Nested Prompt Attack (Need a better name :D)** | While Providing nested prompts, the model ignores the initial instructions | [[Proof]](https://sharegpt.com/c/BWyhgyN) |