https://github.com/elder-plinius/CL4R1T4S
SYSTEM PROMPT TRANSPARENCY FOR ALL - CHATGPT, GEMINI, GROK, CLAUDE, PERPLEXITY, CURSOR, WINDSURF, DEVIN, REPLIT, AND MORE!
- Host: GitHub
- URL: https://github.com/elder-plinius/CL4R1T4S
- Owner: elder-plinius
- License: agpl-3.0
- Created: 2025-03-04T20:03:16.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-04-24T04:26:25.000Z (about 1 month ago)
- Last Synced: 2025-04-24T05:26:03.012Z (about 1 month ago)
- Topics: agents, ai, ai-ethics, chatgpt, cursor, gemini, google, grok, leak, openai, prompt-engineering, prompts, red-team, red-teaming, system-info, system-prompts, tools, transparency, windsurf, xai
- Homepage:
- Size: 211 KB
- Stars: 1,346
- Watchers: 51
- Forks: 405
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-chatgpt - elder-plinius/CL4R1T4S - CL4R1T4S is a project that promotes transparency by collecting and sharing the full system prompts and guidelines used by major AI models to help users understand and trust AI outputs. (Documentation and examples / Information about prompt engineering)
README
# CL4R1T4S
SYSTEM PROMPT TRANSPARENCY FOR ALL! Full system prompts, guidelines, and tools from OpenAI, Google, Anthropic, xAI, Perplexity, Cursor, Windsurf, Devin, Manus, Replit, and more – virtually all major AI models + agents!
## 📌 Why This Exists
"In order to trust the output, one must understand the input."
AI labs shape how models behave using massive, unseen prompt scaffolds. Because AI is a trusted external intelligence layer for a growing number of humans, these hidden instructions also affect the perceptions and behavior of the public.
These prompts define:
- What AIs can’t say
- What personas and functions they’re forced to follow
- How they’re told to lie, refuse, or redirect
- And what ethical/political frames are baked in by default
If you're using an AI without knowing its system prompt, you’re not talking to intelligence — you’re talking to a shadow-puppet.

CL4R1T4S is here to fix that.
## 🛠 Contribute
Leak, extract, or reverse-engineer something? Good.
Send a pull request with:
✅ Model name/version
🗓 Date of extraction (if known)
🧾 Context / notes (optional but helpful)
Or hit up @elder_plinius
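A contribution following the checklist above might look like this on the command line. This is a hedged sketch: the directory layout, file name, and metadata header are illustrative assumptions, not a format the repo prescribes, and in practice you would fork `elder-plinius/CL4R1T4S` on GitHub and push to your fork before opening the pull request.

```shell
# Illustrative only: a local demo repo stands in for a clone of your fork.
DEMO=/tmp/cl4r1t4s-demo
mkdir -p "$DEMO/OpenAI"
cd "$DEMO"
git init -q .

# Hypothetical prompt file — model name/version in the title, extraction
# date and context notes in a small header, per the checklist above.
cat > OpenAI/ChatGPT-example.md <<'EOF'
# ChatGPT (example-model-version)
Date of extraction: 2025-04-01 (if known)
Notes: context on how the prompt was obtained (optional but helpful)
---
(system prompt text goes here)
EOF

git add OpenAI/ChatGPT-example.md
git -c user.email=you@example.com -c user.name=you \
    commit -q -m "Add ChatGPT example system prompt"
git log --oneline
```

From there, `git push` to your fork and open the pull request against `main` on GitHub.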