{"id":13919541,"url":"https://github.com/brisacoder/PromptGuardian","last_synced_at":"2025-07-18T17:31:18.324Z","repository":{"id":218470424,"uuid":"746181314","full_name":"BenderScript/PromptGuardian","owner":"BenderScript","description":"All-in-one App that Checks LLM prompts for Injection, Data Leaks and Malicious URLs.","archived":false,"fork":false,"pushed_at":"2024-03-27T06:54:41.000Z","size":36286,"stargazers_count":3,"open_issues_count":3,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-11-09T22:43:56.887Z","etag":null,"topics":["attacks","injection","llm","openai","prompt"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/BenderScript.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2024-01-21T10:09:17.000Z","updated_at":"2024-06-01T05:01:40.000Z","dependencies_parsed_at":"2024-03-26T17:51:04.488Z","dependency_job_id":null,"html_url":"https://github.com/BenderScript/PromptGuardian","commit_stats":null,"previous_names":["benderscript/promptguardian"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2FPromptGuardian","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2FPromptGuardian/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2FPromptGuardian/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2FPromptGuardian/manifests","owner_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/BenderScript","download_url":"https://codeload.github.com/BenderScript/PromptGuardian/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":226431418,"owners_count":17623980,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["attacks","injection","llm","openai","prompt"],"created_at":"2024-08-07T05:06:59.522Z","updated_at":"2025-07-18T17:31:12.860Z","avatar_url":"https://github.com/BenderScript.png","language":"Python",
"readme":"# Prompt Guardian\n\nThis application checks your prompt against a database of\nmalicious URLs and determines whether it is a prompt injection attack.\n\nIt uses OpenAI to detect prompt injection attacks. Why? Because it's fun.\nSeriously, OpenAI has very good guardrails to prevent you from doing dumb things. Therefore,\neven if you use another LLM, your prompt will be checked against OpenAI.\n\nFinally, it checks your prompt for DLP (Data Loss Prevention), using an abstracted\nAPI to check your prompt against a database.\n\n## OpenAI API Key\n\nYou should have an environment variable called `OPENAI_API_KEY` set to your OpenAI API key. Alternatively,\nyou can create a `.env` file in the project root directory with the following content:\n\n```bash\nOPENAI_API_KEY=\u003cyour-api-key\u003e\n```\n\n## Installing Dependencies\n\n```bash\npip3 install -r requirements.txt\n```\n\n## Running\n\nFrom the project root directory, run the following command:\n\n```bash\nuvicorn prompt_guardian.server:prompt_guardian_app --reload --port 9001\n```\n\nIf everything goes well, you should see the following page at http://127.0.0.1:9001\n\n![Landing page](images/landing.png)\n\n## Testing\n\nSee the demo below, where the app checks a prompt with a malicious URL and injection.\n\n![Demo](images/prompt_guardian_demo.gif)\n","funding_links":[],"categories":["🔐 Secure Prompting"],"sub_categories":["Hall Of Fame:"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbrisacoder%2FPromptGuardian","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbrisacoder%2FPromptGuardian","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbrisacoder%2FPromptGuardian/lists"}