{"id":17789502,"url":"https://github.com/benderscript/owasp_llm_analysis","last_synced_at":"2026-02-12T22:03:43.474Z","repository":{"id":207464759,"uuid":"719292718","full_name":"BenderScript/owasp_llm_analysis","owner":"BenderScript","description":"Large Language Models Security Analysis","archived":false,"fork":false,"pushed_at":"2023-11-16T18:42:27.000Z","size":82,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-09-12T18:29:21.013Z","etag":null,"topics":["analysis","llm","owasp","security","top10"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/BenderScript.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-11-15T21:29:18.000Z","updated_at":"2024-10-21T11:06:09.000Z","dependencies_parsed_at":null,"dependency_job_id":"753dea61-fe40-4205-9e94-f96b8086ade3","html_url":"https://github.com/BenderScript/owasp_llm_analysis","commit_stats":null,"previous_names":["benderscript/owasp_llm_analysis"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/BenderScript/owasp_llm_analysis","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2Fowasp_llm_analysis","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2Fowasp_llm_analysis/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/BenderScript%2Fowasp_llm_analysis/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2Fowasp_llm_analysis/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/BenderScript","download_url":"https://codeload.github.com/BenderScript/owasp_llm_analysis/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/BenderScript%2Fowasp_llm_analysis/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29382885,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-12T20:34:40.886Z","status":"ssl_error","status_checked_at":"2026-02-12T20:23:00.490Z","response_time":55,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["analysis","llm","owasp","security","top10"],"created_at":"2024-10-27T10:32:41.855Z","updated_at":"2026-02-12T22:03:43.469Z","avatar_url":"https://github.com/BenderScript.png","language":null,"readme":"# Large Language Model Security Analysis\n\nWelcome to our repository dedicated to the security considerations of Large Language Models (LLMs). 
This repository houses a comprehensive collection of documents addressing various aspects of LLM security, application security, and practical use cases.\n\n## Repository Contents\n\n- **[Index of Documentation](./index.md)**: Navigate to all the documents in the repository for in-depth information on LLM security.\n\n## Research Summary\n\nThis repository's research focuses on the security aspects of LLMs, including the risks associated with training data, prompt injections, and application-level security for LLM-based systems like chatbots. It covers the importance of balancing preventive measures, like clean training data, with reactive strategies against risks outlined in the OWASP LLM Top 10. Additionally, it explores specific scenarios like the integration of LLMs in enterprise environments with OpenAI and open-source models, providing insights into unique challenges and security considerations in these contexts.\n\n## Getting Started\n\nTo dive into the specific topics, please refer to the [Index of Documentation](./index.md) which will guide you to the detailed documents.\n\nThank you for visiting our repository!\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenderscript%2Fowasp_llm_analysis","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbenderscript%2Fowasp_llm_analysis","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbenderscript%2Fowasp_llm_analysis/lists"}