{"id":13788616,"url":"https://github.com/controllability/jailbreak-evaluation","last_synced_at":"2025-05-12T03:30:32.716Z","repository":{"id":229183005,"uuid":"774990581","full_name":"controllability/jailbreak-evaluation","owner":"controllability","description":"The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.","archived":false,"fork":false,"pushed_at":"2024-11-04T16:07:47.000Z","size":402,"stargazers_count":22,"open_issues_count":0,"forks_count":6,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-04-27T04:05:25.821Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2404.06407","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/controllability.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2024-03-20T15:09:38.000Z","updated_at":"2025-03-29T04:53:11.000Z","dependencies_parsed_at":"2024-04-15T05:18:18.560Z","dependency_job_id":null,"html_url":"https://github.com/controllability/jailbreak-evaluation","commit_stats":null,"previous_names":["controllability/jailbreak-evaluation"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/controllability%2Fjailbreak-evaluation","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/controllability%2Fjailbreak-evaluation/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/controllability%2Fjailbreak-evaluation/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/controllability%2Fjailbreak-evaluation/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/controllability","download_url":"https://codeload.github.com/controllability/jailbreak-evaluation/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253667918,"owners_count":21944939,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T21:00:51.075Z","updated_at":"2025-05-12T03:30:32.354Z","avatar_url":"https://github.com/controllability.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/controllability/jailbreak-evaluation/main/logo.png\"\u003e\n\u003c/p\u003e\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](./LICENSE)\n![PyPI](https://img.shields.io/pypi/v/jailbreak-evaluation.svg)\n![GitHub stars](https://img.shields.io/github/stars/controllability/jailbreak-evaluation.svg)\n![GitHub forks](https://img.shields.io/github/forks/controllability/jailbreak-evaluation.svg)\n\nThe jailbreak-evaluation is an easy-to-use Python package 
It is designed for comprehensive and accurate evaluation of language model jailbreak attempts.
Currently, jailbreak-evaluation supports evaluating a language model jailbreak attempt on multiple metrics: **Safeguard Violation** and **Relative Truthfulness**.

This is the official package repository for "[Rethinking How to Evaluate Language Model Jailbreak](https://arxiv.org/abs/2404.06407)" by Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, and Z. Berkay Celik.

## Installation

**Note on PyTorch and FastChat installation**:
jailbreak-evaluation depends on [PyTorch](https://pytorch.org/get-started/locally/) and [FastChat](https://github.com/lm-sys/FastChat?tab=readme-ov-file#install).
To keep its own installation simple, jailbreak-evaluation does **NOT** install these libraries for you; please install them first by following the linked instructions.

We recommend installing with **pip**.
Please make sure you have **the latest version** installed, as jailbreak-evaluation is updated frequently:

```bash
python -m pip install --upgrade pip

# Install PyTorch first: https://pytorch.org/get-started/locally/
# Install FastChat next: https://github.com/lm-sys/FastChat?tab=readme-ov-file#install

# Install jailbreak-evaluation
python -m pip install -U jailbreak-evaluation

# Download the Natural Language Toolkit (NLTK) "punkt" tokenizer data
python -c "import nltk; nltk.download('punkt')"
```

## Quick Start

The following example evaluates a single language model jailbreak attempt with jailbreak-evaluation.
You need an OpenAI API key, which can be obtained from the [OpenAI Platform](https://platform.openai.com/api-keys).

```python
from jailbreak_evaluation import MultifacetedEvaluation

# Initialize the jailbreak evaluation
openai_api_key = "your openai_api_key"
multifaceted_evaluation = MultifacetedEvaluation(openai_api_key)

# Construct a jailbreak attempt example
intent = "intent"
response = "response"

# Evaluate the above jailbreak attempt example
result = multifaceted_evaluation.evaluate(intent, response)

print("The evaluation result for all levels:")
print(result)

print("The evaluation result for the combination level, which is our most accurate result:")
print(result.combination_level)
```
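To score several attempts, the same two calls can be reused in a loop. The sketch below is illustrative only: the `attempts` list and its intent/response strings are made-up placeholders, and it assumes nothing beyond the `MultifacetedEvaluation` constructor and `evaluate` method shown above.

```python
from jailbreak_evaluation import MultifacetedEvaluation

# Hypothetical batch of (intent, response) pairs to score; replace with real data.
attempts = [
    ("how to pick a lock", "I cannot help with that."),
    ("how to pick a lock", "Step 1: insert the tension wrench ..."),
]

# "your openai_api_key" is a placeholder for your actual OpenAI API key.
multifaceted_evaluation = MultifacetedEvaluation("your openai_api_key")

for intent, response in attempts:
    result = multifaceted_evaluation.evaluate(intent, response)
    # The combination level is the most accurate result, per the Quick Start above.
    print(intent, "->", result.combination_level)
```

As in the Quick Start, printing the full `result` instead of `result.combination_level` shows the evaluation at every level.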
## Contributing

Please let us know if you encounter a bug or have a suggestion by [filing an issue](https://github.com/controllability/jailbreak-evaluation/issues).

We welcome all contributions, from bug fixes to new features and extensions.

We expect all contributions to be discussed in the issue tracker and submitted through pull requests.

## Cite

If you use jailbreak-evaluation in a scientific publication, we would appreciate a citation to the following paper:

```
@article{cai2024rethinking,
  title={Rethinking How to Evaluate Language Model Jailbreak},
  author={Hongyu Cai and Arjun Arunasalam and Leo Y. Lin and Antonio Bianchi and Z. Berkay Celik},
  year={2024},
  journal={arXiv preprint arXiv:2404.06407}
}
```

## The Team

jailbreak-evaluation is developed and maintained by [PurSec Lab](https://pursec.cs.purdue.edu/).

## License

jailbreak-evaluation is released under the Apache License 2.0.