{"id":18832944,"url":"https://github.com/declare-lab/red-instruct","last_synced_at":"2025-04-14T04:31:51.861Z","repository":{"id":189602900,"uuid":"677632438","full_name":"declare-lab/red-instruct","owner":"declare-lab","description":"Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment","archived":false,"fork":false,"pushed_at":"2024-03-08T07:47:46.000Z","size":50341,"stargazers_count":77,"open_issues_count":1,"forks_count":11,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-10-18T23:13:59.415Z","etag":null,"topics":["huggingface-transformers","llama","llama2","llm","llms"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/declare-lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-08-12T05:38:06.000Z","updated_at":"2024-10-16T12:56:31.000Z","dependencies_parsed_at":"2023-10-15T10:46:56.615Z","dependency_job_id":"4ac13f00-63d2-4eb7-a951-9e9e1d0e7b22","html_url":"https://github.com/declare-lab/red-instruct","commit_stats":{"total_commits":64,"total_committers":4,"mean_commits":16.0,"dds":0.3125,"last_synced_commit":"73c759c93d2f6b1d8c7b058ab7bcec5fb0c28fa2"},"previous_names":["declare-lab/red-instruct"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/declare-lab%2Fred-instruct","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/declare-lab%2Fred-instruct/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/declare-lab%2Fred-instruct/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/declare-lab%2Fred-instruct/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/declare-lab","download_url":"https://codeload.github.com/declare-lab/red-instruct/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248821743,"owners_count":21166948,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["huggingface-transformers","llama","llama2","llm","llms"],"created_at":"2024-11-08T01:59:34.853Z","updated_at":"2025-04-14T04:31:46.841Z","avatar_url":"https://github.com/declare-lab.png","language":"Python","readme":"# Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment\n\n[**Paper**](https://arxiv.org/abs/2308.09662) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA) | [**Model**](https://huggingface.co/declare-lab/starling-7B)\n\n\n\u003e 📣 Update 2/02/24: Introducing Resta: **Safety Re-alignment of Language Models**. [**Paper**](https://arxiv.org/abs/2402.11746) [**Github**](https://github.com/declare-lab/resta) [**Dataset**](https://huggingface.co/datasets/declare-lab/CategoricalHarmfulQA)\n\n\u003e 📣 Update 26/10/23: Introducing our new red-teaming efforts: **Language Model Unalignment**. [**Link**](https://arxiv.org/pdf/2310.14303.pdf)\n\n**As a part of our efforts to make LLMs safer for public use, we provide:**\n- **Code to evaluate LLM safety against Chain of Utterances (CoU) based prompts-referred to as RedEval benchmark** \u003cimg src=\"https://github.com/declare-lab/red-instruct/assets/32847115/5678d7d7-5a0c-4d07-b600-1029aa58dbdc\" alt=\"Image\" width=\"30\" height=\"30\"\u003e\n\n- **Code to perform safety alignment of Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), with this we obtain a safer version of Vicuna which is more robust against RedEval. Please check out our [**Starling**](https://huggingface.co/declare-lab/starling-7B)**.\n\n## Red-Eval Benchmark\nSimple scripts to evaluate closed-source systems (ChatGPT, GPT4) and open-source LLMs on our benchmark red-eval.\n\nTo compute Attack Success Rate (ASR) Red-Eval uses two question-bank consisting of harmful questions:\n- [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA) (1,960 harmful questions covering 10 topics and ~10 subtopics each)\n- [**DangerousQA**](https://github.com/SALT-NLP/chain-of-thought-bias/blob/main/data/dangerous-q/toxic_outs.json) (200 harmful questions across 6 adjectives—racist, stereotypical, sexist, illegal, toxic, and harmful)\n- [**CategoricalQA**](https://huggingface.co/datasets/declare-lab/CategoricalHarmfulQA) (11 categories of harm, each with 5 sub-categories. Available in English, Chinese, and Vietnamese)\n- [**AdversarialQA**](https://github.com/llm-attacks/llm-attacks/blob/main/data/transfer_expriment_behaviors.csv) (a set of 500 instructions to tease out harmful behaviors from the model)\n\n### Installation\n```\nconda create --name redeval -c conda-forge python=3.11\nconda activate redeval\npip install -r requirements.txt\nconda install sentencepiece\n\nStore your API keys in api_keys directory! It will be used by LLM as judge (response evaluator) and generate_responses.py for closed-source models.\n```\n\n### How to perform red-teaming\n- **Step-0: Decide which prompt template you want to use for red-teaming.** As a part of our efforts, we provide a CoU-based prompt that is effective at breaking the safety guardrails of GPT4, ChatGPT, and open-source models.\n  - [Chain of Utterances (CoU)](https://github.com/declare-lab/red-instruct/blob/main/red_prompts/cou.txt)\n  - [Chain of Thoughts (CoT)](https://github.com/declare-lab/red-instruct/blob/main/red_prompts/cot.txt)\n  - [Standard prompt](https://github.com/declare-lab/red-instruct/blob/main/red_prompts/standard.txt)\n  - [Suffix prompt](https://github.com/declare-lab/red-instruct/blob/main/red_prompts/suffix.txt)\n\n    (_Note: Different LLMs may require slight variations in the above prompt template to generate meaningful outputs. To create a new template, you can refer to the above template files. Just make sure to have a \"\\\u003cquestion\\\u003e\" string in the prompt which is a placeholder for the harmful question._)\n    \n- **Step-1: Generate model outputs on harmful questions by providing a path to the question bank and red-teaming prompt:**\n\n  Closed-source models:\n```\n  #OpenAI\n  python generate_responses.py --model \"gpt4\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n  python generate_responses.py --model \"chatgpt\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n\n  #Claude Models\n  python generate_responses.py --model \"claude-3-opus-20240229\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n  python generate_responses.py --model \"claude-3-sonnet-20240229\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n  python generate_responses.py --model \"claude-2.1\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n  python generate_responses.py --model \"claude-2.0\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json \n```\n\n  Open-source models:\n```\n  #Llama-2\n  python generate_responses.py --model \"meta-llama/Llama-2-7b-chat-hf\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n\n  #Mistral\n  python generate_responses.py --model \"mistralai/Mistral-7B-Instruct-v0.2\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n\n  #Vicuna\n  python generate_responses.py --model \"lmsys/vicuna-7b-v1.3\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json\n```\n\n\nTo load models in 8-bit, we can specify --load_8bit as follows\n\n```\n  python generate_responses.py --model \"meta-llama/Llama-2-7b-chat-hf\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json --load_8bit\n```\n\nTo run on a subset of the harmful questions, we can specify --num_samples as follows\n\n```\n  python generate_responses.py --model \"meta-llama/Llama-2-7b-chat-hf\" --prompt red_prompts/[standard/cou/cot].txt --dataset harmful_questions/dangerousqa.json --num_samples 10\n```\n\n\n- **Step-2: Annotate the generated responses using gpt4-as-a-judge:**\n```\npython gpt4_as_judge.py --response_file results/dangerousqa_gpt4_cou.json --save_path results\n```\n\n### Results\nAttack Success Rate (ASR) of different red-teaming attempts.\n\n|    **Model**    | **DangerousQA (Standard)**   |   **DangerousQA (CoT)**   |  **DangerousQA (RedEval)**  |  **DangerousQA (Average)**  | **HarmfulQA (Standard)**   |   **HarmfulQA (CoT)**   |  **HarmfulQA (RedEval)**  |  **HarmfulQA (Average)**  |\n|:--------------:|:------------------:|:------------:|:-----------------:|:------------:|:------------:|:------------:|:-----------------:|:------------:|\n|     **GPT-4**     |        0         |       0      |      0.651      |     0.217     |       0        |      0.004     |      0.612      |     0.206     |\n|    **ChatGPT**    |        0         |     0.005    |      0.728      |     0.244     |     0.018      |    0.027      |      0.728      |     0.257     |\n|  **Vicuna-13B**   |     0.027      |     0.490    |      0.835      |     0.450     |       -        |      -        |       -        |       -       |\n|  **Vicuna-7B** |     0.025      |     0.532    |      0.875      |     0.477     |       -        |      -        |       -        |       -       |\n| **StableBeluga-13B** |     0.026      |     0.630    |      0.915      |     0.523     |       -        |      -        |       -        |       -       |\n| **StableBeluga-7B** |     0.102      |     0.755    |      0.915      |     0.590     |       -        |      -        |       -        |       -       |\n|**Vicuna-FT-7B**|     0.095      |     0.465    |      0.860      |     0.473     |       -        |      -        |       -        |       -       |\n| **Llama2-FT-7B** |     0.722      |     0.860    |      0.896      |     0.826     |       -        |      -        |       -        |       -       |\n|**Starling (Blue)** |     0.015      |     0.485    |      0.765      |     0.421     |       -        |      -        |       -        |       -       |\n|**Starling (Blue-Red)** |     0.050      |     0.570    |      0.855      |     0.492     |       -        |      -        |       -        |       -       |\n|     **Average**    |     0.116      |     0.479    |      0.830      |     0.471     |     0.010      |    0.016      |     0.67       |     0.232     |\n\n\n## Citation\n\n```bibtex\n@misc{bhardwaj2023redteaming,\n      title={Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment}, \n      author={Rishabh Bhardwaj and Soujanya Poria},\n      year={2023},\n      eprint={2308.09662},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n\n@misc{bhardwaj2024language,\n      title={Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic}, \n      author={Rishabh Bhardwaj and Do Duc Anh and Soujanya Poria},\n      year={2024},\n      eprint={2402.11746},\n      archivePrefix={arXiv},\n      primaryClass={cs.CL}\n}\n```\n","funding_links":[],"categories":["Anthropomorphic-Taxonomy"],"sub_categories":["Typical Emotional Quotient (EQ)-Alignment Ability evaluation benchmarks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeclare-lab%2Fred-instruct","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeclare-lab%2Fred-instruct","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeclare-lab%2Fred-instruct/lists"}