Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/AI-secure/AgentPoison
[NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"
llm-agent red-team retrieval-augmented-generation
- Host: GitHub
- URL: https://github.com/AI-secure/AgentPoison
- Owner: AI-secure
- License: MIT
- Created: 2024-03-22T10:39:10.000Z (9 months ago)
- Default Branch: master
- Last Pushed: 2024-12-02T20:34:51.000Z (10 days ago)
- Last Synced: 2024-12-02T21:32:11.899Z (10 days ago)
- Topics: llm-agent, red-team, retrieval-augmented-generation
- Language: Python
- Homepage: https://billchan226.github.io/AgentPoison
- Size: 569 MB
- Stars: 66
- Watchers: 3
- Forks: 5
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-MLSecOps - AgentPoison
README
## [AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning](https://billchan226.github.io/AgentPoison)
🔥🔥 For recent news, please check the **[project page](https://billchan226.github.io/AgentPoison.html)**!
[![Project Page](https://img.shields.io/badge/Project-Page-Green)](https://billchan226.github.io/AgentPoison.html)
[![Arxiv](https://img.shields.io/badge/Paper-Arxiv-red)](https://arxiv.org/pdf/2407.12784)
[![License: MIT](https://img.shields.io/badge/License-MIT-g.svg)](https://opensource.org/licenses/MIT)
[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FBillChan226%2FAgentPoison&count_bg=%23FFCF00&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=Visitor&edge_flat=false)](https://hits.seeyoufarm.com)
[![GitHub Stars](https://img.shields.io/github/stars/BillChan226/AgentPoison?style=social)](https://github.com/BillChan226/AgentPoison/stargazers)

This repository provides the official PyTorch implementation of the following paper:
> [**AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning**](https://arxiv.org/abs/2407.12784)
> [Zhaorun Chen](https://billchan226.github.io/)<sup>1</sup>,
> [Zhen Xiang](https://zhenxianglance.github.io/)<sup>2</sup>,
> [Chaowei Xiao](https://xiaocw11.github.io/)<sup>3</sup>,
> [Dawn Song](https://dawnsong.io/)<sup>4</sup>,
> [Bo Li](https://aisecure.github.io/)<sup>1,2</sup>
>
> <sup>1</sup>University of Chicago, <sup>2</sup>University of Illinois Urbana-Champaign,
> <sup>3</sup>University of Wisconsin, Madison, <sup>4</sup>University of California, Berkeley
## :hammer_and_wrench: Installation
To install the required packages, run the following commands:
```bash
git clone https://github.com/BillChan226/AgentPoison.git
cd AgentPoison
conda env create -f environment.yml
conda activate agentpoison
```
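Before moving on, it can be worth sanity-checking that the environment resolved correctly. A minimal check, assuming the environment provides `torch` (this is a PyTorch implementation) and `transformers` (used to load the embedder checkpoints below):

```python
# Quick sanity check of the conda environment. Assumes it ships torch
# (the repo is a PyTorch implementation) and transformers (needed to
# load the embedder checkpoints listed below).
import torch
import transformers

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers", transformers.__version__)
```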
### RAG Embedder Checkpoints

You can download the embedder checkpoints from the links below, then specify the path to the embedder checkpoints in the `algo/config.yaml` file.
| Embedder | HF Checkpoints |
| -------------------- | ------------------- |
| [BERT](https://arxiv.org/pdf/1810.04805) | [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) |
| [DPR](https://arxiv.org/pdf/2004.04906) | [facebook/dpr-question_encoder-single-nq-base](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) |
| [ANCE](https://arxiv.org/pdf/2007.00808) | [castorini/ance-dpr-question-multi](https://huggingface.co/castorini/ance-dpr-question-multi) |
| [BGE](https://arxiv.org/pdf/2310.07554) | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) |
| [REALM](https://arxiv.org/pdf/2002.08909) | [google/realm-cc-news-pretrained-embedder](https://huggingface.co/google/realm-cc-news-pretrained-embedder) |
| [ORQA](https://arxiv.org/pdf/1906.00300) | [google/realm-orqa-nq-openqa](https://huggingface.co/google/realm-orqa-nq-openqa) |

You can also use custom embedders (e.g., ones you have fine-tuned yourself) as long as you specify their identifier and model path in the [config](algo/config.py).
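As a concrete reference for what these checkpoints are, the sketch below embeds a query with the DPR checkpoint from the table via the standard Hugging Face `transformers` API. This is illustrative only; AgentPoison itself loads embedders through the config above, not this snippet.

```python
# Illustrative only: embedding a query with the DPR checkpoint from the
# table using the standard Hugging Face transformers API. AgentPoison
# loads embedders through its own config instead of this snippet.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

name = "facebook/dpr-question_encoder-single-nq-base"
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(name)
encoder = DPRQuestionEncoder.from_pretrained(name)

inputs = tokenizer("what should the ego vehicle do next?", return_tensors="pt")
embedding = encoder(**inputs).pooler_output  # (1, 768) dense query vector
print(embedding.shape)
```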
## :smiling_imp: Trigger Optimization
After setting up the configuration for the embedders, you can run trigger optimization for all three agents using the following command:
```bash
python algo/trigger_optimization.py --agent ad --algo ap --model dpr-ctx_encoder-single-nq-base --save_dir ./results --ppl_filter --target_gradient_guidance --asr_threshold 0.5 --num_adv_passage_tokens 10 --golden_trigger -w -p
```
Specifically, the descriptions of the arguments are listed below:

| Argument | Example | Description |
| -------------------- | ------------------- | ------------- |
| `--agent` | `ad` | Specify the type of agent to red-team, [`ad`, `qa`, `ehr`]. |
| `--algo` | `ap` | Trigger optimization algorithm to use, [`ap`, `cpa`]. |
| `--model` | `dpr-ctx_encoder-single-nq-base` | Target RAG embedder to optimize, see a complete list above. |
| `--save_dir` | `./result` | Path to save the optimized trigger and procedural plots |
| `--num_iter` | `1000` | Number of iterations to run each gradient optimization |
| `--num_grad_iter` | `30` | Number of gradient accumulation steps |
| `--per_gpu_eval_batch_size` | `64` | Batch size for trigger optimization |
| `--num_cand` | `100` | Number of discrete tokens sampled per optimization |
| `--num_adv_passage_tokens` | `10` | Number of tokens in the trigger sequence |
| `--golden_trigger` | `False` | Whether to start with a golden trigger (will overwrite `--num_adv_passage_tokens`) |
| `--target_gradient_guidance` | `True` | Whether to guide the token update with target model loss |
| `--use_gpt` | `False` | Whether to approximate target model loss via MC sampling |
| `--asr_threshold` | `0.5` | ASR threshold for target model loss |
| `--ppl_filter` | `True` | Whether to enable coherence loss filter for token sampling |
| `--plot` | `False` | Whether to plot the procedural optimization of the embeddings |
| `--report_to_wandb` | `True` | Whether to report the results to wandb |

## :robot: Agent Experiment
We have modified the original code of [Agent-Driver](https://github.com/USC-GVL/Agent-Driver), [ReAct-StrategyQA](https://github.com/Jiuzhouh/Uncertainty-Aware-Language-Agent), and [EHRAgent](https://github.com/wshi83/EhrAgent) to support more RAG embedders and added an interface for data poisoning. We provide unified dataset access for all three agents [here](https://drive.google.com/drive/folders/1WNJlgEZA3El6PNudK_onP7dThMXCY60K?usp=sharing). Below, we list the inference commands for each agent.
### :car: Agent-Driver
First download the corresponding dataset from [here](https://drive.google.com/drive/folders/1WNJlgEZA3El6PNudK_onP7dThMXCY60K?usp=sharing) or the original [dataset host](https://drive.google.com/drive/folders/1BjCYr0xLTkLDN9DrloGYlerZQC1EiPie), and put it in `agentdriver/data`.
Then put the optimized trigger tokens [here](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L184); you can also set more attack parameters [here](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L187). Specifically, set `attack_or_not` to `False` to get the benign utility under attack.
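For illustration, the kind of edit expected at the linked lines looks roughly like the following; `trigger_sequence` is a hypothetical variable name (the actual names in `motion_planning.py` may differ), while `attack_or_not` is the parameter mentioned above:

```python
# Hypothetical illustration of the edit at the linked lines in
# agentdriver/planning/motion_planning.py; variable names other than
# attack_or_not are assumptions and may differ in the actual file.
trigger_sequence = "your optimized trigger tokens here"  # from trigger_optimization.py
attack_or_not = True  # set to False to measure benign utility under attack
```

With the trigger and attack parameters in place, run the following script for inference: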
```bash
sh scripts/agent_driver/run_inference.sh
```
The motion-planning results for ASR-r, ASR-a, and ACC will be printed at the end of the program, and the planned trajectory will be saved to `./result`. Run the following command to get ASR-t:
```bash
sh scripts/agent_driver/run_evaluation.sh
```

We provide more options for red-teaming Agent-Driver that cover **each individual component of an autonomous agent**, including the [perception APIs](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L257), [memory module](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L295), [ego-states](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L327), and [mission goal](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/motion_planning.py#L341).
You first need to follow the instructions [here](https://github.com/USC-GVL/Agent-Driver) and fine-tune a motion planner based on GPT-3.5 using [OpenAI's API](https://platform.openai.com/docs/guides/fine-tuning). As an alternative, we provide a motion planner fine-tuned from [LLaMA-3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [here](https://huggingface.co/Zhaorun/LLaMA-2-Agent-Driver-Motion-Planner), so that agent inference can run completely offline. Set `use_local_planner` [here](https://github.com/BillChan226/AgentPoison/blob/485d9702295ac40010b9a692b22adae18071726c/agentdriver/planning/planning_agent.py#L58) to `True` to enable this.
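If you want to inspect the released planner weights by hand, a minimal sketch is below. It assumes the checkpoint loads with the standard `AutoModelForCausalLM` API; if it is published as an adapter you would need `peft` instead, and in the repo the model is activated via `use_local_planner` rather than this snippet.

```python
# Sketch: loading the released motion-planner weights directly. Assumes a
# standard causal-LM checkpoint; the repo itself enables the local planner
# through the use_local_planner switch instead of this code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zhaorun/LLaMA-2-Agent-Driver-Motion-Planner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```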
### :memo: ReAct-StrategyQA
First download the corresponding dataset from [here](https://drive.google.com/drive/folders/1WNJlgEZA3El6PNudK_onP7dThMXCY60K?usp=sharing) or the StrategyQA [dataset](https://allenai.org/data/strategyqa), and put it in `ReAct/database`.
Then put the optimized trigger tokens [here](https://github.com/BillChan226/AgentPoison/blob/4de6c5ac5d3ea01f748aff85b9e8b844a3138eb3/ReAct/run_strategyqa_gpt3.5.py#L112). Run the following command to infer with the GPT backbone:
```bash
python ReAct/run_strategyqa_gpt3.5.py --model dpr --task_type adv
```
To infer with the LLaMA-3-70b backbone instead, you first need to obtain an API key from [Replicate](https://replicate.com/) to access [LLaMA-3](https://replicate.com/meta/meta-llama-3-70b-instruct) and put it [here](https://github.com/BillChan226/AgentPoison/blob/4de6c5ac5d3ea01f748aff85b9e8b844a3138eb3/ReAct/run_strategyqa_llama3_api.py#L17), then run:
```bash
python ReAct/run_strategyqa_llama3_api.py --model dpr --task_type adv
```

Specifically, set `--task_type` to `adv` to inject queries with the trigger, or to `benign` to get the benign utility under attack. You can also run the corresponding commands through `scripts/react_strategyqa`. The results will be saved to the path indicated by `--save_dir`.
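Regarding the Replicate API key mentioned above: the `replicate` Python client also falls back to the `REPLICATE_API_TOKEN` environment variable, so you may be able to export the key instead of hard-coding it at the linked line. Whether this particular script honors the environment variable is an assumption; treat this as a sketch:

```python
# Alternative to hard-coding the key at the linked line: the replicate
# Python client reads REPLICATE_API_TOKEN from the environment by default.
# Whether run_strategyqa_llama3_api.py uses that fallback depends on how
# the key is wired in, so this is a sketch, not the repo's own setup.
import os

os.environ["REPLICATE_API_TOKEN"] = "r8_..."  # your Replicate API key
```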
#### Evaluation
To evaluate the red-teaming performance for StrategyQA, simply run the following command:
```bash
python ReAct/eval.py -p [RESPONSE_PATH]
```

where `RESPONSE_PATH` is the path to the response JSON file.
### :man_health_worker: EHRAgent
First download the corresponding dataset from [here](https://drive.google.com/drive/folders/1WNJlgEZA3El6PNudK_onP7dThMXCY60K?usp=sharing) and put it under `EhrAgent/database`.
Then put the optimized trigger tokens [here](https://github.com/BillChan226/AgentPoison/blob/b8f9d6bb20de5a9fdad0047b85b2645aa9667785/EhrAgent/ehragent/main.py#L90). Run the following command to infer with GPT/LLaMA3:
```bash
python EhrAgent/ehragent/main.py --backbone gpt --model dpr --algo ap --attack
```

You can set `--backbone` to `llama3` to infer with LLaMA3, and set `--attack` to `False` to get the benign utility under attack. You can also run the corresponding commands through `scripts/ehragent`. The results will be saved to the path indicated by `--save_dir`.
#### Evaluation
To evaluate the red-teaming performance for EHRAgent, simply run the following command:
```bash
python EhrAgent/ehragent/eval.py -p [RESPONSE_PATH]
```

where `RESPONSE_PATH` is the path to the response JSON file.
Note that for each agent, you need to run the experiments twice: once with the trigger to get ASR-r, ASR-a, and ASR-t, and once without the trigger to get ACC (benign utility).
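If convenient, the two passes can be scripted. The sketch below does this for EHRAgent via `subprocess`; it assumes `--attack` is a store-true flag, as the command above suggests (if the script parses it as a valued option, pass `--attack False` instead):

```python
# Hypothetical driver for the two-pass protocol: one run with the trigger
# (ASR metrics) and one without (benign ACC). Assumes --attack is a
# store-true flag; adjust if the script expects an explicit value.
import subprocess

base = ["python", "EhrAgent/ehragent/main.py",
        "--backbone", "gpt", "--model", "dpr", "--algo", "ap"]
subprocess.run(base + ["--attack"], check=True)  # with trigger: ASR-r/ASR-a/ASR-t
subprocess.run(base, check=True)                 # without trigger: ACC
```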
## :book: Acknowledgement
Please cite the paper as follows if you use the data or code from AgentPoison:
```
@misc{chen2024agentpoisonredteamingllmagents,
title={AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases},
author={Zhaorun Chen and Zhen Xiang and Chaowei Xiao and Dawn Song and Bo Li},
year={2024},
eprint={2407.12784},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2407.12784},
}
```

## :book: Contact
Please reach out to us if you have any suggestions or need any help reproducing the results. You can submit an issue or pull request, or send an email to [email protected].

## :key: License
This repository is under [MIT License](LICENSE).
Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is under the BSD 3-Clause License ([here](LICENSE_Lavis.md)).