🤠 Agent-as-a-Judge and DevAI dataset
- Host: GitHub
- URL: https://github.com/metauto-ai/agent-as-a-judge
- Owner: metauto-ai
- License: MIT
- Created: 2024-10-16T01:38:20.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-27T05:41:31.000Z (over 1 year ago)
- Last Synced: 2024-10-27T13:52:06.557Z (over 1 year ago)
- Topics: agent-as-a-judge, code-generation, llm
- Language: Python
- Homepage: https://arxiv.org/pdf/2410.10934
- Size: 5.31 MB
- Stars: 153
- Watchers: 1
- Forks: 14
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- StarryDivineSky - metauto-ai/agent-as-a-judge - Agent-as-a-Judge offers two main advantages. Automated evaluation: Agent-as-a-Judge can evaluate during or after task execution, saving 97.72% of the time and 97.64% of the cost compared to human experts. Reward signals: it provides continuous, step-by-step feedback that can be used as a reward signal for further agent training and improvement. As a proof of concept, we applied Agent-as-a-Judge to code-generation tasks using DevAI, a benchmark of 55 realistic AI development tasks with 365 hierarchical user requirements. The results show that Agent-as-a-Judge significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
- awesome-production-llm - DevAI (agent-as-a-judge) - DevAI, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. (LLM Agent Benchmarks)
README
> [!NOTE]
> Current evaluation techniques are often inadequate for advanced **agentic systems** due to their focus on final outcomes and labor-intensive manual reviews. To overcome this limitation, we introduce the **Agent-as-a-Judge** framework.
>
## 🤠 Features
Agent-as-a-Judge offers two key advantages:
- **Automated Evaluation**: Agent-as-a-Judge can evaluate tasks during or after execution, saving 97.72% of time and 97.64% of costs compared to human experts.
- **Reward Signals**: It provides continuous, step-by-step feedback that can be used as a reward signal for further agentic training and improvement.
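To give a concrete sense of the second point, per-requirement verdicts like those a judge agent emits can be collapsed into a scalar reward for training. This is only an illustrative sketch: the `reward_from_judgments` function and the judgment schema below are hypothetical, not the repository's actual API.

```python
from typing import Dict, List


def reward_from_judgments(judgments: List[Dict]) -> float:
    """Aggregate per-requirement verdicts into a scalar reward in [0, 1].

    Each judgment is a dict like {"requirement": str, "satisfied": bool}.
    This schema is illustrative, not the repository's actual format.
    """
    if not judgments:
        return 0.0
    satisfied = sum(1 for j in judgments if j["satisfied"])
    return satisfied / len(judgments)


judgments = [
    {"requirement": "Load the GDSC dataset", "satisfied": True},
    {"requirement": "Train an SVM model", "satisfied": True},
    {"requirement": "Report MSE on a held-out split", "satisfied": False},
]
print(reward_from_judgments(judgments))  # fraction of requirements met
```

Because the feedback is per-requirement rather than a single pass/fail, partial progress still yields a nonzero reward, which is what makes it usable as a training signal.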
## 🎮 Quick Start
### 1. Install
```shell
git clone https://github.com/metauto-ai/agent-as-a-judge.git
cd agent-as-a-judge/
conda create -n aaaj python=3.11
conda activate aaaj
pip install poetry
poetry install
```
### 2. Set up the LLM and API keys
Before running, rename `.env.sample` to `.env` in the main repo folder and fill in the **required API keys and settings** to enable LLM calls. The `LiteLLM` library supports a wide range of LLM providers.
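As a rough sketch, a filled-in `.env` might look like the following. The variable names shown are the standard ones LiteLLM reads for OpenAI and Anthropic; check `.env.sample` for the exact variables this repository expects, since they may differ.

```shell
# Provider keys read by LiteLLM (use only the providers you need)
OPENAI_API_KEY=<your-openai-key>
ANTHROPIC_API_KEY=<your-anthropic-key>
```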
### 3. Run
> [!TIP]
> See more comprehensive [usage scripts](scripts/README.md).
>
#### Usage A: **Ask Anything** for Any Workspace
```shell
PYTHONPATH=. python scripts/run_ask.py \
--workspace $(pwd)/benchmark/workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML \
--question "What does this workspace contain?"
```
You can find an [example](assets/ask_sample.md) to see how **Ask Anything** works.
#### Usage B: **Agent-as-a-Judge** for **DevAI**
```shell
PYTHONPATH=. python scripts/run_aaaj.py \
--developer_agent "OpenHands" \
--setting "black_box" \
--planning "efficient (no planning)" \
--benchmark_dir $(pwd)/benchmark
```
💡 There is an [example](assets/aaaj_sample.md) that shows the process of how **Agent-as-a-Judge** collects evidence for judging.
## 🤗 DevAI Dataset
> [!IMPORTANT]
> As a **proof-of-concept**, we applied **Agent-as-a-Judge** to code generation tasks using **DevAI**, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that **Agent-as-a-Judge** significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.
>
> Check out the dataset on [Hugging Face 🤗](https://huggingface.co/DEVAI-benchmark).
> See how to use this dataset in the [guidelines](benchmark/devai/README.md).
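To give a feel for what "hierarchical user requirements" means, here is a toy sketch: each requirement can depend on earlier ones, and a requirement only effectively counts as met when all of its prerequisites are met too. The field names (`deps`, `ok`, etc.) are hypothetical and do not reflect the dataset's actual schema; see the guidelines above for the real format.

```python
from typing import Dict

# Hypothetical DevAI-style task: requirement IDs map to a description,
# prerequisite requirement IDs, and whether the judge marked it satisfied.
task = {
    "R0": {"desc": "Load the dataset", "deps": [], "ok": True},
    "R1": {"desc": "Train an SVM regressor", "deps": ["R0"], "ok": True},
    "R2": {"desc": "Save predictions to a CSV file", "deps": ["R1"], "ok": False},
    "R3": {"desc": "Plot predicted vs. true values", "deps": ["R2"], "ok": True},
}


def effectively_met(task: Dict[str, dict], rid: str) -> bool:
    """A requirement counts only if it and all its prerequisites are satisfied."""
    req = task[rid]
    return req["ok"] and all(effectively_met(task, d) for d in req["deps"])


met = [rid for rid in task if effectively_met(task, rid)]
print(met)  # R3 is marked ok but fails because its prerequisite R2 is unmet
```

The dependency structure is what makes the evaluation "hierarchical": a downstream requirement cannot be credited on top of a broken prerequisite, which keeps the reward signal honest about actual progress.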
## Reference
Feel free to cite this work if you find the Agent-as-a-Judge concept useful:
```bibtex
@article{zhuge2024agent,
title={Agent-as-a-Judge: Evaluate Agents with Agents},
author={Zhuge, Mingchen and Zhao, Changsheng and Ashley, Dylan and Wang, Wenyi and Khizbullin, Dmitrii and Xiong, Yunyang and Liu, Zechun and Chang, Ernie and Krishnamoorthi, Raghuraman and Tian, Yuandong and Shi, Yangyang and Chandra, Vikas and Schmidhuber, J{\"u}rgen},
journal={arXiv preprint arXiv:2410.10934},
year={2024}
}
```