{"id":17665479,"url":"https://github.com/metauto-ai/agent-as-a-judge","last_synced_at":"2025-03-11T15:32:43.938Z","repository":{"id":258086848,"uuid":"873330843","full_name":"metauto-ai/agent-as-a-judge","owner":"metauto-ai","description":"🤠 Agent-as-a-Judge and DevAI dataset","archived":false,"fork":false,"pushed_at":"2024-10-27T05:41:31.000Z","size":5563,"stargazers_count":153,"open_issues_count":5,"forks_count":14,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-10-27T13:52:06.557Z","etag":null,"topics":["agent-as-a-judge","code-generation","llm"],"latest_commit_sha":null,"homepage":"https://arxiv.org/pdf/2410.10934","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/metauto-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-16T01:38:20.000Z","updated_at":"2024-10-27T05:37:31.000Z","dependencies_parsed_at":"2024-10-17T16:08:52.633Z","dependency_job_id":"1f34de98-99b7-43fe-85a6-ece61e27f648","html_url":"https://github.com/metauto-ai/agent-as-a-judge","commit_stats":null,"previous_names":["metauto-ai/agent-as-a-judge"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/metauto-ai%2Fagent-as-a-judge","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/metauto-ai%2Fagent-as-a-judge/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/metauto-ai%2Fagent-as-a-judge/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/metauto-ai%2
Fagent-as-a-judge/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/metauto-ai","download_url":"https://codeload.github.com/metauto-ai/agent-as-a-judge/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243059828,"owners_count":20229644,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-as-a-judge","code-generation","llm"],"created_at":"2024-10-23T21:01:29.067Z","updated_at":"2025-03-11T15:32:43.901Z","avatar_url":"https://github.com/metauto-ai.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n    \u003ch1 align=\"center\"\u003eAgents Evaluate Agents\u003c/h1\u003e\n    \u003cimg src=\"assets/devai_logo.png\" alt=\"DevAI Logo\" width=\"150\" height=\"150\"\u003e\n    \u003cp align=\"center\"\u003e\n\u003c!--         \u003ca href=\"https://devai.tech\"\u003e\u003cb\u003eProject\u003c/b\u003e\u003c/a\u003e |  --\u003e\n         \u003ca href=\"https://huggingface.co/DEVAI-benchmark\"\u003e\u003cb\u003e🤗 Dataset\u003c/b\u003e\u003c/a\u003e | \n        \u003ca href=\"https://arxiv.org/pdf/2410.10934\"\u003e\u003cb\u003e📑 Paper\u003c/b\u003e\u003c/a\u003e \n    \u003c/p\u003e\n\u003c/div\u003e\n\n\u003e [!NOTE]\n\u003e Current evaluation techniques are often inadequate for advanced **agentic systems** due to their focus on final outcomes and labor-intensive manual reviews. To overcome this limitation, we introduce the **Agent-as-a-Judge** framework. 
\n\u003e\n\n## 🤠 Features\n\nAgent-as-a-Judge offers two key advantages:\n\n- **Automated Evaluation**: Agent-as-a-Judge can evaluate tasks during or after execution, saving 97.72% of time and 97.64% of costs compared to human experts.\n- **Reward Signals**: It provides continuous, step-by-step feedback that can be used as reward signals for further agentic training and improvement.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"assets/demo.gif\" alt=\"Demo GIF\" style=\"width: 100%; max-width: 650px;\"\u003e\n\u003c/div\u003e\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"assets/judge_first.png\" alt=\"AaaJ\" style=\"width: 95%; max-width: 650px;\"\u003e\n\u003c/div\u003e\n\n\n\n## 🎮 Quick Start \n\n### 1. Install\n\n```bash\ngit clone https://github.com/metauto-ai/agent-as-a-judge.git\ncd agent-as-a-judge/\nconda create -n aaaj python=3.11\nconda activate aaaj\npip install poetry\npoetry install\n```\n\n### 2. Set LLM \u0026 API\n\nBefore running, rename `.env.sample` in the main repo folder to `.env` and fill in the **required API keys and settings** to enable LLM calls. The `LiteLLM` tool supports various LLMs.\n\n### 3. Run\n\n\u003e [!TIP]\n\u003e See more comprehensive [usage scripts](scripts/README.md).\n\u003e\n\n\n#### Usage A: **Ask Anything** for Any Workspace\n\n```bash\nPYTHONPATH=. python scripts/run_ask.py \\\n  --workspace $(pwd)/benchmark/workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML \\\n  --question \"What does this workspace contain?\"\n```\n\nYou can find an [example](assets/ask_sample.md) to see how **Ask Anything** works.\n\n\n#### Usage B: **Agent-as-a-Judge** for **DevAI**\n\n\n```bash\nPYTHONPATH=. 
python scripts/run_aaaj.py \\\n  --developer_agent \"OpenHands\" \\\n  --setting \"black_box\" \\\n  --planning \"efficient (no planning)\" \\\n  --benchmark_dir $(pwd)/benchmark\n```\n\n💡 There is an [example](assets/aaaj_sample.md) showing how **Agent-as-a-Judge** collects evidence for judging.\n\n\n\n## 🤗 DevAI Dataset \n\n\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"assets/dataset.png\" alt=\"Dataset\" style=\"width: 100%; max-width: 600px;\"\u003e\n\u003c/div\u003e\n\n\u003e [!IMPORTANT]\n\u003e As a **proof-of-concept**, we applied **Agent-as-a-Judge** to code generation tasks using **DevAI**, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that **Agent-as-a-Judge** significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.\n\u003e \n\u003e Check out the dataset on [Hugging Face 🤗](https://huggingface.co/DEVAI-benchmark).\n\u003e See how to use this dataset in the [guidelines](benchmark/devai/README.md).\n\n\n\u003c!-- \u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"assets/sample.jpeg\" alt=\"Sample\" style=\"width: 100%; max-width: 600px;\"\u003e\n\u003c/div\u003e --\u003e\n\n## Reference\n\nFeel free to cite this work if you find the Agent-as-a-Judge concept useful:\n\n```bibtex\n@article{zhuge2024agent,\n  title={Agent-as-a-Judge: Evaluate Agents with Agents},\n  author={Zhuge, Mingchen and Zhao, Changsheng and Ashley, Dylan and Wang, Wenyi and Khizbullin, Dmitrii and Xiong, Yunyang and Liu, Zechun and Chang, Ernie and Krishnamoorthi, Raghuraman and Tian, Yuandong and Shi, Yangyang and Chandra, Vikas and Schmidhuber, J{\\\"u}rgen},\n  journal={arXiv preprint arXiv:2410.10934},\n  year={2024}\n}\n```\n\n\n","funding_links":[],"categories":["A01_文本生成_文本对话","Python","LLM Agent 
Benchmarks"],"sub_categories":["大语言对话模型及数据"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmetauto-ai%2Fagent-as-a-judge","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmetauto-ai%2Fagent-as-a-judge","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmetauto-ai%2Fagent-as-a-judge/lists"}