{"id":15064335,"url":"https://github.com/bigcode-project/bigcodebench","last_synced_at":"2025-05-15T04:06:38.085Z","repository":{"id":238594524,"uuid":"793526430","full_name":"bigcode-project/bigcodebench","owner":"bigcode-project","description":"[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI","archived":false,"fork":false,"pushed_at":"2025-04-11T15:09:08.000Z","size":6891,"stargazers_count":348,"open_issues_count":7,"forks_count":41,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-04-29T09:22:16.513Z","etag":null,"topics":["agent","agents","benchmark","chatgpt","claude-3","code-generation","deepseek","function-calling","gemini","gpt-4","instruction-following","large-language-models","llm","program-synthesis","tool-use"],"latest_commit_sha":null,"homepage":"https://bigcode-bench.github.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bigcode-project.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-04-29T11:46:10.000Z","updated_at":"2025-04-28T10:13:01.000Z","dependencies_parsed_at":"2024-05-15T13:13:56.794Z","dependency_job_id":"72e2972f-79fe-4262-bec9-2e573a10c759","html_url":"https://github.com/bigcode-project/bigcodebench","commit_stats":{"total_commits":1091,"total_committers":21,"mean_commits":51.95238095238095,"dds":0.6388634280476627,"last_synced_commit":"3fbcbb66c729134adaa38ce669ee37aada88b46d"},"previous_names":["bigcode-project/wild-code","bigcode-project/bigcodebench"],"tags_count":51,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigcode-project%2Fbigcodebench","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigcode-project%2Fbigcodebench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigcode-project%2Fbigcodebench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bigcode-project%2Fbigcodebench/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bigcode-project","download_url":"https://codeload.github.com/bigcode-project/bigcodebench/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254270646,"owners_count":22042859,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","agents","benchmark","chatgpt","claude-3","code-generation","deepseek","function-calling","gemini","gpt-4","instruction-following","large-language-models","llm","program-synthesis","tool-use"],"created_at":"2024-09-25T00:15:34.055Z","updated_at":"2025-05-15T04:06:33.065Z","avatar_url":"https://github.com/bigcode-project.png","language":"Pytho
n","readme":"# BigCodeBench\n\u003ccenter\u003e\n\u003cimg src=\"https://github.com/bigcode-bench/bigcode-bench.github.io/blob/main/asset/bigcodebench_banner.svg?raw=true\" alt=\"BigCodeBench\"\u003e\n\u003c/center\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard\"\u003e\u003cimg src=\"https://img.shields.io/badge/🤗\u0026nbsp\u0026nbsp%F0%9F%8F%86-leaderboard-%23ff8811\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://huggingface.co/collections/bigcode/bigcodebench-666ed21a5039c618e608ab06\"\u003e\u003cimg src=\"https://img.shields.io/badge/🤗-collection-pink\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://bigcode-bench.github.io/\"\u003e\u003cimg src=\"https://img.shields.io/badge/%F0%9F%8F%86-website-8A2BE2\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://arxiv.org/abs/2406.15877\"\u003e\u003cimg src=\"https://img.shields.io/badge/arXiv-2406.15877-b31b1b.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/bigcodebench/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/v/bigcodebench?color=g\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pepy.tech/project/bigcodebench\"\u003e\u003cimg src=\"https://static.pepy.tech/badge/bigcodebench\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/bigcodebench/bigcodebench/blob/master/LICENSE\"\u003e\u003cimg src=\"https://img.shields.io/pypi/l/bigcodebench\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://hub.docker.com/r/bigcodebench/bigcodebench-evaluate\" title=\"Docker-Eval\"\u003e\u003cimg src=\"https://img.shields.io/docker/image-size/bigcodebench/bigcodebench-evaluate\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"#-impact\"\u003e💥 Impact\u003c/a\u003e •\n    \u003ca href=\"#-news\"\u003e📰 News\u003c/a\u003e •\n    \u003ca href=\"#-quick-start\"\u003e🔥 Quick Start\u003c/a\u003e •\n    \u003ca href=\"#-remote-evaluation\"\u003e🚀 Remote Evaluation\u003c/a\u003e •\n    \u003ca href=\"#-llm-generated-code\"\u003e💻 LLM-generated Code\u003c/a\u003e •\n    \u003ca href=\"#-advanced-usage\"\u003e🧑 Advanced Usage\u003c/a\u003e •\n    \u003ca href=\"#-result-submission\"\u003e📰 Result Submission\u003c/a\u003e •\n    \u003ca href=\"#-citation\"\u003e📜 Citation\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cdiv align=\"center\"\u003e\n    \u003ch2\u003e🎉 Check out our latest work!\u003cbr\u003e\n    \u003ca href=\"https://swe-arena.com\"\u003e🌟 SWE Arena 🌟\u003c/a\u003e\u003cbr\u003e\n    \u003cstrong\u003e🚀 Open Evaluation Platform on AI for Software Engineering 🚀\u003cbr\u003e\n    ✨ 100% free to use the latest frontier models! ✨\u003c/strong\u003e\u003c/h2\u003e\n\u003c/div\u003e\n\n## 💥 Impact\nBigCodeBench has been trusted by many LLM teams including:\n- Zhipu AI\n- Alibaba Qwen\n- DeepSeek\n- Amazon AWS AI\n- Snowflake AI Research\n- ServiceNow Research\n- Meta AI\n- Cohere AI\n- Sakana AI\n- Allen Institute for Artificial Intelligence (AI2)\n\n## 📰 News\n- **[2025-01-22]** We are releasing `bigcodebench==v0.2.2.dev2`, with 163 models evaluated!\n- **[2024-10-06]** We are releasing `bigcodebench==v0.2.0`!\n- **[2024-10-05]** We create a public code execution API on the [Hugging Face space](https://huggingface.co/spaces/bigcode/bigcodebench-evaluator).\n- **[2024-10-01]** We have evaluated 139 models on BigCodeBench-Hard so far. 
- **[2024-08-19]** To make the evaluation fully reproducible, we add a real-time code execution session to the leaderboard. It can be viewed [here](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard).
- **[2024-08-02]** We release `bigcodebench==v0.1.9`.

<details><summary>More News <i>:: click to expand ::</i></summary>
<div>

- **[2024-07-18]** We announce a subset of BigCodeBench, BigCodeBench-Hard, which includes 148 tasks that are more aligned with real-world programming tasks. The details are available [in this blog post](https://huggingface.co/blog/terryyz/bigcodebench-hard). The dataset is available [here](https://huggingface.co/datasets/bigcode/bigcodebench-hard). The new release is `bigcodebench==v0.1.8`.
- **[2024-06-28]** We release `bigcodebench==v0.1.7`.
- **[2024-06-27]** We release `bigcodebench==v0.1.6`.
- **[2024-06-19]** We launch the Hugging Face BigCodeBench Leaderboard! The leaderboard is available [here](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard).
- **[2024-06-18]** We release BigCodeBench, a new benchmark for code generation with 1140 software-engineering-oriented programming tasks. The preprint is available [here](https://arxiv.org/abs/2406.15877). The PyPI package is available [here](https://pypi.org/project/bigcodebench/) at version `0.1.5`.

</div>
</details>

## 🌸 About

### BigCodeBench

BigCodeBench is an **_easy-to-use_** benchmark for solving **_practical_** and **_challenging_** tasks via code. It aims to evaluate the true programming capabilities of large language models (LLMs) in a more realistic setting. The benchmark is designed for HumanEval-like function-level code generation tasks, but with much more complex instructions and diverse function calls.

There are two splits in BigCodeBench:
- `Complete`: This split is designed for code completion based on comprehensive docstrings.
- `Instruct`: This split works for instruction-tuned and chat models only: the models are asked to generate a code snippet based on natural language instructions. The instructions contain only the necessary information and require more complex reasoning; see the browsing sketch right after this list.
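To see how the two splits differ in practice, you can browse the dataset directly from the Hugging Face Hub with the `datasets` library. The snippet below is a minimal sketch: it assumes the dataset id `bigcode/bigcodebench` from the collection linked above, and discovers split and field names at runtime since they may vary between releases.

```bash
# Minimal sketch: peek at one BigCodeBench task.
# Assumption: the dataset lives at `bigcode/bigcodebench` on the Hub;
# split and field names may vary between releases, so discover them at runtime.
pip install datasets
python - <<'EOF'
from datasets import load_dataset

ds = load_dataset("bigcode/bigcodebench")  # DatasetDict keyed by split name
split = next(iter(ds))                     # first split published on the Hub
task = ds[split][0]                        # one benchmark task as a dict
print(sorted(task.keys()))                 # e.g. the Complete/Instruct prompt fields
EOF
```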
### Why BigCodeBench?

BigCodeBench focuses on task automation via code generation with *diverse function calls* and *complex instructions*, offering:

* ✨ **Precise evaluation & ranking**: See [our leaderboard](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard) for the latest LLM rankings before & after rigorous evaluation.
* ✨ **Pre-generated samples**: BigCodeBench accelerates code intelligence research by open-sourcing [LLM-generated samples](#-llm-generated-code) for various models -- no need to re-run the expensive benchmarks!

## 🔥 Quick Start

To get started, please first set up the environment:

```bash
# By default, you will use the remote evaluation API to execute the output samples.
pip install bigcodebench --upgrade

# We suggest using `flash-attn` for generating code samples.
pip install packaging ninja
pip install flash-attn --no-build-isolation
# Note: if you run into installation problems, consider using the pre-built
# wheels from https://github.com/Dao-AILab/flash-attention/releases
```

<details><summary>⏬ Install nightly version <i>:: click to expand ::</i></summary>
<div>

```bash
# Install to use bigcodebench.generate
pip install "git+https://github.com/bigcode-project/bigcodebench.git" --upgrade
```

</div>
</details>
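After installation, a quick sanity check (illustrative, not part of the official CLI) confirms that both the package and the optional `flash-attn` build are usable before you launch a long generation run:

```bash
# Illustrative sanity check before a long generation run.
pip show bigcodebench                                          # package installed?
python -c "import flash_attn; print(flash_attn.__version__)"  # optional speed-up usable?
```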
## 🚀 Remote Evaluation

We use greedy decoding as an example to show how to evaluate the generated code samples via the remote API.

> [!Warning]
>
> To speed up generation, we use batch inference by default. However, batch inference results can vary *across batch sizes* and *across versions*, at least for the vLLM backend. If you want more deterministic results for greedy decoding, please set `--bs` to `1`.

> [!Note]
>
> The `gradio` backend typically takes 6-7 minutes on `BigCodeBench-Full` and 4-5 minutes on `BigCodeBench-Hard`.
> The `e2b` backend with the default machine typically takes 25-30 minutes on `BigCodeBench-Full` and 15-20 minutes on `BigCodeBench-Hard`.

```bash
bigcodebench.evaluate \
  --model meta-llama/Meta-Llama-3.1-8B-Instruct \
  --execution [e2b|gradio|local] \
  --split [complete|instruct] \
  --subset [full|hard] \
  --backend [vllm|openai|anthropic|google|mistral|hf|hf-inference]
```

- All resulting files will be stored in a folder named `bcb_results`.
- The generated code samples will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl`.
- The evaluation results will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_eval_results.json`.
- The pass@k results will be stored in a file named `[model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_pass_at_k.json`.

> [!Note]
>
> The `gradio` backend is hosted on the [Hugging Face space](https://huggingface.co/spaces/bigcode/bigcodebench-evaluator) by default.
> The default space can sometimes be slow, so we recommend using the `gradio` backend with a cloned [bigcodebench-evaluator](https://huggingface.co/spaces/bigcode/bigcodebench-evaluator) endpoint for faster evaluation.
> Otherwise, you can also use the `e2b` sandbox for evaluation, which is also fairly slow on the default machine.

> [!Note]
>
> BigCodeBench uses different prompts for base and chat models.
> By default, the prompt type is detected via `tokenizer.chat_template` when using `hf`/`vllm` as the backend.
> For other backends, only chat mode is allowed.
>
> Therefore, if your base model comes with a `tokenizer.chat_template`,
> please add `--direct_completion` to avoid being evaluated
> in chat mode.

To use E2B, you need to set up an account and get an API key from [E2B](https://e2b.dev/).

```bash
export E2B_API_KEY=<your_e2b_api_key>
```

Access OpenAI APIs from the [OpenAI Console](https://platform.openai.com/):
```bash
export OPENAI_API_KEY=<your_openai_api_key>
```

Access Anthropic APIs from the [Anthropic Console](https://console.anthropic.com/):
```bash
export ANTHROPIC_API_KEY=<your_anthropic_api_key>
```

Access Mistral APIs from the [Mistral Console](https://console.mistral.ai/):
```bash
export MISTRAL_API_KEY=<your_mistral_api_key>
```

Access Gemini APIs from [Google AI Studio](https://aistudio.google.com/):
```bash
export GOOGLE_API_KEY=<your_google_api_key>
```

Access the [Hugging Face Serverless Inference API](https://huggingface.co/docs/api-inference/en/index):
```bash
export HF_INFERENCE_API_KEY=<your_hf_api_key>
```

Please make sure your HF access token has the `Make calls to inference providers` permission.
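Putting the pieces together, an end-to-end remote run might look like the sketch below. It only combines the flags documented above; the model name is just an example:

```bash
# Illustrative end-to-end run: greedy decoding on the Instruct split of the
# Hard subset, executed remotely in an E2B sandbox.
export E2B_API_KEY=<your_e2b_api_key>

bigcodebench.evaluate \
  --model meta-llama/Meta-Llama-3.1-8B-Instruct \
  --execution e2b \
  --split instruct \
  --subset hard \
  --backend vllm \
  --bs 1  # batch size 1 for more deterministic greedy decoding (see the warning above)
```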
## 💻 LLM-generated Code

We share pre-generated code samples from the LLMs we have [evaluated](https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard) on the full set:
* See the attachment of our [v0.2.4](https://github.com/bigcode-project/bigcodebench/releases/tag/v0.2.4) release. We include `sanitized_samples_calibrated.zip` for your convenience.

## 🧑 Advanced Usage

Please refer to the [ADVANCED USAGE](https://github.com/bigcode-project/bigcodebench/blob/main/ADVANCED_USAGE.md) guide for more details.

## 📰 Result Submission

Please email both the generated code samples and the execution results to [terry.zhuo@monash.edu](mailto:terry.zhuo@monash.edu) if you would like to contribute your model to the leaderboard. Note that the file names should follow the formats `[model_name]--[revision]--[bigcodebench|bigcodebench-hard]-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl` and `[model_name]--[revision]--[bigcodebench|bigcodebench-hard]-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_eval_results.json`. You can [file an issue](https://github.com/bigcode-project/bigcodebench/issues/new/choose) to remind us if we do not respond to your email within 3 days.

## 📜 Citation

```bibtex
@article{zhuo2024bigcodebench,
  title={BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions},
  author={Zhuo, Terry Yue and Vu, Minh Chien and Chim, Jenny and Hu, Han and Yu, Wenhao and Widyasari, Ratnadira and Yusuf, Imam Nur Bani and Zhan, Haolan and He, Junda and Paul, Indraneil and others},
  journal={arXiv preprint arXiv:2406.15877},
  year={2024}
}
```

## 🙏 Acknowledgement

- [EvalPlus](https://github.com/evalplus/evalplus)