# CIBench: Evaluating Your LLMs with a Code Interpreter Plugin

[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](./LICENSE)

## ✨ Introduction

This is an evaluation harness for the benchmark described in CIBench: Evaluating Your LLMs with a Code Interpreter Plugin.

[[Paper](https://www.arxiv.org/abs/2407.10499)]
[[Project Page](https://open-compass.github.io/CIBench/)]
[[LeaderBoard](https://open-compass.github.io/CIBench/leaderboard.html)]

> While LLM-Based agents, which use external tools to solve complex problems, have made significant progress, benchmarking their ability is challenging, thereby hindering a clear understanding of their limitations. In this paper, we propose an interactive evaluation framework, named CIBench, to comprehensively assess LLMs' ability to utilize code interpreters for data science tasks. Our evaluation framework includes an evaluation dataset and two evaluation modes. The evaluation dataset is constructed using an LLM-human cooperative approach and simulates an authentic workflow by leveraging consecutive and interactive IPython sessions. The two evaluation modes assess LLMs' ability with and without human assistance. We conduct extensive experiments to analyze the ability of 24 LLMs on CIBench and provide valuable insights for future LLMs in code interpreter utilization.


## 🛠️ Preparations
CIBench is evaluated based on [OpenCompass](https://github.com/open-compass/opencompass). Please install OpenCompass first.

```bash
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
pip install -r requirements/agent.txt
```

Then,

```bash
cd ..
git clone https://github.com/open-compass/CIBench.git
cd CIBench
```

Then move the *cibench_eval* directory into *opencompass/config*.
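This step can be sketched as follows, assuming CIBench and opencompass are sibling directories (as in the clone commands above):

```bash
# Run from the CIBench repo root; copies the benchmark configs into OpenCompass.
cp -r cibench_eval ../opencompass/config/
```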

### 💾 Test Data

You can download the CIBench dataset from [here](https://github.com/open-compass/opencompass/releases/download/0.2.4.rc1/cibench_dataset.zip).

Then, unzip the dataset and place it in *OpenCompass/data*. The data path should look like *OpenCompass/data/cibench_dataset/cibench_{generation or template}*.
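The download-and-unzip step can be sketched as follows, assuming you run it from the directory containing the OpenCompass checkout:

```bash
# Fetch the dataset archive from the release linked above and extract it
# into OpenCompass/data (adjust the path to match your checkout).
wget https://github.com/open-compass/opencompass/releases/download/0.2.4.rc1/cibench_dataset.zip
unzip cibench_dataset.zip -d OpenCompass/data/
```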

Finally, use the following script to download the necessary data.

```bash
cd OpenCompass/data/cibench_dataset
sh collect_datasources.sh
```

### 🤗 HuggingFace Models

1. Download the huggingface model to your local path.

2. Run the evaluation with the following script in the opencompass directory.
```bash
python run.py config/cibench_eval/eval_cibench_hf.py
```
Note that the current accelerator config (`-a lmdeploy`) does not support the CodeAgent model. If you want to use LMDeploy to accelerate the evaluation, please refer to [lmdeploy_internlm2_chat_7b](https://github.com/open-compass/opencompass/blob/main/configs/models/hf_internlm/lmdeploy_internlm2_chat_7b.py) and write the model config yourself.
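Step 1 above can be done, for example, with the Hugging Face CLI; the model name and local path below are purely illustrative:

```bash
# Illustrative only: substitute the model you actually want to evaluate.
huggingface-cli download internlm/internlm2-chat-7b --local-dir ./models/internlm2-chat-7b
```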

### 💫 Final Results
Once all test samples have been evaluated, you can check the results in *outputs/cibench*.

Note that the output images will be saved in *output_images*.

## 📊 Benchmark Results

For more detailed and comprehensive benchmark results, please refer to the 🏆 [CIBench official leaderboard](https://open-compass.github.io/CIBench/leaderboard.html)!


## ❤️ Acknowledgements

CIBench is built with [Lagent](https://github.com/InternLM/lagent) and [OpenCompass](https://github.com/open-compass/opencompass). Thanks for their awesome work!

## 💳 License

This project is released under the Apache 2.0 [license](./LICENSE).