
# DataMind

πŸ“„ arXiv β€’ πŸ€— HuggingFace

[![Awesome](https://awesome.re/badge.svg)](https://github.com/zjunlp/DataMind)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
![](https://img.shields.io/github/last-commit/zjunlp/DataMind?color=green)

## Table of Contents

- πŸ”” [News](#news)
- πŸ“‘ [Todo-List](#todo-list)
- πŸ‘€ [Overview](#overview)
- πŸ”§ [Installation](#installation)
- πŸ’» [Training](#training)
- 🧐 [Evaluation](#evaluation)
- ✍️ [Citation](#citation)

---

## πŸ”” News
- **[2025-09]** We release a new paper: "[Scaling Generalist Data-Analytic Agents](https://arxiv.org/abs/2509.25084)".

- **[2025-06]** We release a new paper: "[Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study](https://arxiv.org/pdf/2506.19794)".

## πŸ“‘ Todo-List
- [ ] Release the RL training code.
- [ ] Release the RL training data and evaluation data.

## πŸ‘€ Overview

Data-analytic agents are emerging as a key catalyst for automated scientific discovery and for the broader vision of AI-driven innovation. Current approaches, however, rely heavily on prompt engineering or multi-agent scaffolds over proprietary models, while open-source models still struggle with the diverse-format, large-scale data files and long-horizon, multi-step reasoning that real-world analytics demands. This paper introduces **DataMind**, a scalable data synthesis and agent training recipe for building generalist data-analytic agents. **DataMind** tackles three key challenges in building open-source data-analytic agents: insufficient data resources, improper training strategies, and unstable code-based multi-turn rollout.

Concretely, **DataMind** applies:
- A fine-grained task taxonomy and a recursive easy-to-hard task composition mechanism to increase the diversity and difficulty of synthesized queries;
- A knowledge-augmented trajectory sampling strategy followed by model-based and rule-based filtering;
- A dynamically adjustable training objective combining both SFT and RL losses (sketched after this list);
- A memory-frugal and stable code-based multi-turn rollout framework.
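
As a purely illustrative sketch of the third point above: the combined objective can be pictured as a weighted mix of the two losses, where $\lambda_t$ is our own placeholder notation for the dynamically adjusted weight, not the paper's exact formulation:

$$
\mathcal{L}_{\text{total}}^{(t)} = \lambda_t \, \mathcal{L}_{\text{SFT}} + \left(1 - \lambda_t\right) \mathcal{L}_{\text{RL}}
$$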

Built on **DataMind**, we curate **DataMind-12K**, a high-quality trajectory set spanning diverse domains, task categories, and data file formats for data-analytic tasks. Trained on DataMind-12K, our DataMind-14B achieves state-of-the-art results with an average score of 71.16% across multiple data analysis benchmarks, outperforming the strongest proprietary baselines DeepSeek-V3.1 and GPT-5. Our DataMind-7B also performs best among all open-source models with a score of 68.10%. We also share empirical insights gained from our exploratory trials in the analysis experiments, aiming to provide actionable guidance on agent training for the community. We will release DataMind-12K as well as DataMind-7B and DataMind-14B to support the community's future research.

## πŸ”§ Installation
#### Manual Environment Configuration

Conda virtual environments offer a lightweight and flexible setup. We recommend using a separate conda environment for each project.

#### Prerequisites

- Anaconda Installation
- GPU support (recommended CUDA version: 12.6)
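
For example, a minimal environment setup might look like this (the environment name and Python version are illustrative, not prescribed by this repo):

```bash
# Create and activate a dedicated environment (name and version are placeholders).
conda create -n datamind python=3.10 -y
conda activate datamind
```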

#### Scaling Generalist Data-Analytic Agents

- SFT training

For SFT training, we use the **[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)** (0.9.4.dev0) framework.
```bash
cd train/SFT/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
```

- RL training

For RL training, we use the **[verl](https://github.com/volcengine/verl)** (v0.4.0) framework.
```bash
cd train/RL/verl
USE_MEGATRON=0 bash scripts/install_vllm_sglang_mcore.sh
pip install -e .[vllm]
pip install -e .[sglang]
apt install sqlite3
```

- Eval
```bash
cd eval/Datamind
pip install -r requirements.txt
apt install sqlite3
```

#### Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study
- SFT training

For SFT training, we use the **[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)** (0.9.4.dev0) framework.
```bash
cd train/SFT/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
```

- Eval
```bash
cd eval/DataMind-Qwen2.5
pip install -r requirements.txt
```

## πŸ’» Training

### SFT training
Our SFT training uses the **[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)** framework (0.9.4.dev0), which provides an efficient fine-tuning workflow.

##### 1. Training Data

The training dataset `datamind_12k` used in *Scaling Generalist Data-Analytic Agents* is available on Hugging Face at [DataMind-12K](https://huggingface.co/datasets/zjunlp/DataMind-12K/tree/main). Download it and place it at `train/SFT/LLaMA-Factory/data/datamind/datamind_12k.json`.
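
If you prefer the command line, one possible way to fetch the dataset is via the Hugging Face CLI (a sketch assuming `huggingface_hub` is installed; adjust the final path so the JSON file lands at the location above):

```bash
# Download the dataset repo into the LLaMA-Factory data folder.
pip install -U huggingface_hub
huggingface-cli download zjunlp/DataMind-12K \
    --repo-type dataset \
    --local-dir train/SFT/LLaMA-Factory/data/datamind
```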

The training dataset `datamind-da-dataset` used in *Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study* is available at `train/SFT/LLaMA-Factory/data/datamind/datamind-da-dataset.json`.

##### 2. Training Configuration

We provide our configurations for full-parameter fine-tuning with DeepSpeed ZeRO-3 as YAML files: `train/SFT/LLaMA-Factory/examples/train_full/datamind_12k_full_sft.yaml` and `train/SFT/LLaMA-Factory/examples/train_full/datamind_da_dataset_full_sft.yaml`.

##### 3. Launch Training
You can start training with the following command, taking `datamind_12k_full_sft.yaml` as an example, or use the shell script `train/SFT/LLaMA-Factory/train.sh`.
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 llamafactory-cli train examples/train_full/datamind_12k_full_sft.yaml
```

### RL training
Our RL training framework is modified from [verl](https://github.com/volcengine/verl) (v0.4.0), a flexible, efficient, and production-ready RL training library for large language models (LLMs).

##### 1. Training Data
The training data will be released soon.

##### 2. Training Configuration
The training code will be released soon.

## 🧐 Evaluation
### Scaling Generalist Data-Analytic Agents
### 1. Evaluation Data
The evaluation data will be released soon. Once it is available, unzip the archives and place them in the corresponding folders following the layout below.
```
β”œβ”€β”€ model.sh
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ python
β”‚   β”œβ”€β”€ compute_pass3.py
β”‚   β”œβ”€β”€ da-dev-tables
β”‚   β”œβ”€β”€ eval_python.py
β”‚   β”œβ”€β”€ eval.sh
β”‚   β”œβ”€β”€ interpreter.py
β”‚   β”œβ”€β”€ tablebench_csv
β”‚   └── test_file
β”‚       β”œβ”€β”€ daeval_test.parquet
β”‚       └── tablebench_test.parquet
└── sql
    β”œβ”€β”€ bird
    β”‚   β”œβ”€β”€ bird_dev_csv_results
    β”‚   β”œβ”€β”€ dev_sqlite_files
    β”‚   β”œβ”€β”€ bird_dev_omni_ddl.json
    β”‚   └── test_file
    β”‚       └── bird_dev.parquet
    β”œβ”€β”€ compute_pass3.py
    β”œβ”€β”€ eval_bird.py
    β”œβ”€β”€ eval.sh
    └── interpreter.py
```

### 2. Evaluation
We use vLLM to launch a local model server. Modify `model.sh` to fit your own environment, then run it to start the model server.
```sh
bash model.sh
```

#### For Python Evaluation
You can modify `eval/python/eval.sh` and run it to start the Python evaluation. Note that you must set the `base_url` and `api_key` for the judge model in `eval/python/eval_python.py`.
```sh
PORT=19007
export OPENAI_BASE_URL=http://0.0.0.0:${PORT}/v1
export OPENAI_API_KEY=placeholder_key

python eval_python.py \
    --model datamind \
    --temperature 0.7 \
    --top_p 0.95 \
    --bs 5 \
    --test_bench dabench \
    --test_file test_file/daeval_test.parquet \
    --csv_or_db_folder da-dev-tables

#### For SQL Evaluation
You can modify `eval/sql/eval.sh` and run it to start the SQL evaluation.
```sh
PORT=19008
export OPENAI_BASE_URL=http://0.0.0.0:${PORT}/v1
export OPENAI_API_KEY=placeholder_key

python eval_bird.py \
    --model datamind \
    --temperature 0.7 \
    --top_p 0.95 \
    --bs 5 \
    --test_bench bird \
    --test_file bird/test_file/bird_dev.parquet \
    --csv_or_db_folder bird/dev_sqlite_files \
    --gold_csv_results_dir bird/bird_dev_csv_results \
    --db_schema_data_path bird/bird_dev_omni_ddl.json
```

### Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study
> Note:
>
> - **Ensure** that your working directory is set to the **`eval/DataMind-Analysis`** folder in a virtual environment.
> - If you have further questions, feel free to open an issue.
> - If you want to use a local model, deploy it first with the optional **`local_model.sh`** script.

**Step 1: Download the evaluation datasets and our SFT models**
The evaluation datasets we use come from [QRData](https://github.com/xxxiaol/QRData) and [DiscoveryBench](https://github.com/allenai/discoverybench). The script expects the data to be at `data/QRData/benchmark/data/*.csv` and `data/DiscoveryBench/*.csv`.

You can also download our SFT models directly from Hugging Face: [DataMind-Analysis-Qwen2.5-7B](https://huggingface.co/zjunlp/DataMind-Analysis-Qwen2.5-7B) and [DataMind-Analysis-Qwen2.5-14B](https://huggingface.co/zjunlp/DataMind-Analysis-Qwen2.5-14B).

You can use the following shell script to download the datasets:
```bash
bash download_eval_data.sh
```

**Step 2: Prepare the parameter configuration**

Here is an example:
**`config.yaml`**

```yaml
api_key: your_api_key # API key for models accessed via an API service; not needed for open-source models.
data_root: /path/to/your/project/DataMind/eval/data # Root directory for the data (must be an absolute path!)
```
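
Since `data_root` must be absolute, you can resolve it like this (the clone location is a placeholder):

```bash
# Print the absolute path to use as data_root (adjust the clone location).
cd /path/to/your/project/DataMind/eval/data && pwd
```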

**`run_eval.sh`**

```bash
# --model_name:   Model name to use.
# --check_model:  Check model to use.
# --output:       Output directory path.
# --dataset_name: Dataset name to use, chosen from QRData, DiscoveryBench.
# --max_round:    Maximum number of steps.
# --api_port:     API port number; required when a local model is used.
# --bidx:         Begin index (inclusive); None means no restriction.
# --eidx:         End index (exclusive); None means no restriction.
# --temperature:  Temperature for sampling.
# --top_p:        Top p for sampling.
# --add_random:   Whether to add random files.
python do_generate.py \
    --model_name DataMind-Qwen2.5-7B \
    --check_model gpt-4o-mini \
    --output results \
    --dataset_name QRData \
    --max_round 25 \
    --api_port 8000 \
    --bidx 0 \
    --eidx None \
    --temperature 0.0 \
    --top_p 1 \
    --add_random False

**(Optional)`local_model.sh`**

```bash
# --model:                Local model path.
# --served-model-name:    The model name specified by you.
# --tensor-parallel-size: Size of tensor parallel processing.
# --port:                 API port number; must match the `api_port` above.
CUDA_VISIBLE_DEVICES=$i python -m vllm.entrypoints.openai.api_server \
    --model $MODEL_PATH \
    --served-model-name $MODEL_NAME \
    --tensor-parallel-size $i \
    --port $port
```

**Step 3: Run the shell script**

**(Optional)** Deploy the local model if you need it.

```bash
bash local_model.sh
```

Run the shell script to start the process.

```bash
bash run_eval.sh
```

## πŸŽ‰ Contributors


We deeply appreciate the collaborative efforts of everyone involved. We will continue to enhance and maintain this repository over the long term. If you encounter any issues, feel free to submit them to us!

## ✍️ Citation

If you find our work helpful, please use the following citations.

```bibtex
@misc{qiao2025scalinggeneralistdataanalyticagents,
  title={Scaling Generalist Data-Analytic Agents},
  author={Shuofei Qiao and Yanqiu Zhao and Zhisong Qiu and Xiaobin Wang and Jintian Zhang and Zhao Bin and Ningyu Zhang and Yong Jiang and Pengjun Xie and Fei Huang and Huajun Chen},
  year={2025},
  eprint={2509.25084},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.25084}
}

@article{zhu2025open,
  title={Why Do Open-Source LLMs Struggle with Data Analysis? A Systematic Empirical Study},
  author={Zhu, Yuqi and Zhong, Yi and Zhang, Jintian and Zhang, Ziheng and Qiao, Shuofei and Luo, Yujie and Du, Lun and Zheng, Da and Chen, Huajun and Zhang, Ningyu},
  journal={arXiv preprint arXiv:2506.19794},
  year={2025}
}
```