# PolyMath: Evaluating Mathematical Reasoning in Multilingual Contexts
This is the official repository for the paper **"PolyMath: Evaluating Mathematical Reasoning in Multilingual Contexts"**.
## Introduction
**PolyMath** is a multilingual mathematical reasoning benchmark covering 18 languages and 4 easy-to-hard difficulty levels, for a total of 9,000 high-quality problem samples (18 languages × 4 levels × 125 problems). Our benchmark ensures difficulty comprehensiveness, language diversity, and high-quality translation, making it a highly discriminative multilingual mathematical benchmark in the era of reasoning LLMs.
## Features
- **Broad Difficulty Range:** PolyMath defines and partitions **mathematical difficulty across four levels** using two core dimensions: *Thought Depth* and *Knowledge Breadth*, ranging from K-12 to Olympiad and advanced frontier mathematics, with **125 problems per language at each level**.
- **Language Diversity:** Each problem in PolyMath is available in **18 parallel language versions**, encompassing over 75% of the world's native speakers and major language families, ensuring diversity across both high-resource and low-resource languages.
- **High-Quality Annotation:** Each problem translation is **calibrated by language experts**, avoiding direct use of LLM-generated outputs and ensuring precise terminology and logical clarity.
## Data Usage
The PolyMath dataset is publicly available on [Hugging Face](https://huggingface.co/datasets/Qwen/PolyMath), organized in the following format:
```
PolyMath/
├── ar/
│   ├── low.parquet
│   ├── medium.parquet
│   ├── high.parquet
│   └── top.parquet
├── bn/
├── ...
└── zh/
```
Additionally, all prompts used in the inference process are provided in `instruction.py`.
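For quick inspection, here is a minimal loading sketch based on the `{lang}/{level}.parquet` layout above. It assumes the Hugging Face dataset repo mirrors that file structure exactly; since the parquet column names are not documented here, they are printed rather than assumed.
```python
# Sketch: download one language/level shard of PolyMath and inspect it.
# Assumes the dataset repo follows the {lang}/{level}.parquet layout shown above.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/PolyMath",
    repo_type="dataset",
    filename="zh/top.parquet",  # language "zh", difficulty level "top"
)
df = pd.read_parquet(path)
print(df.shape)    # expected: 125 problems per language per level
print(df.columns)  # inspect the field names rather than assuming them
```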
## Evaluation
### Environment Preparation
```shell
conda create -n polymath python=3.10
conda activate polymath
pip install -r requirements.txt
```
### Output Process
Since different inference engines generate outputs in different formats, please standardize your results into the format specified below:
```shell
mkdir output
cd output
```
1. Take `/{model_name}` as the primary directory tier, and `/{difficulty_level}` as the secondary tier.
2. For each language, generate a `{lang_name}.jsonl` file within `/{difficulty_level}`, containing exactly 125 output samples. Each sample should adhere to the following format (a conversion sketch is given after the file tree below):
```json
{"idx": 0, ...}
...
{
"idx": 114, ### unique sample id
"question": "假设在平面上的一个紧集 $C$ 满足以下条件：对每一个方向，都存在一条该方向上的直线 $l$，使得 $l \\cap C$ 的维数至少为 $\\frac{1}{2}$。那么，$C$ 的最小可能维数是多少？", ### question in the corresponding language version (here: zh)
"answer": "$\\frac{5}{4}$", ### ground truth
"thinking_pred": "嗯，这个问题看起来有点挑战性，不过让我慢慢想想。题目是说，在平面上有一个紧集C...", ### model's thinking content; leave this field blank for non-reasoning models
"answer_pred": "题目要求在平面上的一个紧集 \\( C \\)，满足对于每一个方向，..." ### model's answer content
}
...
{"idx": 124, ...}
```
The complete file structure is as follows:
```shell
PolyMath/output
├── qwq-32b
│   ├── low
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   ├── medium
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   ├── high
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   └── top
│       ├── ar.jsonl
│       ├── bn.jsonl
│       └── ...
├── deepseek-v3
│   ├── low
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   ├── medium
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   ├── high
│   │   ├── ar.jsonl
│   │   ├── bn.jsonl
│   │   └── ...
│   └── top
│       ├── ar.jsonl
│       ├── bn.jsonl
│       └── ...
└── ... (other models)
```
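As referenced above, here is a minimal conversion sketch that writes one `{lang_name}.jsonl` file in the required format. The `raw_outputs` list and its field names (`question`, `answer`, `thinking`, `response`) are hypothetical stand-ins for whatever your inference engine actually returns:
```python
# Sketch: write one {lang_name}.jsonl file in the required output format.
# `raw_outputs` is a hypothetical list of dicts from your own inference run;
# adapt the field extraction to your engine's actual output schema.
import json
import os

def write_language_file(model_name, level, lang, raw_outputs):
    assert len(raw_outputs) == 125, "each language/level file needs 125 samples"
    out_dir = os.path.join("output", model_name, level)
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, f"{lang}.jsonl"), "w", encoding="utf-8") as f:
        for idx, sample in enumerate(raw_outputs):
            record = {
                "idx": idx,
                "question": sample["question"],
                "answer": sample["answer"],
                # leave "thinking_pred" empty for non-reasoning models
                "thinking_pred": sample.get("thinking", ""),
                "answer_pred": sample["response"],
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```
Writing with `ensure_ascii=False` keeps non-Latin questions human-readable in the resulting `.jsonl` files.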
### Score Computation
`/eval/run_eval.py` provides evaluation code for **accuracy** and **language consistency**. Run `run_eval.sh` to iterate over your processed output files:
```shell
cd ../eval
bash run_eval.sh
```
`run_eval.sh`:
```shell
model_list=(qwq-32b deepseek-v3)
language_list=(en zh ar bn de es fr id it ja ko ms pt ru sw te th vi)
level_list=(low medium high top)

for i in ${model_list[*]}; do
for j in ${language_list[*]}; do
for k in ${level_list[*]}; do
python run_eval.py --model $i --language $j --level $k
done
done
done
```
You can customize `model_list`, `language_list`, and `level_list`. Once the evaluations for all levels of a given model in a given language are complete, the benchmark score for that model is computed automatically.
**During evaluation, a score file will be automatically generated at `/eval/output/{model_name}/score.json`, and all scores will be saved.**
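To consume these results programmatically, a minimal sketch is shown below. The internal key layout of `score.json` is not documented in this README, so the file is printed generically rather than assuming specific fields (the `qwq-32b` path is just an example):
```python
# Sketch: load and pretty-print a generated score file.
# The key structure of score.json is not specified above, so nothing
# beyond "it is JSON" is assumed here.
import json

with open("eval/output/qwq-32b/score.json", encoding="utf-8") as f:
    scores = json.load(f)

print(json.dumps(scores, indent=2, ensure_ascii=False))
```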
## Citation
If you use **PolyMath** in your research or find our work useful, please cite us:
```bibtex
@article{wang2025polymath,
title={PolyMath: Evaluating Mathematical Reasoning in Multilingual Contexts},
author={Yiming Wang and Pei Zhang and Jialong Tang and Haoran Wei and Baosong Yang and Rui Wang and Chenshu Sun and Feitong Sun and Jiran Zhang and Junxuan Wu and Qiqian Cang and Yichang Zhang and Fei Huang and Junyang Lin and Fei Huang and Jingren Zhou},
journal={arXiv preprint arXiv:2504.18428},
year={2025},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.18428},
}
```