# CHARM✨ Benchmarking Chinese Commonsense Reasoning of LLMs: From Chinese-Specifics to Reasoning-Memorization Correlations
📃[Paper](https://arxiv.org/abs/2403.14112)
📰[Project Page](https://opendatalab.github.io/CHARM/)
🏆[Leaderboard](https://opendatalab.github.io/CHARM/leaderboard.html)
✨[Findings](https://opendatalab.github.io/CHARM/findings.html)
## Construction of CHARM
## Comparison of commonsense reasoning benchmarks
| Benchmarks | CN-Lang | CSR | CN-specifics | Dual-Domain | Rea-Mem |
| :--- | :---: | :---: | :---: | :---: | :---: |
| Most benchmarks in Davis (2023) | ✘ | ✔ | ✘ | ✘ | ✘ |
| XNLI, XCOPA, XStoryCloze | ✔ | ✔ | ✘ | ✘ | ✘ |
| LogiQA, CLUE, CMMLU | ✔ | ✘ | ✔ | ✘ | ✘ |
| CORECODE | ✔ | ✔ | ✘ | ✘ | ✘ |
| **CHARM (ours)** | ✔ | ✔ | ✔ | ✔ | ✔ |
"CN-Lang" indicates the benchmark is presented in Chinese language. "CSR" means the benchmark is designed to focus on CommonSense Reasoning. "CN-specific" indicates the benchmark includes elements that are unique to Chinese culture, language, regional characteristics, history, etc. "Dual-Domain" indicates the benchmark encompasses both Chinese-specific and global domain tasks, with questions presented in the similar style and format. "Rea-Mem" indicates the benchmark includes closely-interconnected reasoning and memorization tasks.
## 🚀 What's New
- **[2024.7.26]** All CHARM inference and evaluation are now supported by [OpenCompass](https://github.com/open-compass/opencompass). 🔥🔥🔥
- **[2024.6.06]** Leaderboard updated! LLaMA-3, GPT-4o, Gemini-1.5, Yi-1.5, Qwen1.5, and more are evaluated.
- **[2024.5.24]** CHARM has been open-sourced!!! 🔥🔥🔥
- **[2024.5.15]** CHARM has been accepted to the main conference of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)!!! 🔥🔥🔥
- **[2024.3.21]** Paper available on [arXiv](https://arxiv.org/abs/2403.14112).
## 🛠️ Inference and Evaluation with OpenCompass
Below are the steps for quickly downloading CHARM and using OpenCompass for evaluation.
### 1. OpenCompass Environment Setup
Refer to the installation steps for [OpenCompass](https://github.com/open-compass/OpenCompass/?tab=readme-ov-file#%EF%B8%8F-installation).
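If OpenCompass is not already installed, the typical flow looks like the following. This is a sketch based on the OpenCompass README at the time of writing; the recommended Python version and steps may have changed, so follow the link above.

```bash
# Typical OpenCompass setup -- verify against the OpenCompass README,
# since the recommended Python version may change over time.
conda create --name opencompass python=3.10 -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass ${path_to_opencompass}
cd ${path_to_opencompass}
pip install -e .
```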
### 2. Download CHARM
```bash
git clone https://github.com/opendatalab/CHARM ${path_to_CHARM_repo}
cd ${path_to_opencompass}
mkdir -p data
# expose the CHARM data to OpenCompass via a symlink
ln -snf ${path_to_CHARM_repo}/data/CHARM ./data/CHARM
```
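To confirm the link is in place:

```bash
# OpenCompass resolves datasets relative to ./data in its working
# directory; the symlink above should now list the CHARM benchmark files.
ls data/CHARM
```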
### 3. Run Inference and Evaluation
```bash
cd ${path_to_opencompass}
# modify config file `configs/eval_charm_rea.py`: uncomment or add models you want to evaluate
python run.py configs/eval_charm_rea.py -r --dump-eval-details
# modify config file `configs/eval_charm_mem.py`: uncomment or add models you want to evaluate
python run.py configs/eval_charm_mem.py -r --dump-eval-details
```
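Here, `-r` reuses results from the latest run (handy when resuming an interrupted evaluation) and `--dump-eval-details` saves per-sample evaluation details. Note that the memorization tasks are scored by a GPT-3.5-turbo judge (see the output tree below), so OpenCompass needs API access for the judge. Assuming the judge goes through OpenCompass's standard OpenAI integration, the key is read from the environment:

```bash
# The CHARM_mem evaluation is judged by GPT-3.5-turbo-0125 (see the
# results tree below). Assumption: the judge uses OpenCompass's standard
# OpenAI model class, which reads the key from this environment variable.
export OPENAI_API_KEY=your_key_here   # placeholder, not a real key
python run.py configs/eval_charm_mem.py -r --dump-eval-details
```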
The inference and evaluation results will be written to `${path_to_opencompass}/outputs`, like this:
```bash
outputs
├── CHARM_mem
│   └── chat
│       └── 20240605_151442
│           ├── predictions
│           │   ├── internlm2-chat-1.8b-turbomind
│           │   ├── llama-3-8b-instruct-lmdeploy
│           │   └── qwen1.5-1.8b-chat-hf
│           ├── results
│           │   ├── internlm2-chat-1.8b-turbomind_judged-by--GPT-3.5-turbo-0125
│           │   ├── llama-3-8b-instruct-lmdeploy_judged-by--GPT-3.5-turbo-0125
│           │   └── qwen1.5-1.8b-chat-hf_judged-by--GPT-3.5-turbo-0125
│           └── summary
│               └── 20240605_205020                        # MEMORY_SUMMARY_DIR
│                   ├── judged-by--GPT-3.5-turbo-0125-charm-memory-Chinese_Anachronisms_Judgment
│                   ├── judged-by--GPT-3.5-turbo-0125-charm-memory-Chinese_Movie_and_Music_Recommendation
│                   ├── judged-by--GPT-3.5-turbo-0125-charm-memory-Chinese_Sport_Understanding
│                   ├── judged-by--GPT-3.5-turbo-0125-charm-memory-Chinese_Time_Understanding
│                   └── judged-by--GPT-3.5-turbo-0125.csv  # MEMORY_SUMMARY_CSV
└── CHARM_rea
    └── chat
        └── 20240605_152359
            ├── predictions
            │   ├── internlm2-chat-1.8b-turbomind
            │   ├── llama-3-8b-instruct-lmdeploy
            │   └── qwen1.5-1.8b-chat-hf
            ├── results                                    # REASON_RESULTS_DIR
            │   ├── internlm2-chat-1.8b-turbomind
            │   ├── llama-3-8b-instruct-lmdeploy
            │   └── qwen1.5-1.8b-chat-hf
            └── summary
                ├── summary_20240605_205328.csv            # REASON_SUMMARY_CSV
                └── summary_20240605_205328.txt
```
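The annotated entries in the tree (`REASON_RESULTS_DIR`, `REASON_SUMMARY_CSV`, `MEMORY_SUMMARY_DIR`, `MEMORY_SUMMARY_CSV`) are the inputs to the analysis scripts in the next step. With the example timestamps above, they would be set like this (your run's timestamps will differ):

```bash
# Shell variables matching the annotated paths in the tree above.
# The timestamped directory names are examples from one particular run.
OUTPUTS=${path_to_opencompass}/outputs
REASON_RESULTS_DIR=${OUTPUTS}/CHARM_rea/chat/20240605_152359/results
REASON_SUMMARY_CSV=${OUTPUTS}/CHARM_rea/chat/20240605_152359/summary/summary_20240605_205328.csv
MEMORY_SUMMARY_DIR=${OUTPUTS}/CHARM_mem/chat/20240605_151442/summary/20240605_205020
MEMORY_SUMMARY_CSV=${MEMORY_SUMMARY_DIR}/judged-by--GPT-3.5-turbo-0125.csv
```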
### 4. Generate Analysis Results
```bash
cd ${path_to_CHARM_repo}
# generate Tables 5, 6, 9, and 10 of https://arxiv.org/abs/2403.14112
PYTHONPATH=. python tools/summarize_reasoning.py ${REASON_SUMMARY_CSV}
# generate Figures 3 and 9 of https://arxiv.org/abs/2403.14112
PYTHONPATH=. python tools/summarize_mem_rea.py ${REASON_SUMMARY_CSV} ${MEMORY_SUMMARY_CSV}
# generate Tables 7, 12, and 13 and Figure 11 of https://arxiv.org/abs/2403.14112
PYTHONPATH=. python tools/analyze_mem_indep_rea.py data/CHARM ${REASON_RESULTS_DIR} ${MEMORY_SUMMARY_DIR} ${MEMORY_SUMMARY_CSV}
```
## 🖊️ Citation
```bibtex
@misc{sun2024benchmarking,
      title={Benchmarking Chinese Commonsense Reasoning of LLMs: From Chinese-Specifics to Reasoning-Memorization Correlations},
      author={Jiaxing Sun and Weiquan Huang and Jiang Wu and Chenya Gu and Wei Li and Songyang Zhang and Hang Yan and Conghui He},
      year={2024},
      eprint={2403.14112},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## 🌳 License
This project is released under the Apache 2.0 [license](./LICENSE).