# Loong

**Loong: Benchmarking Long-Context LLMs with Extended Multi-Doc QA**


## 👀Overview
This repository contains the code for our paper [Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA](https://arxiv.org/abs/2406.17419). We propose a novel long-context benchmark, 🐉 **Loong**, aligned with realistic scenarios through extended multi-document question answering (QA). Each Loong test instance contains 11 documents on average, spanning three real-world scenarios in English and Chinese: (1) *Financial Reports*, (2) *Legal Cases*, and (3) *Academic Papers*. Loong introduces new evaluation tasks from the perspectives of *Spotlight Locating*, *Comparison*, *Clustering*, and *Chain of Reasoning*, to enable a more realistic and comprehensive evaluation of long-context understanding. Furthermore, Loong features inputs of varying lengths (*10K-50K*, *50K-100K*, *100K-200K*, *beyond 200K*) and evaluation tasks of diverse difficulty, enabling fine-grained assessment of LLMs across different context lengths and task complexities.
> *Please find more details of this work in our paper.*

![Overview of Loong](assets/main_fig.jpg)
> Showcase of the four evaluation tasks in Loong (the marked spans denote the content of the i-th document). (a) *Spotlight Locating*: Locate the evidence. (b) *Comparison*: Locate and compare the evidence. (c) *Clustering*: Locate and cluster the evidence into groups. (d) *Chain of Reasoning*: Locate and reason along a logical chain.

## 📰News
`[2024-09-20]` 📰Our paper has been accepted to the EMNLP Main Conference.

`[2024-07-30]` 🤖The results of Phi-3, Llama-3.1-8B, and GPT-4o-mini on Loong have been updated.

`[2024-07-03]` 🔥The code and benchmark have been released. If you encounter any issues, please feel free to contact us.

`[2024-06-25]` 👨‍💻The code is currently being refined, and we plan to release the evaluation code and benchmark within the next one or two weeks. If you encounter any issues, please feel free to contact me at [email protected].

## 🏆Leaderboard



| Models | Claimed Length | Spotlight Locating | Comparison | Clustering | Chain of Reasoning | Overall |
|---|---|---|---|---|---|---|
| Gemini-1.5-pro | 1000K | 75.02 / 0.56 | 49.94 / 0.27 | 44.10 / 0.09 | 64.97 / 0.37 | 55.37 / 0.27 |
| GPT-4o | 128K | 73.95 / 0.62 | 50.50 / 0.28 | 44.29 / 0.09 | 57.95 / 0.28 | 53.47 / 0.26 |
| Claude3.5-Sonnet | 200K | 58.45 / 0.49 | 54.21 / 0.35 | 45.77 / 0.07 | 43.92 / 0.25 | 48.85 / 0.23 |
| Claude3-Haiku | 200K | 68.68 / 0.59 | 42.10 / 0.21 | 35.04 / 0.02 | 47.59 / 0.17 | 44.88 / 0.19 |
| Qwen2-72B-Instruct | 128K | 54.17 / 0.36 | 42.38 / 0.20 | 36.71 / 0.04 | 47.76 / 0.18 | 43.29 / 0.15 |
| GPT-4o-mini | 128K | 53.12 / 0.41 | 44.27 / 0.20 | 32.58 / 0.04 | 52.34 / 0.23 | 42.95 / 0.18 |
| GLM4-9B-Chat | 1000K | 57.35 / 0.47 | 40.38 / 0.20 | 28.52 / 0.02 | 39.94 / 0.16 | 38.31 / 0.16 |
| Kimi-Chat | 200K | 60.98 / 0.50 | 34.74 / 0.13 | 28.76 / 0.04 | 38.52 / 0.15 | 37.49 / 0.16 |
| Llama-3.1-8B-Instruct | 128K | 59.96 / 0.46 | 35.73 / 0.18 | 27.83 / 0.01 | 35.59 / 0.14 | 36.31 / 0.14 |
| Phi-3-small | 128K | 29.23 / 0.10 | 20.12 / 0.06 | 17.53 / 0.00 | 14.36 / 0.01 | 19.03 / 0.03 |
| Phi-3-mini | 128K | 25.65 / 0.15 | 13.34 / 0.04 | 12.00 / 0.00 | 12.61 / 0.01 | 14.54 / 0.04 |

> Overall results on the four evaluation tasks. In each cell, the left value is the **_Avg Score_** (0~100) and the right value is the **_Perfect Rate_** (0~1).



| Model | Claimed Length | Spotlight Locating | Comparison | Clustering | Chain of Reasoning | Overall |
|---|---|---|---|---|---|---|
| **Set1 (10K-50K)** | | | | | | |
| GPT-4o | 128K | 85.67 / 0.81 | 64.27 / 0.33 | 57.01 / 0.24 | 81.58 / 0.55 | 70.40 / 0.44 |
| Claude3.5-Sonnet | 200K | 60.85 / 0.55 | 69.07 / 0.47 | 58.63 / 0.13 | 68.57 / 0.50 | 63.69 / 0.37 |
| Gemini-1.5-pro | 1000K | 75.00 / 0.60 | 54.88 / 0.28 | 56.15 / 0.23 | 70.64 / 0.37 | 63.36 / 0.34 |
| GPT-4o-mini | 128K | 62.49 / 0.56 | 65.48 / 0.40 | 45.81 / 0.12 | 79.85 / 0.55 | 62.42 / 0.36 |
| Qwen2-72B-Instruct | 200K | 68.49 / 0.55 | 60.60 / 0.37 | 47.08 / 0.08 | 70.39 / 0.36 | 60.11 / 0.29 |
| Claude3-Haiku | 200K | 60.94 / 0.55 | 59.97 / 0.40 | 45.53 / 0.04 | 66.85 / 0.34 | 57.14 / 0.28 |
| Kimi-Chat | 200K | 81.11 / 0.74 | 46.70 / 0.20 | 47.84 / 0.07 | 53.77 / 0.17 | 55.02 / 0.24 |
| GLM4-9B-Chat | 1000K | 63.11 / 0.53 | 54.10 / 0.27 | 39.50 / 0.08 | 56.32 / 0.28 | 51.43 / 0.25 |
| Llama-3.1-8B-Instruct | 128K | 67.91 / 0.57 | 41.62 / 0.20 | 36.55 / 0.04 | 54.74 / 0.34 | 48.10 / 0.24 |
| Phi-3-mini | 128K | 46.13 / 0.30 | 22.18 / 0.05 | 19.30 / 0.02 | 20.44 / 0.03 | 24.58 / 0.07 |
| Phi-3-small | 128K | 35.00 / 0.15 | 26.83 / 0.12 | 17.01 / 0.00 | 15.87 / 0.00 | 21.44 / 0.05 |
| **Set2 (50K-100K)** | | | | | | |
| GPT-4o | 128K | 86.76 / 0.72 | 59.81 / 0.40 | 47.83 / 0.11 | 62.09 / 0.34 | 58.38 / 0.29 |
| Gemini-1.5-pro | 1000K | 76.50 / 0.57 | 54.51 / 0.34 | 44.58 / 0.09 | 64.87 / 0.34 | 55.56 / 0.26 |
| Claude3.5-Sonnet | 200K | 63.83 / 0.53 | 58.90 / 0.39 | 50.96 / 0.10 | 46.09 / 0.26 | 52.73 / 0.24 |
| GPT-4o-mini | 128K | 63.54 / 0.46 | 51.48 / 0.26 | 36.56 / 0.04 | 56.51 / 0.25 | 47.74 / 0.19 |
| Qwen2-72B-Instruct | 128K | 64.53 / 0.43 | 42.60 / 0.21 | 38.52 / 0.05 | 51.18 / 0.20 | 45.71 / 0.17 |
| Claude3-Haiku | 200K | 73.71 / 0.66 | 41.90 / 0.22 | 36.18 / 0.02 | 50.20 / 0.15 | 45.45 / 0.17 |
| Kimi-Chat | 200K | 72.82 / 0.52 | 46.77 / 0.21 | 33.46 / 0.06 | 40.51 / 0.15 | 42.40 / 0.16 |
| Llama-3.1-8B-Instruct | 128K | 72.79 / 0.59 | 44.51 / 0.27 | 32.98 / 0.01 | 40.53 / 0.15 | 41.98 / 0.16 |
| GLM4-9B-Chat | 1000K | 65.04 / 0.54 | 41.80 / 0.23 | 30.72 / 0.02 | 42.34 / 0.17 | 40.19 / 0.17 |
| Phi-3-small | 128K | 34.17 / 0.16 | 22.08 / 0.08 | 20.51 / 0.01 | 16.20 / 0.01 | 21.40 / 0.04 |
| Phi-3-mini | 128K | 44.71 / 0.29 | 22.81 / 0.09 | 16.37 / 0.00 | 15.39 / 0.01 | 20.84 / 0.05 |
| **Set3 (100K-200K)** | | | | | | |
| Gemini-1.5-pro | 1000K | 81.25 / 0.56 | 44.66 / 0.20 | 39.90 / 0.05 | 58.38 / 0.36 | 52.05 / 0.24 |
| GPT-4o | 128K | 74.84 / 0.65 | 42.40 / 0.21 | 38.70 / 0.04 | 45.06 / 0.09 | 46.95 / 0.19 |
| Claude3.5-Sonnet | 200K | 65.36 / 0.56 | 50.32 / 0.34 | 37.79 / 0.03 | 25.95 / 0.11 | 42.06 / 0.19 |
| Claude3-Haiku | 200K | 77.81 / 0.67 | 37.07 / 0.17 | 30.94 / 0.01 | 36.87 / 0.12 | 41.41 / 0.18 |
| GPT-4o-mini | 128K | 58.27 / 0.49 | 33.46 / 0.09 | 27.33 / 0.01 | 35.67 / 0.04 | 35.63 / 0.11 |
| Qwen2-72B-Instruct | 128K | 46.99 / 0.27 | 37.06 / 0.13 | 31.50 / 0.02 | 35.01 / 0.07 | 35.94 / 0.09 |
| GLM4-9B-Chat | 1000K | 69.19 / 0.56 | 37.99 / 0.18 | 26.63 / 0.01 | 32.30 / 0.09 | 37.36 / 0.16 |
| Kimi-Chat | 200K | 62.13 / 0.54 | 24.20 / 0.05 | 21.98 / 0.01 | 31.02 / 0.14 | 31.37 / 0.14 |
| Llama-3.1-8B-Instruct | 128K | 60.05 / 0.46 | 25.86 / 0.11 | 21.96 / 0.00 | 19.14 / 0.02 | 28.41 / 0.10 |
| Phi-3-small | 128K | 25.12 / 0.06 | 15.26 / 0.01 | 16.80 / 0.00 | 12.75 / 0.01 | 16.94 / 0.01 |
| Phi-3-mini | 128K | 7.40 / 0.03 | 1.97 / 0.00 | 6.07 / 0.00 | 7.38 / 0.01 | 5.79 / 0.01 |
| **Set4 (200K-250K)** | | | | | | |
| Gemini-1.5-pro | 1000K | 62.23 / 0.49 | 43.08 / 0.20 | 36.48 / 0.00 | 68.51 / 0.49 | 50.70 / 0.25 |
| Claude3-Haiku | 200K | 53.26 / 0.40 | 27.00 / 0.03 | 25.36 / 0.00 | 28.11 / 0.05 | 32.15 / 0.10 |
| GPT-4o | 128K | 36.79 / 0.19 | 23.97 / 0.08 | 30.40 / 0.00 | 32.89 / 0.07 | 31.11 / 0.07 |
| Claude3.5-Sonnet | 200K | 36.91 / 0.24 | 28.82 / 0.05 | 28.68 / 0.00 | 28.77 / 0.08 | 30.51 / 0.08 |
| Qwen2-72B-Instruct | 128K | 33.18 / 0.16 | 26.59 / 0.08 | 29.84 / 0.01 | 25.81 / 0.04 | 28.92 / 0.06 |
| Llama-3.1-8B-Instruct | 128K | 31.72 / 0.13 | 27.27 / 0.10 | 15.17 / 0.00 | 22.89 / 0.02 | 22.51 / 0.05 |
| GPT-4o-mini | 128K | 20.66 / 0.09 | 19.18 / 0.03 | 16.03 / 0.00 | 27.81 / 0.00 | 20.41 / 0.02 |
| GLM4-9B-Chat | 1000K | 15.67 / 0.12 | 21.33 / 0.05 | 12.35 / 0.00 | 21.04 / 0.05 | 16.84 / 0.05 |
| Kimi-Chat | 200K | 20.17 / 0.12 | 9.17 / 0.00 | 5.65 / 0.00 | 22.61 / 0.11 | 13.50 / 0.05 |
| Phi-3-small | 128K | 22.36 / 0.02 | 16.43 / 0.05 | 11.50 / 0.00 | 10.35 / 0.00 | 14.27 / 0.01 |
| Phi-3-mini | 128K | 5.21 / 0.00 | 2.20 / 0.00 | 3.45 / 0.00 | 2.58 / 0.00 | 3.38 / 0.00 |

> Performance of LLMs on the four evaluation tasks across the different length sets. In each cell, the left value is the **_Avg Score_** (0~100) and the right value is the **_Perfect Rate_** (0~1).

- Following previous work, we prompt GPT-4 as a judge to evaluate the model's output against the golden answer and the question's requirements from three aspects: *Accuracy*, *Hallucinations*, and *Completeness*, scoring from 0 to 100. For the detailed prompt, please refer to our paper.
- We report two indicators: (1) **_Avg Score_**: the average of the scores given by GPT-4 over all questions; (2) **_Perfect Rate_**: the proportion of cases scoring exactly 100 out of all cases. The latter is a more stringent metric than the former. A minimal sketch of computing both indicators follows this list.
- We set `temperature = 0` to eliminate randomness and keep the other hyper-parameters at their defaults. For API-based LLMs, we directly use the official API for testing; since Kimi-Chat-200K does not currently provide an API, we manually input content through the web interface. For open-source models, we run experiments on a server with 8 × A100 80GB GPUs.
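
As a reference, computing both indicators from a file of per-question judge scores might look like the sketch below. The file path and the `score` field name are assumptions for illustration only; adapt them to the actual output of `step3_model_evaluate.py`.

```python
import json

def compute_metrics(jsonl_path, score_key="score"):
    """Compute Avg Score (0~100) and Perfect Rate (0~1) from per-question judge scores.

    Assumes one JSON object per line with a numeric score under `score_key`;
    adjust the key to match the real output of step3_model_evaluate.py.
    """
    scores = []
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            scores.append(float(json.loads(line)[score_key]))
    avg_score = sum(scores) / len(scores)                         # mean of GPT-4 scores
    perfect_rate = sum(s == 100 for s in scores) / len(scores)    # fraction scoring exactly 100
    return avg_score, perfect_rate

if __name__ == "__main__":
    avg, perfect = compute_metrics("results/judged.jsonl")  # hypothetical path
    print(f"Avg Score: {avg:.2f}, Perfect Rate: {perfect:.2f}")
```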

## 🔧Evaluate long-context LLMs
**Step1** Download Loong benchmark and docs
```shell
git clone https://github.com/MozerWang/Loong.git
cd Loong
wget -P data/ http://alibaba-research.oss-cn-beijing.aliyuncs.com/loong/doc.zip
unzip data/doc.zip -d data/
```

**Step2** Create a conda environment and install the dependencies.
```shell
conda create --name loong python=3.9 -y
conda activate loong
pip install -r requirements.txt
```

**Step3** Prepare the model

1. (**Must**) Set your OpenAI API key in `config/models/gpt4.yaml` (GPT-4 serves as the evaluation judge):
```yaml
api_key: "Your OPENAI key"
```
2. If you are using an API-based LLM, first set your key in the corresponding `config/models/*.yaml`:
```yaml
api_key: "Your API key"
```
3. If you are using an open-source LLM, we recommend serving it with vLLM through an HTTP server that implements OpenAI's Completions and Chat APIs. Usage examples for Qwen2 and GLM4 are provided in `src/vllm_example.sh` (a minimal connectivity check is sketched below):
```shell
cd src
sh vllm_example.sh
```
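
Once the server is up, a quick way to confirm the endpoint responds is a one-off chat request. This is a minimal sketch only: the base URL, port, and served model name are assumptions and must match whatever `vllm_example.sh` actually launches.

```python
from openai import OpenAI

# Assumed local vLLM endpoint; adjust host, port, and model name to match vllm_example.sh.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2-72B-Instruct",  # hypothetical served model name
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    temperature=0,
)
print(response.choices[0].message.content)
```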

**Step4** Evaluate
```shell
cd src
sh run.sh
```

**Things To Know**
- We provide a complete evaluation process:
  - `step1_load_data.py` # Data loading
  - `step2_model_generate.py` # Model generation
  - `step3_model_evaluate.py` # GPT-4 evaluation
  - `step4_cal_metric.py` # Result statistics

- For `step2_model_generate.py`, you can implement the model-generation part yourself and plug in your own model's inference method. Just make sure the input and output interfaces in `src/utils/generate.py` remain consistent (a minimal example is sketched after the block below):
```python
# Input
generate(prompts, config, output_path, process_num, tag)

# Output
result = prompt.copy()  # for each prompt in prompts
result[tag] = response_content  # your LLM's response
with open(output_path, 'a', encoding='utf-8') as fw:
    fw.write(json.dumps(result, ensure_ascii=False) + '\n')
```
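
For instance, a drop-in replacement that keeps this contract could look roughly like the following; `call_my_model` is a hypothetical placeholder for your own inference call, and the `prompt` field name inside each item is an assumption.

```python
import json

def call_my_model(prompt_text):
    """Hypothetical placeholder: replace with your own model's inference call."""
    raise NotImplementedError

def generate(prompts, config, output_path, process_num, tag):
    """Single-process sketch preserving the documented I/O contract.

    `config` and `process_num` are accepted for interface compatibility but unused here.
    """
    for prompt in prompts:
        result = prompt.copy()
        result[tag] = call_my_model(result.get("prompt", ""))  # assumed field name for the input text
        with open(output_path, 'a', encoding='utf-8') as fw:
            fw.write(json.dumps(result, ensure_ascii=False) + '\n')
```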

- In `data/loong.jsonl`, the `level` key indicates the evaluation task (see the sketch after this list):
  - `level1` # Spotlight Locating
  - `level2` # Comparison
  - `level3` # Clustering
  - `level4` # Chain of Reasoning
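
For example, a small sketch for grouping instances by task. It assumes only that each line of `data/loong.jsonl` is a JSON object with a `level` field as listed above; the exact value format (e.g. `"level1"` vs `1`) should be checked against the data.

```python
import json
from collections import Counter

LEVEL_TO_TASK = {
    "level1": "Spotlight Locating",
    "level2": "Comparison",
    "level3": "Clustering",
    "level4": "Chain of Reasoning",
}

# Count how many test instances fall under each task.
counts = Counter()
with open("data/loong.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        level = item["level"]
        # Normalize in case the field stores an integer rather than "levelN".
        key = level if isinstance(level, str) else f"level{level}"
        counts[LEVEL_TO_TASK.get(key, str(key))] += 1

for task, n in sorted(counts.items()):
    print(f"{task}: {n}")
```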

## Citation
```bibtex
@inproceedings{wang2024loong,
  title     = {Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA},
  author    = {Minzheng Wang and Longze Chen and Cheng Fu and Shengyi Liao and Xinghua Zhang and Bingli Wu and Haiyang Yu and Nan Xu and Lei Zhang and Run Luo and Yunshui Li and Min Yang and Fei Huang and Yongbin Li},
  booktitle = {Proceedings of EMNLP},
  year      = {2024},
  url       = {https://aclanthology.org/2024.emnlp-main.322},
  pages     = {5627--5646},
}
```