LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
https://github.com/THUDM/LongCite
- Host: GitHub
- URL: https://github.com/THUDM/LongCite
- Owner: THUDM
- License: apache-2.0
- Created: 2024-08-31T13:53:10.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-12-31T03:25:42.000Z (about 2 months ago)
- Last Synced: 2025-01-26T16:01:44.441Z (26 days ago)
- Topics: benchmark, citation-generation, fine-tuning, llm, long-context
- Language: Python
- Homepage:
- Size: 15.2 MB
- Stars: 453
- Watchers: 11
- Forks: 33
- Open Issues: 9
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
Awesome Lists containing this project
- StarryDivineSky - THUDM/LongCite - Open-sources two models, LongCite-glm4-9b and LongCite-llama3.1-8b, trained on GLM-4-9B and Meta-Llama-3.1-8B respectively and supporting up to 128K of context. They correspond to the "LongCite-9B" and "LongCite-8B" models in the paper. Given a long-context-based query, these models generate accurate responses and precise sentence-level citations, making it easy for users to verify the output information. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
README
# LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA
🤗 HF Repo • 📃 Paper • 🚀 HF Space

[English](./README.md) | [中文](./README_zh.md)
https://github.com/user-attachments/assets/68f6677a-3ffd-41a8-889c-d56a65f9e3bb
## 🔍 Table of Contents
- [⚙️ LongCite Deployment](#deployment)
- [🤖️ CoF pipeline](#pipeline)
- [🖥️ Model Training](#training)
- [📊 Evaluation](#evaluation)
- [📝 Citation](#citation)

## ⚙️ LongCite Deployment

**Environmental Setup**:
We recommend using `transformers>=4.43.0` to successfully deploy our models.

We open-source two models: [LongCite-glm4-9b](https://huggingface.co/THUDM/LongCite-glm4-9b) and [LongCite-llama3.1-8b](https://huggingface.co/THUDM/LongCite-llama3.1-8b), which are trained based on [GLM-4-9B](https://huggingface.co/THUDM/glm-4-9b) and [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B), respectively, and support up to 128K context. These two models correspond to the "LongCite-9B" and "LongCite-8B" models in our paper. Given a long-context-based query, these models can generate accurate responses and precise sentence-level citations, making it easy for users to verify the output information. Try the model:
```python
import json
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('THUDM/LongCite-glm4-9b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('THUDM/LongCite-glm4-9b', torch_dtype=torch.bfloat16, trust_remote_code=True, device_map='auto')

context = '''
W. Russell Todd, 94, United States Army general (b. 1928). February 13. Tim Aymar, 59, heavy metal singer (Pharaoh) (b. 1963). Marshall \"Eddie\" Conway, 76, Black Panther Party leader (b. 1946). Roger Bonk, 78, football player (North Dakota Fighting Sioux, Winnipeg Blue Bombers) (b. 1944). Conrad Dobler, 72, football player (St. Louis Cardinals, New Orleans Saints, Buffalo Bills) (b. 1950). Brian DuBois, 55, baseball player (Detroit Tigers) (b. 1967). Robert Geddes, 99, architect, dean of the Princeton University School of Architecture (1965–1982) (b. 1923). Tom Luddy, 79, film producer (Barfly, The Secret Garden), co-founder of the Telluride Film Festival (b. 1943). David Singmaster, 84, mathematician (b. 1938).
'''
query = "What was Robert Geddes' profession?"
result = model.query_longcite(context, query, tokenizer=tokenizer, max_input_length=128000, max_new_tokens=1024)

print("Answer:\n{}\n".format(result['answer']))
print("Statement with citations:\n{}\n".format(
json.dumps(result['statements_with_citations'], indent=2, ensure_ascii=False)))
print("Context (divided into sentences):\n{}\n".format(result['splited_context']))
```
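For reference, a minimal sketch of the shape of the returned dict, inferred from the print statements above; the exact schema is defined by the model's remote code, so the field descriptions here are assumptions:
```python
# Shape of `result` as used above (values elided; field descriptions are
# assumptions, not documented API):
result = {
    'answer': '...',                     # response text with inline citation markers
    'statements_with_citations': [...],  # statements, each paired with supporting citations
    'splited_context': '...',            # the context after sentence segmentation
}
```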
You may deploy your own LongCite chatbot (like the one we show in the above video) by running
```
CUDA_VISIBLE_DEVICES=0 streamlit run demo.py --server.fileWatcherType none
```
Alternatively, you can deploy the model with [vllm](https://github.com/vllm-project/vllm), which enables faster generation and concurrent serving of multiple requests. See the code example in [vllm_inference.py](https://github.com/THUDM/LongCite/blob/main/vllm_inference.py).
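For orientation, here is a minimal offline-inference sketch using vLLM's `LLM` API; the prompt string and sampling parameters are illustrative assumptions, and [vllm_inference.py](https://github.com/THUDM/LongCite/blob/main/vllm_inference.py) remains the authoritative example:
```python
# Minimal vLLM sketch (illustrative; see vllm_inference.py for the repo's
# actual prompt construction and post-processing).
from vllm import LLM, SamplingParams

llm = LLM(model='THUDM/LongCite-glm4-9b', trust_remote_code=True, max_model_len=128000)
sampling = SamplingParams(temperature=0.7, max_tokens=1024)
# Placeholder prompt: LongCite expects its own context/query template.
outputs = llm.generate(["<formatted LongCite prompt>"], sampling)
print(outputs[0].outputs[0].text)
```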
## 🤖️ CoF Pipeline
We are also open-sourcing CoF (Coarse to Fine) under `CoF/`, our automated SFT data construction pipeline for generating high-quality long-context QA instances with fine-grained citations. Please configure your API key in `utils/llm_api.py`, then run the following four scripts in order to obtain the final data: `1_qa_generation.py`, `2_chunk_level_citation.py`, `3_sentence_level_citaion.py`, and `4_postprocess_and_filter.py`.

## 🖥️ Model Training

You can download and save the **LongCite-45k** dataset through Hugging Face datasets ([🤗 HF Repo](https://huggingface.co/datasets/THUDM/LongCite-45k)):
```python
import os
from datasets import load_dataset

dataset = load_dataset('THUDM/LongCite-45k')
os.makedirs("train", exist_ok=True)  # ensure the output directory exists
for split, split_dataset in dataset.items():
    split_dataset.to_json("train/long.jsonl")  # save as JSON Lines for SFT
```
You can mix it with general SFT data such as [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main/HTML_cleaned_raw_dataset). We adopt [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) for model training. For a more lightweight implementation, you may adopt the code and environment from [LongAlign](https://github.com/THUDM/LongAlign), which supports a maximum training sequence length of 32k tokens for GLM-4-9B and Llama-3.1-8B.
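As a rough illustration of the mixing step, the sketch below concatenates and shuffles two JSONL files; the file names are hypothetical, and the records must already share a schema your training code understands:
```python
# Hypothetical sketch: mix LongCite-45k with general SFT data.
# File names are placeholders; adapt paths and schema to your trainer.
import json
import random

samples = []
for path in ["train/long.jsonl", "train/sharegpt.jsonl"]:
    with open(path, encoding="utf-8") as f:
        samples.extend(json.loads(line) for line in f)

random.shuffle(samples)  # interleave long-context and general instances
with open("train/mixed.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```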
## 📊 Evaluation
We introduce an automatic benchmark, **LongBench-Cite**, which adopts long-context QA pairs from [LongBench](https://github.com/THUDM/LongBench) and [LongBench-Chat](https://github.com/THUDM/LongAlign) to measure citation quality as well as response correctness in long-context QA scenarios.

We provide our evaluation data and code under `LongBench-Cite/`. Run `pred_sft.py` and `pred_one_shot.py` to get responses from fine-tuned models (e.g., LongCite-glm4-9b) and normal models (e.g., GPT-4o), then run `eval_cite.py` and `eval_correct.py` to evaluate citation quality and response correctness. Remember to configure your OpenAI API key in `utils/llm_api.py`, since we adopt GPT-4o as the judge.
Here are the evaluation results on **LongBench-Cite**:
## 📝 Citation

If you find our work useful, please consider citing LongCite:
```
@article{zhang2024longcite,
  title={LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA},
  author={Jiajie Zhang and Yushi Bai and Xin Lv and Wanjun Gu and Danqing Liu and Minhao Zou and Shulin Cao and Lei Hou and Yuxiao Dong and Ling Feng and Juanzi Li},
  journal={arXiv preprint arXiv:2409.02897},
  year={2024}
}
```