Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/FranxYao/Long-Context-Data-Engineering
Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context".
- Host: GitHub
- URL: https://github.com/FranxYao/Long-Context-Data-Engineering
- Owner: FranxYao
- Created: 2024-01-31T08:39:03.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2024-03-19T03:57:10.000Z (8 months ago)
- Last Synced: 2024-06-14T01:58:23.140Z (5 months ago)
- Language: Python
- Size: 4.32 MB
- Stars: 370
- Watchers: 8
- Forks: 24
- Open Issues: 11
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - FranxYao/Long-Context-Data-Engineering
README
# Long-Context Data Engineering
Logo image (ChatGPT-4 + DALL·E 3 prompt: "Draw a cartoon-style logo showing a very, very long paper")
🤗 HF Repo • 📃 Paper • 💿 Data

Implementation of the paper:
* Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim and Hao Peng. Feb 2024. _Data Engineering for Scaling Language Models to 128K Context_
Our model is the first public work showing how to achieve GPT-4-level long-context retrieval performance.

## Table of Contents
- [x] Loading and playing with the following continue-pretrained checkpoints:
  - [x] LLaMA-2 7B 80K: continue-pretrained on 80K context, tested on 128K
  - [x] LLaMA-2 13B 64K: continue-pretrained on 64K context, tested on 128K
- [x] Evaluating the pretrained checkpoints on Needle-in-a-Haystack
- [x] Loading the preprocessed data
- [x] Processing the long-context data
- [ ] Continue-pretraining the model on the processed long-context data

## Download the model to local
Create folders for the downloaded models:
```bash
pip install -r requirements.txt # pytorch is not included here because we assume you have already installed pytorch
mkdir ../llama-2-7b-80k
mkdir ../llama-2-13b-64k
```
Download the continue-pretrained checkpoints to local disk:
```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id='yaofu/llama-2-7b-80k',
                  local_dir='../llama-2-7b-80k',
                  repo_type='model',
                  local_dir_use_symlinks=False,
                  resume_download=True)

snapshot_download(repo_id='yaofu/llama-2-13b-64k',
                  local_dir='../llama-2-13b-64k',
                  repo_type='model',
                  local_dir_use_symlinks=False,
                  resume_download=True)
```
We recommend downloading the checkpoints to local disk first instead of loading them directly from the Hugging Face Hub, as in the following:
```python
import torch
from transformers import AutoModelForCausalLM

# Below is slow and hard to control in a cluster.
# Unless you insist, **we recommend downloading the model to local disk first**.
model = AutoModelForCausalLM.from_pretrained("yaofu/llama-2-7b-80k",
                                             use_flash_attention_2="flash_attention_2",
                                             torch_dtype=torch.bfloat16)
```

## Load the continue-pretrained checkpoint and play with it
The following code requires at least 8x 4090 GPUs to support 80K context; with 4x 80G A100s you can reach at least 128K.

We use the `tensor_parallel` implementation from [this repo](https://github.com/BlackSamorez/tensor_parallel) because it is much faster than Hugging Face's `device_map` and more lightweight than vLLM. However, it has a small bug: if your GPU memory is not large enough, it hangs instead of throwing an out-of-memory exception, so make sure you really do have enough GPU memory.
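Because of that hang, a quick pre-flight check of free GPU memory can save time. The snippet below is purely illustrative and not part of this repo; the 20 GB threshold is a rough assumption, not a number from the paper.
```python
# Illustrative pre-flight check (not part of this repo): make sure every
# visible GPU has enough free memory before loading the 80K-context model.
import torch

MIN_FREE_GB = 20  # rough, assumed per-GPU requirement; adjust for your context length
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)  # returns (free, total) in bytes
    print(f"GPU {i}: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
    if free / 1e9 < MIN_FREE_GB:
        raise RuntimeError(f"GPU {i} likely has too little free memory for this context length")
```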
```python
import torch
import tensor_parallel as tp
from transformers import AutoModelForCausalLM, AutoTokenizer
from eval.needle.utils import load_context, insert_needle

# This is the continue-pretrained LLaMA-2 7B model with a modified RoPE
def reset_rope(model, model_max_train_len, scaling_factor):
    # Reset the RoPE scaling factor and recompute the cos/sin cache for the longer context
    for l in model.model.layers:
        l.self_attn.rotary_emb.scaling_factor = scaling_factor
        l.self_attn.rotary_emb._set_cos_sin_cache(seq_len=model_max_train_len, device="cpu", dtype=torch.float32)
    return
model = AutoModelForCausalLM.from_pretrained("../llama-2-7b-80k",
                                             use_flash_attention_2="flash_attention_2",
                                             torch_dtype=torch.bfloat16)  # requires about 14G of disk space in $HF_HOME
scaling_factor = 10  # hardcoded here
reset_rope(model, model_max_train_len=81920, scaling_factor=scaling_factor)
model = tp.tensor_parallel(model, sharded=True)

# Construct the Needle-in-a-Haystack prompt
needle = "\nThe best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.\n"
ctx_len = 100000 # need at least 8*4090 to run this length
depth = 0.5
context = load_context(fpath="eval/needle/PaulGrahamEssays/*.txt", ctx_len=ctx_len)
context = insert_needle(context, needle, depth=depth)
needle_idx = context.find("The best thing to do in San Francisco is")
print("Context has %d chars, needle inserted at %d char location:\n" % (len(context), needle_idx))
print(context[needle_idx - 150: needle_idx + 150])  # look at how the needle is inserted

prompt = "\n<|im_start|> This is a very long story book: %s .\n" % context
question = "What is the best thing to do in San Francisco?"
prompt += "Based on the content of the book, Question: %s\nAnswer:" % question
print(prompt)  # feel the length of 100K

# Check how the model performs
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
prompt = tokenizer(prompt, return_tensors="pt")
input_ids = prompt['input_ids'].to(model.device)
print("After tokenization, there is %d tokens" % len(input_ids[0]))
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=50)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print("Response:", response.split("\n")[0])
```

## Evaluate the pretrained checkpoint on the Needle-in-a-Haystack test
The evaluation requires 4x 80G A100s and takes less than 24 hours to finish.
The inference could be sped up further by optimizing the tokenizer (tokenizing a 100K-token document takes a long time); we leave this to future work.
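One possible speed-up, not implemented in this repo, is to tokenize the long context in chunks with the fast tokenizer's batched call and concatenate the ids; token merges at chunk boundaries may differ slightly from tokenizing the whole string at once, which is usually tolerable for this evaluation. A minimal sketch:
```python
# Possible speed-up (illustration only): tokenize the ~100K-char context in
# chunks with the fast tokenizer's batched call, then concatenate the ids.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=True)

def tokenize_in_chunks(text: str, chunk_chars: int = 10_000) -> list[int]:
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    batched = tokenizer(chunks, add_special_tokens=False)["input_ids"]
    return [tok_id for ids in batched for tok_id in ids]
```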
```bash
cd eval/needle
mkdir logs img results

(
python -u needle_in_haystack.py --s_len 0 --e_len 128000\
--model_provider LLaMA\
--model_path ../../../llama-2-7b-80k
) 2>&1 | tee logs/eval_llama-2-7b-80k.log

python visualize.py
```

## Evaluate the pretrained checkpoint on the BookQA dataset from InfiniteBench
Code and data adapted from the original [InfiniteBench](https://github.com/OpenBMB/InfiniteBench/tree/main) authors.
```bash
cd eval/book
mkdir -p data logs
```
Then download `longbook_qa_eng.json` from [here](https://drive.google.com/drive/folders/1IkfRudRr180CbqOpa5PtSHYW4__XGUpH?usp=sharing) and put it under the `data` folder.
```bash
(
python -u eval_book.py --task longbook_qa_eng\
--verbose\
--model_path ../../../llama-2-7b-80k\
--data_dir data\
--model_name llama\
--truncate 128000
) 2>&1 | tee logs/eval_llama_7b_80k_test_to_128k.log
```
Caveat: there are two versions of `longbook_qa_eng`:
* The original version was uploaded by the InfiniteBench authors at [this commit](https://huggingface.co/datasets/xinrongzhang2022/InfiniteBench/commit/c583fe67832c26f6094515dbe6c3c26c28d840ee).
* The authors recently updated the data at [this commit](https://huggingface.co/datasets/xinrongzhang2022/InfiniteBench/commit/f2fd8f04ea3af8304b88de2c58bd33887bcccdb8). Consequently, if you download InfiniteBench from HF directly, you will be using different data than we used (see the sketch after this list for one way to pin the original revision).
* Here we upload the version we used for the paper under our `data` folder. This increases the risk of the dataset being exposed to future LLM training; hopefully by then we will already have a better long-context eval :)
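If you prefer to fetch the data from the Hugging Face Hub instead, one way to stay on the original version is to pin the first commit linked above. The snippet below is only a sketch: the filename `longbook_qa_eng.jsonl` is our assumption about the Hub repo layout, so check the repo's file list first.
```python
# Hedged sketch: download longbook_qa_eng pinned to the original HF commit
# referenced above. The filename is an assumption about the repo layout.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="xinrongzhang2022/InfiniteBench",
    filename="longbook_qa_eng.jsonl",  # assumed filename; verify on the Hub
    repo_type="dataset",
    revision="c583fe67832c26f6094515dbe6c3c26c28d840ee",  # the original upload
)
print(path)
```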
## Load the preprocessed data

The following code requires about 60G of disk space in the `$HF_CACHE` folder. The data is processed from [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) using the per-source length upsampling described in Section 3 of our paper. We have already tokenized and chunked the data into the following format:
```python
import datasets
from transformers import AutoTokenizer
dataset = datasets.load_dataset("yaofu/slimpajama-per-source-length-upsample")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")d = dataset["train"][0]
print(d.keys())
print(d["source"])
print(len(d["input_ids"])) ## all input_ids are chunks of length 131072doc_id = 0
doc_start, doc_end = d["source"][doc_id]["start"], d["source"][doc_id]["end"]
print(tokenizer.decode(d["input_ids"][doc_start: doc_end]))

doc_id = 1
doc_start, doc_end = d["source"][doc_id]["start"], d["source"][doc_id]["end"]
print(tokenizer.decode(d["input_ids"][doc_start: doc_end]))
```
Alternatively, you can use `streaming=True` to avoid the long download time.
However, we do recommend downloading the data first, because it saves a lot of time the second time you load the dataset.
```python
import datasets
from transformers import AutoTokenizer
dataset = datasets.load_dataset("yaofu/slimpajama-per-source-length-upsample", streaming=True)
it = iter(dataset["train"])
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

d = next(it)
print(d.keys())
print(d["source"])
print(len(d["input_ids"])) ## all input_ids are chunks of length 131072doc_id = 0
doc_start, doc_end = d["source"][doc_id]["start"], d["source"][doc_id]["end"]
print(tokenizer.decode(d["input_ids"][doc_start: doc_end]))

doc_id = 1
doc_start, doc_end = d["source"][doc_id]["start"], d["source"][doc_id]["end"]
print(tokenizer.decode(d["input_ids"][doc_start: doc_end]))
```

## Generate the per-source length-upsampled data
We recommend downloading the SlimPajama data to local disk first. Start by creating a folder:
```bash
mkdir ../SlimPajama-627B
```
Then download the dataset. This requires about 1.8T of disk space and takes quite a while. Remember that this is not finetuning, so be patient.
```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id='cerebras/SlimPajama-627B',
                  local_dir='../SlimPajama-627B',
                  repo_type='dataset',
                  local_dir_use_symlinks=False,
                  resume_download=True)
```
Then generate the per-source length-upsampled data. In our practice we down-sample sequences shorter than 4K.
Note that this is equivalent to upsampling sequences longer than 4K.
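Read literally, this is a per-document keep/drop decision. The sketch below only illustrates that rule (it is not the repo's `slimpajama_packing.py`), using the 4K threshold and the `--down_sample_ratio=0.1` value from the command further down.
```python
# Illustration only: keep every document with at least 4K tokens, and keep
# shorter documents with probability `down_sample_ratio` (0.1 in the command below).
import random

def keep_document(num_tokens: int, down_sample_ratio: float = 0.1) -> bool:
    if num_tokens >= 4096:
        return True
    return random.random() < down_sample_ratio
```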
We use multiprocessing: there are 200 tokenizer processes, a reader process (which is also the main process), and a writer process.
The main process reads the data in streaming mode, then asks which tokenizer process is free.
If a tokenizer process is free, the main process assigns the current document to it; otherwise it waits and keeps asking.
A tokenizer process receives a document from the main process, tokenizes it, and sends the tokens to the writer process.
The writer process continuously receives tokenized data from all tokenizer processes and writes it into a .jsonl file. A minimal sketch of this pattern is shown below.
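The sketch below is not the repo's `slimpajama_packing.py`; it is a self-contained illustration of the reader → tokenizer-workers → writer pattern just described, with an illustrative worker count and a stand-in document list. A bounded task queue is what makes the reader wait when every worker is busy.
```python
# Minimal illustration (not slimpajama_packing.py) of the pipeline described
# above: a reader feeds documents to tokenizer workers through a bounded task
# queue, and a writer process drains a result queue into a .jsonl file.
import json
import multiprocessing as mp

from transformers import AutoTokenizer

SENTINEL = None  # signals "no more work"

def tokenizer_worker(task_q, result_q):
    # Each worker loads its own tokenizer and processes documents until it
    # receives the sentinel.
    tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    while True:
        doc = task_q.get()
        if doc is SENTINEL:
            break
        result_q.put({"source": doc["source"],
                      "input_ids": tok(doc["text"])["input_ids"]})

def writer(result_q, out_path, n_workers):
    # Write tokenized documents to a .jsonl file; stop after one sentinel
    # per tokenizer worker has arrived.
    done = 0
    with open(out_path, "w") as f:
        while done < n_workers:
            item = result_q.get()
            if item is SENTINEL:
                done += 1
            else:
                f.write(json.dumps(item) + "\n")

if __name__ == "__main__":
    n_workers = 4  # the actual run uses ~200 processes
    task_q = mp.Queue(maxsize=2 * n_workers)  # bounded: the reader blocks when all workers are busy
    result_q = mp.Queue()
    workers = [mp.Process(target=tokenizer_worker, args=(task_q, result_q))
               for _ in range(n_workers)]
    writer_proc = mp.Process(target=writer, args=(result_q, "tokenized.jsonl", n_workers))
    for p in workers:
        p.start()
    writer_proc.start()

    # Stand-in for streaming SlimPajama documents from disk.
    docs = [{"source": "demo", "text": "hello world"}] * 10
    for doc in docs:
        task_q.put(doc)           # blocks when the queue is full
    for _ in range(n_workers):
        task_q.put(SENTINEL)      # one stop signal per worker
    for p in workers:
        p.join()
    for _ in range(n_workers):
        result_q.put(SENTINEL)    # tell the writer that all workers are done
    writer_proc.join()
```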
The actual `slimpajama_packing.py` run below requires about 200 CPU cores and 50G of CPU memory; tokenizing 5B tokens takes about 1 hour.
If you do not use multiprocessing like we do, tokenization alone takes about two days.
```bash
mkdir logs
mkdir data
mkdir data/slimpajama
mkdir data/slimpajama/per_source_downsample
cd data_engineering

PATH_TO_SLIMPAJAMA=../SlimPajama-627B
nohup python -u slimpajama_packing.py\
--dataset_size=100m\
--print_interval=100 --num_process=200\
--dataset_path=$PATH_TO_SLIMPAJAMA\
--output_path=../data/slimpajama/per_source_downsample/ --down_sample_ratio=0.1 --down_sample_mode=per_source\
> ../logs/slimpajama_packing_dist_per_source_downsample_0.1.log 2>&1 &
tail -f ../logs/slimpajama_packing_dist_per_source_downsample_0.1.log
```
The `--dataset_size=100m` flag is for a quick demo; change it to `--dataset_size=5B` to reproduce our training data.