Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/FMInference/FlexGen
Running large language models on a single GPU for throughput-oriented scenarios.
- Host: GitHub
- URL: https://github.com/FMInference/FlexGen
- Owner: FMInference
- License: apache-2.0
- Created: 2023-02-15T21:18:53.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-08T06:31:22.000Z (about 1 month ago)
- Last Synced: 2024-10-15T04:02:32.419Z (29 days ago)
- Topics: deep-learning, gpt-3, high-throughput, large-language-models, machine-learning, offloading, opt
- Language: Python
- Homepage:
- Size: 37.1 MB
- Stars: 9,166
- Watchers: 112
- Forks: 546
- Open Issues: 57
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# FlexGen: High-throughput Generative Inference of Large Language Models with a Single GPU [[paper](https://arxiv.org/abs/2303.06865)]
FlexGen is a high-throughput generation engine for running large language models with limited GPU memory. FlexGen allows **high-throughput** generation by IO-efficient offloading, compression, and **large effective batch sizes**.
## Motivation
In recent years, large language models (LLMs) have shown great performance across a
wide range of tasks. Increasingly, LLMs have been applied not only to interactive
applications (such as chat), but also to many "back-of-house" tasks.
These tasks include benchmarking, information extraction, data wrangling, and form processing.

One key characteristic of these applications is that they are **throughput-oriented**: they require
running LLM inferences over millions of tokens in batches, e.g., all the private documents in a company's
corpus, or all the tasks in the [HELM](https://crfm.stanford.edu/helm/latest/) benchmark.
These workloads are less sensitive to latency - the user starts up a job and lets it run overnight -
but increasing throughput is critical for reducing costs.
Throughput is a measure of tokens processed per second over the job's entire runtime (which can be hours).
Throughput-oriented workloads provide opportunities to trade off latency for higher throughput, which
makes it easier to take advantage of low-cost commodity GPUs.

The goal of FlexGen is to create a high-throughput system to enable new and exciting applications of
foundation models to throughput-oriented tasks on low-cost hardware, such as a single commodity GPU
instead of expensive systems.

Check out the [examples](#examples) of what you can run on a single commodity GPU with FlexGen, including benchmarking and data wrangling.
❌ **Limitation**. As an offloading-based system running on weak GPUs, FlexGen also has its limitations.
FlexGen can be significantly slower than running on GPUs with enough memory to hold the whole model, especially for small-batch cases.
FlexGen is mostly optimized for throughput-oriented batch processing settings (e.g., classifying or extracting information from many documents in batches) on single GPUs.

----------

This project was made possible thanks to a collaboration with

----------
## Content
- [Installation](#installation)
- [Usage and Examples](#usage-and-examples)
- [Get Started with a Single GPU](#get-started-with-a-single-gpu)
- [Run HELM Benchmark with FlexGen](#run-helm-benchmark-with-flexgen)
- [Run Data Wrangling Tasks with FlexGen](#run-data-wrangling-tasks-with-flexgen)
- [Scaling to Distributed GPUs](#scaling-to-distributed-gpus)
- [API Example](#api-example)
- [Frequently Asked Questions](#frequently-asked-questions)
- [Performance Results](#performance-results)
- [How It Works](#how-it-works)
- [Roadmap](#roadmap)

## Installation
Requirements:
- PyTorch >= 1.12 [(Help)](https://pytorch.org/get-started/locally/)

### Method 1: With pip
```
pip install flexgen
```

### Method 2: From source
```
git clone https://github.com/FMInference/FlexGen.git
cd FlexGen
pip install -e .
```
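
With either method, a quick sanity check is to import the package (this snippet is illustrative and not part of the official instructions):

```python
# Illustrative installation check; not part of the official instructions.
import torch
import flexgen.flex_opt  # importing the main module verifies the install

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```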
## Usage and Examples

### Get Started with a Single GPU
#### OPT-1.3B
To get started, you can try a small model like OPT-1.3B first. It fits in a single GPU, so no offloading is required.
FlexGen will automatically download weights from Hugging Face.
```
python3 -m flexgen.flex_opt --model facebook/opt-1.3b
```

You should see some text generated by OPT-1.3B and the benchmark results.
#### OPT-30B
To run large models like OPT-30B, you will need to use CPU offloading. You can try the commands below.
The `--percent` argument specifies the offloading strategy for parameters, attention cache and hidden states separately.
The exact meaning of this argument can be found [here](https://github.com/FMInference/FlexGen/blob/9d092d848f106cd9eaf305c12ef3590f7bcb0277/flexgen/flex_opt.py#L1271-L1279).
```
python3 -m flexgen.flex_opt --model facebook/opt-30b --percent 0 100 100 0 100 0
```
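
As a rough guide to reading the six numbers (the linked help text above is authoritative), the sketch below spells out the placement implied by the OPT-30B command; the field names are paraphrased for illustration, not part of the FlexGen API:

```python
# Illustrative reading of "--percent 0 100 100 0 100 0" from the OPT-30B
# command above; field names paraphrase the flex_opt.py help text.
percent = [0, 100, 100, 0, 100, 0]
fields = [
    "weights on GPU (%)",
    "weights on CPU (%)",
    "attention cache on GPU (%)",
    "attention cache on CPU (%)",
    "activations on GPU (%)",
    "activations on CPU (%)",
]
for field, value in zip(fields, percent):
    print(f"{field:28s} {value:3d}")
# Whatever is not placed on GPU or CPU spills over to disk.
```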
#### OPT-175B
To run OPT-175B, you need to download the weights from [metaseq](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT) and convert the weights into Alpa [format](https://alpa.ai/tutorials/opt_serving.html#convert-opt-175b-weights-into-alpa-formats).
You can then try offloading all weights to disk with:
```
python3 -m flexgen.flex_opt --model facebook/opt-175b --percent 0 0 100 0 100 0 --offload-dir YOUR_SSD_FOLDER
```

### Run HELM Benchmark with FlexGen
FlexGen can be integrated into [HELM](https://crfm.stanford.edu/helm), a language model benchmark framework, as its execution backend.
You can use the commands below to run a Massive Multitask Language Understanding (MMLU) [scenario](https://crfm.stanford.edu/helm/latest/?group=mmlu) with a single T4 (16GB) GPU and 200GB of DRAM.
```
pip install crfm-helm
python3 -m flexgen.apps.helm_run --description mmlu:model=text,subject=abstract_algebra,data_augmentation=canonical --pad-to-seq-len 512 --model facebook/opt-30b --percent 20 80 0 100 0 100 --gpu-batch-size 48 --num-gpu-batches 3 --max-eval-instance 100
```
Note that only a subset of HELM scenarios is tested. See more tested scenarios [here](flexgen/apps/helm_passed_30b.sh).

### Run Data Wrangling Tasks with FlexGen
You can run the examples in this paper, ['Can Foundation Models Wrangle Your Data?'](https://arxiv.org/abs/2205.09911), by following the instructions [here](flexgen/apps/data_wrangle).

### Scaling to Distributed GPUs
If you have multiple machines with GPUs, FlexGen can combine offloading with pipeline parallelism to allow scaling.
For example, if you have 2 GPUs but their aggregated memory is less than the model size, you still need offloading; FlexGen allows you to use pipeline parallelism across these 2 GPUs to accelerate generation.
To get scaled performance, however, the GPUs should be on distributed machines.
See examples [here](https://github.com/FMInference/FlexGen/tree/main/benchmark/flexgen#distributed-gpus).

### API Example
We demonstrate the usage of the FlexGen API in [completion.py](flexgen/apps/completion.py).
This example shows how to run generation for two sentences.
To get the best throughput out of FlexGen, you typically need to batch more sentences.

#### Generation API
FlexGen has a generation API following the style of Hugging Face's transformers.
```python
output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=32,
    stop=stop)
```
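
The snippet above assumes `model`, `input_ids`, and `stop` already exist. Below is a minimal sketch of how they might be prepared, assuming `model` is a FlexGen model constructed as in [completion.py](flexgen/apps/completion.py); the tokenizer usage, prompts, padding length, and stop token are illustrative choices:

```python
from transformers import AutoTokenizer

# `model` is assumed to be a FlexGen model built as in flexgen/apps/completion.py.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", padding_side="left")

prompts = [
    "Question: Where were the 2004 Olympics held?\nAnswer:",
    "Question: What is the longest river on the earth?\nAnswer:",
]

# Left-pad the prompts to a common length so they run as one batch.
input_ids = tokenizer(prompts, padding="max_length", max_length=128).input_ids

# Token id at which generation stops (illustrative choice).
stop = tokenizer.eos_token_id

output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=32,
    stop=stop)

print("\n\n".join(tokenizer.batch_decode(output_ids, skip_special_tokens=True)))
```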
#### Example Commands
You can use the example commands below.
If you do not have enough GPU/CPU memory, see the [Handle Out-Of-Memory](#handle-out-of-memory) section.

```
# Complete with OPT-6.7B. You need at least 15GB of GPU memory.
python3 -m flexgen.apps.completion --model facebook/opt-6.7b
```

```
# Complete with OPT-30B. You need about 90GB of CPU memory.
python3 -m flexgen.apps.completion --model facebook/opt-30b --percent 0 100 100 0 100 0
```

```
# Complete with instruction-tuned OPT-IML-MAX-30B. You need about 90GB of CPU memory.
python3 -m flexgen.apps.completion --model facebook/opt-iml-max-30b --percent 0 100 100 0 100 0
```

### Frequently Asked Questions
#### How to set the offloading strategy and `--percent`?
We will release an automatic policy optimizer later, but for now you have to try a few strategies manually.
The idea of high-throughput generation is to offload parameters and attention cache as much as possible to the CPU and disk if necessary.
You can see the reference strategies in our benchmark [here](https://github.com/FMInference/FlexGen/blob/9d092d848f106cd9eaf305c12ef3590f7bcb0277/benchmark/flexgen/bench_suite.py#L39-L79).
To avoid running out of memory, you can tune `--percent` to offload more tensors to the CPU and disk.

#### How to handle out-of-memory?
If you do not have enough GPU/CPU memory, here are a few things you can try.
They save more memory but run slower.

- Do not pin weights by adding `--pin-weight 0`. This can reduce the weight memory usage on CPU by around 20% or more.
- Enable weight compression by adding `--compress-weight`. This can reduce the weight memory usage by around 70%.
- Offload all weights to disk by using `--percent 0 0 100 0 100 0`. This requires very little CPU and GPU memory.

## Performance Results
### Generation Throughput (token/s)
The corresponding effective batch sizes and lowest offloading devices are in parentheses. Please see [here](benchmark/batch_size_table.md) for more details.
| System | OPT-6.7B | OPT-30B | OPT-175B |
| ------ | -------- | ------- | -------- |
| Hugging Face Accelerate | 25.12 (2 on GPU) | 0.62 (8 on CPU) | 0.01 (2 on disk) |
| DeepSpeed ZeRO-Inference | 9.28 (16 on CPU) | 0.60 (4 on CPU) | 0.01 (1 on disk) |
| Petals | 8.25 (2 on GPU) | 2.84 (2 on GPU) | 0.08 (2 on GPU) |
| FlexGen | 25.26 (2 on GPU) | 7.32 (144 on CPU) | 0.69 (256 on disk) |
| FlexGen with Compression | **29.12** (72 on GPU) | **8.38** (512 on CPU) | **1.12** (144 on CPU) |

- Hardware: an NVIDIA T4 (16GB) instance on GCP with 208GB of DRAM and 1.5TB of SSD.
- Workload: input sequence length = 512, output sequence length = 32. The batch size is tuned to **a large value** that maximizes the generation throughput for each system.
- Metric: generation throughput (token/s) = number of generated tokens / (time for processing prompts + time for generation).

How to [reproduce](benchmark/flexgen).
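
To make the metric concrete, here is a toy calculation (all numbers are made up for illustration and are not taken from the table above):

```python
# Toy throughput calculation following the metric definition above;
# all numbers are made up for illustration.
effective_batch_size = 64   # sequences generated together
output_len = 32             # generated tokens per sequence
prompt_time_s = 120.0       # time for processing prompts (seconds)
generation_time_s = 136.0   # time for generation (seconds)

generated_tokens = effective_batch_size * output_len                 # 2048 tokens
throughput = generated_tokens / (prompt_time_s + generation_time_s)
print(f"{throughput:.2f} token/s")                                   # 8.00 token/s
```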
### Latency-Throughput Trade-Off
The figure below shows the latency and throughput trade-off of three offloading-based systems on OPT-175B (left) and OPT-30B (right).
FlexGen achieves a new Pareto-optimal frontier with significantly higher maximum throughput for both models.
Other systems cannot increase throughput further because they run out of memory.
"FlexGen(c)" is FlexGen with compression.## How It Works
FlexGen can be flexibly configured under various hardware resource constraints by aggregating memory and computation from the GPU, CPU, and disk. Through a linear programming optimizer, it searches for the best pattern to store and access the tensors, including weights, activations, and attention key/value (KV) cache. FlexGen further compresses both weights and KV cache to 4 bits with negligible accuracy loss.

One key idea of FlexGen is to play the latency-throughput trade-off. Achieving low latency is inherently challenging for offloading methods,
but the I/O efficiency of offloading can be greatly boosted for throughput-oriented scenarios (see the figure above).
FlexGen utilizes a block schedule to reuse weights and overlap I/O with computation, as shown in figure (b) below, while other baseline systems use an inefficient row-by-row schedule, as shown in figure (a) below.

For more technical details, see our [paper](https://arxiv.org/abs/2303.06865).
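
The contrast between the two schedules can be sketched as loop orders (a conceptual illustration under simplifying assumptions, not FlexGen's actual scheduler):

```python
# Conceptual sketch of the two schedules described above; this only
# illustrates the loop orders, not FlexGen's actual scheduler.

def row_by_row_schedule(num_layers, num_batches):
    """Baseline: each batch runs end to end, so a layer's weights are
    (re)loaded once per batch -- num_layers * num_batches weight loads."""
    for b in range(num_batches):
        for l in range(num_layers):
            yield f"load weights of layer {l}"
            yield f"compute layer {l} on batch {b}"

def block_schedule(num_layers, num_batches):
    """Block schedule: load a layer's weights once and reuse them across the
    whole block of batches (num_layers weight loads), which also lets the
    next layer's load overlap with the current layer's compute."""
    for l in range(num_layers):
        yield f"load weights of layer {l}"
        for b in range(num_batches):
            yield f"compute layer {l} on batch {b}"

if __name__ == "__main__":
    print(len(list(row_by_row_schedule(num_layers=4, num_batches=8))))  # 64 steps
    print(len(list(block_schedule(num_layers=4, num_batches=8))))       # 36 steps
```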
## Roadmap
We plan to work on the following features.

- [ ] Optimize the performance for multiple GPUs on the same machine
- [ ] Support more models (BLOOM, CodeGen, GLM)
- [X] Release the cost model and policy optimizer
- [ ] MacBook Support (M1 and M2)
- [ ] AMD Support