Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
- Host: GitHub
- URL: https://github.com/vllm-project/vllm
- Owner: vllm-project
- License: apache-2.0
- Created: 2023-02-09T11:23:20.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-03-23T00:56:34.000Z (3 months ago)
- Last Synced: 2024-03-23T01:17:11.541Z (3 months ago)
- Topics: amd, cuda, gpt, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, rocm, trainium, transformer
- Language: Python
- Homepage: https://docs.vllm.ai
- Size: 6.67 MB
- Stars: 16,467
- Watchers: 172
- Forks: 2,083
- Open Issues: 1,026
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Lists
- awesome-production-machine-learning - vLLM - vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs (Model Serving and Monitoring)
- Awesome-LLM-Compression - vLLM (Code)
- Awesome-LLM - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. (LLM Deployment)
- awesome-ml-python-packages - vLLM
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-llm-list - vLLM
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-LLM-resourses - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Inference)
- awesome-list - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-llmops - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Serving / Large Model Serving)
- awesome-repositories - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-huggingface - vLLM - A high-throughput and memory-efficient inference and serving engine for open-source large language models. It also supports an OpenAI-compatible server, which makes it possible to use the [OpenAI Python library](https://github.com/openai/openai-python) to interact with the model. (Software / vLLM)
- Awesome_Multimodel_LLM - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (Tools for deploying LLM)
- my-awesome - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-llm-deployment - vLLM - vLLM is a fast and easy-to-use library for LLM inference and serving. (Uncategorized / Uncategorized)
- awesome-genai - vLLM - Efficient LLM Serving
- awesome-AI-system - vLLM System (Efficient Memory Management for Large Language Model Serving with PagedAttention, SOSP'23)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-stars - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs | vllm-project | 21071 | (Python)
- AiTreasureBox - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Repos)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (pytorch)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- my-awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-llm-and-aigc - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. [vllm.readthedocs.io](https://vllm.readthedocs.io/en/latest/) (Summary)
- awesome-stars - vllm-project/vllm - `★21166` A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- awesome-stars - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- Awesome-LLMOps - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Large Model Serving)
- awesome-local-llms - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs | 20,855 | 2,882 | 1,145 | 358 | 27 | Apache License 2.0 | 0 days, 9 hrs, 13 mins | (Open-Source Local LLM Projects)
- awesome-stars - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs | vllm-project | 21385 | (Python)
- Awesome-LLM-Productization - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (Models and Tools / LLM Deployment)
- awesome-cuda-tensorrt-fpga - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. [vllm.readthedocs.io](https://vllm.readthedocs.io/en/latest/) (Frameworks)
- awesome-local-ai - vLLM - vLLM is a fast and easy-to-use library for LLM inference and serving. | GGML/GGUF | Both | ❌ | Python | Text-Gen | (Inference Engine)
- Awesome-LLM - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. (LLM Deployment)
- StarryDivineSky - vllm-project/vllm
- awesome-stars - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs | vllm-project | 20716 | (Python)
README
Easy, fast, and cheap LLM serving for everyone
| Documentation | Blog | Paper | Discord |

---
**The Third vLLM Bay Area Meetup (April 2nd 6pm-8:30pm PT)**
We are thrilled to announce our third vLLM Meetup!
The vLLM team will share recent updates and roadmap.
We will also have vLLM collaborators from Roblox coming up to the stage to discuss their experience in deploying LLMs with vLLM.
Please register [here](https://robloxandvllmmeetup2024.splashthat.com/) and join us!

---
*Latest News* 🔥
- [2024/01] We hosted [the second vLLM meetup](https://lu.ma/ygxbpzhl) in SF! Please find the meetup slides [here](https://docs.google.com/presentation/d/12mI2sKABnUw5RBWXDYY-HtHth4iMSNcEoQ10jDQbxgA/edit?usp=sharing).
- [2024/01] Added ROCm 6.0 support to vLLM.
- [2023/12] Added ROCm 5.7 support to vLLM.
- [2023/10] We hosted [the first vLLM meetup](https://lu.ma/first-vllm-meetup) in SF! Please find the meetup slides [here](https://docs.google.com/presentation/d/1QL-XPFXiFpDBh86DbEegFXBXFXjix4v032GhShbKf3s/edit?usp=sharing).
- [2023/09] We created our [Discord server](https://discord.gg/jz7wjKhh6g)! Join us to discuss vLLM and LLM serving! We will also post the latest announcements and updates there.
- [2023/09] We released our [PagedAttention paper](https://arxiv.org/abs/2309.06180) on arXiv!
- [2023/08] We would like to express our sincere gratitude to [Andreessen Horowitz](https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/) (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/07] Added support for LLaMA-2! You can run and serve 7B/13B/70B LLaMA-2s on vLLM with a single command!
- [2023/06] Serving vLLM On any Cloud with SkyPilot. Check out a 1-click [example](https://github.com/skypilot-org/skypilot/blob/master/llm/vllm) to start the vLLM demo, and the [blog post](https://blog.skypilot.co/serving-llm-24x-faster-on-the-cloud-with-vllm-and-skypilot/) for the story behind vLLM development on the clouds.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post](https://vllm.ai).

---
## About
vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with **PagedAttention**
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantization: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), [SqueezeLLM](https://arxiv.org/abs/2306.07629), FP8 KV Cache
- Optimized CUDA kernels
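As a hedged sketch of how the quantization options above are selected in practice, the Python `LLM` entry point exposes a `quantization` argument. The checkpoint name below is an assumed example of a community AWQ model, not one this README names:

```python
# Minimal sketch (not copied from this README) of loading a quantized model.
from vllm import LLM

llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-AWQ",  # assumed example of an AWQ checkpoint
    quantization="awq",                    # "gptq" and "squeezellm" are the other formats listed above
)
```
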
vLLM is flexible and easy to use with:

- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
- Tensor parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
- Support for NVIDIA GPUs and AMD GPUs
- (Experimental) Prefix caching support
- (Experimental) Multi-LoRA support
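The OpenAI-compatible API server listed above runs as a separate process (e.g. `python -m vllm.entrypoints.openai.api_server --model <model>`) and can then be queried with the standard OpenAI Python client. A minimal sketch, assuming a locally served model; the model name and port are illustrative:

```python
# Minimal sketch (not from this README) of querying vLLM's OpenAI-compatible
# server with the official OpenAI Python client (openai>=1.0).
# Assumes the server was started separately, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.1
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default local endpoint; adjust if you changed the port
    api_key="EMPTY",                      # vLLM does not check the key by default
)
completion = client.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",  # must match the served model name
    prompt="San Francisco is a",
    max_tokens=32,
    temperature=0.7,
)
print(completion.choices[0].text)
```
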
vLLM seamlessly supports many Hugging Face models, including the following architectures:

- Aquila & Aquila2 (`BAAI/AquilaChat2-7B`, `BAAI/AquilaChat2-34B`, `BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.)
- Baichuan & Baichuan2 (`baichuan-inc/Baichuan2-13B-Chat`, `baichuan-inc/Baichuan-7B`, etc.)
- BLOOM (`bigscience/bloom`, `bigscience/bloomz`, etc.)
- ChatGLM (`THUDM/chatglm2-6b`, `THUDM/chatglm3-6b`, etc.)
- DeciLM (`Deci/DeciLM-7B`, `Deci/DeciLM-7B-instruct`, etc.)
- Falcon (`tiiuae/falcon-7b`, `tiiuae/falcon-40b`, `tiiuae/falcon-rw-7b`, etc.)
- Gemma (`google/gemma-2b`, `google/gemma-7b`, etc.)
- GPT-2 (`gpt2`, `gpt2-xl`, etc.)
- GPT BigCode (`bigcode/starcoder`, `bigcode/gpt_bigcode-santacoder`, etc.)
- GPT-J (`EleutherAI/gpt-j-6b`, `nomic-ai/gpt4all-j`, etc.)
- GPT-NeoX (`EleutherAI/gpt-neox-20b`, `databricks/dolly-v2-12b`, `stabilityai/stablelm-tuned-alpha-7b`, etc.)
- InternLM (`internlm/internlm-7b`, `internlm/internlm-chat-7b`, etc.)
- InternLM2 (`internlm/internlm2-7b`, `internlm/internlm2-chat-7b`, etc.)
- Jais (`core42/jais-13b`, `core42/jais-13b-chat`, `core42/jais-30b-v3`, `core42/jais-30b-chat-v3`, etc.)
- LLaMA & LLaMA-2 (`meta-llama/Llama-2-70b-hf`, `lmsys/vicuna-13b-v1.3`, `young-geng/koala`, `openlm-research/open_llama_13b`, etc.)
- Mistral (`mistralai/Mistral-7B-v0.1`, `mistralai/Mistral-7B-Instruct-v0.1`, etc.)
- Mixtral (`mistralai/Mixtral-8x7B-v0.1`, `mistralai/Mixtral-8x7B-Instruct-v0.1`, etc.)
- MPT (`mosaicml/mpt-7b`, `mosaicml/mpt-30b`, etc.)
- OLMo (`allenai/OLMo-1B`, `allenai/OLMo-7B`, etc.)
- OPT (`facebook/opt-66b`, `facebook/opt-iml-max-30b`, etc.)
- Orion (`OrionStarAI/Orion-14B-Base`, `OrionStarAI/Orion-14B-Chat`, etc.)
- Phi (`microsoft/phi-1_5`, `microsoft/phi-2`, etc.)
- Qwen (`Qwen/Qwen-7B`, `Qwen/Qwen-7B-Chat`, etc.)
- Qwen2 (`Qwen/Qwen2-7B-beta`, `Qwen/Qwen-7B-Chat-beta`, etc.)
- StableLM (`stabilityai/stablelm-3b-4e1t`, `stabilityai/stablelm-base-alpha-7b-v2`, etc.)
- Starcoder2 (`bigcode/starcoder2-3b`, `bigcode/starcoder2-7b`, `bigcode/starcoder2-15b`, etc.)
- Yi (`01-ai/Yi-6B`, `01-ai/Yi-34B`, etc.)

Install vLLM with pip or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):
```bash
pip install vllm
```
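As a quick sanity check after installation, offline batch inference goes through the `LLM` class and `SamplingParams`. A minimal sketch, assuming a small illustrative checkpoint (`facebook/opt-125m`); `tensor_parallel_size` is the knob for the distributed inference mentioned above:

```python
# Minimal offline-inference sketch (illustrative, not copied from the docs).
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# "facebook/opt-125m" is just a small example checkpoint; any supported model
# from the list above works. Set tensor_parallel_size > 1 to shard the model
# across multiple GPUs.
llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)

for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```
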
## Getting Started

Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to get started.
- [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
- [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html)
- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)

## Contributing
We welcome and value any contributions and collaborations.
Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.

## Citation
If you use vLLM for your research, please cite our [paper](https://arxiv.org/abs/2309.06180):
```bibtex
@inproceedings{kwon2023efficient,
title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
year={2023}
}
```