Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
- Host: GitHub
- URL: https://github.com/vllm-project/vllm
- Owner: vllm-project
- License: apache-2.0
- Created: 2023-02-09T11:23:20.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-12-10T17:38:16.000Z (11 days ago)
- Last Synced: 2024-12-10T17:51:34.662Z (11 days ago)
- Topics: amd, cuda, gpt, hpu, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, rocm, tpu, trainium, transformer, xpu
- Language: Python
- Homepage: https://docs.vllm.ai
- Size: 25.2 MB
- Stars: 31,635
- Watchers: 254
- Forks: 4,809
- Open Issues: 1,660
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- Funding: .github/FUNDING.yml
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: .github/CODEOWNERS
- Security: SECURITY.md
Awesome Lists containing this project
- awesome-local-llms - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Open-Source Local LLM Projects)
- Awesome-LLM-Productization - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (Models and Tools / LLM Deployment)
- awesome-local-ai - vLLM - vLLM is a fast and easy-to-use library for LLM inference and serving. (Inference Engine)
- awesome-genai - vllm - Efficient LLM Serving
- Awesome_Multimodel_LLM - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (Tools for deploying LLM)
- awesome-llm - vLLM - A high-throughput, low-memory-consumption LLM inference and serving engine. (LLM Deployment / LLM Evaluation Tools)
- awesome-llm-list - vLLM
- awesome-ml-python-packages - vLLM
- awesome-llmops - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Serving / Large Model Serving)
- awesome-repositories - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- StarryDivineSky - vllm-project/vllm
- awesome-llm-projects - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. (Projects / 🤯 LLMs Inference and Serving)
- Awesome-LLM - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. (LLM Deployment)
- Awesome-LLM-Compression - [Code
- awesome-production-machine-learning - vLLM - vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. (Deployment and Serving)
- awesome-LLM-resourses - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Inference)
- awesome-llm-and-aigc - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. [docs.vllm.ai](https://docs.vllm.ai/) (Summary)
- awesome-cuda-triton-hpc - vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs. [docs.vllm.ai](https://docs.vllm.ai/) (Frameworks)
- awesome-cuda-triton-hpc - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. [vllm.readthedocs.io](https://vllm.readthedocs.io/en/latest/) (Frameworks)
- awesome - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Python)
- Awesome-LLMs-on-device - [Github
- alan_awesome_llm - vllm - A high-throughput and memory-efficient inference and serving engine for LLMs. (Inference)
- AiTreasureBox - vllm-project/vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (Repos)
README
Easy, fast, and cheap LLM serving for everyone
| Documentation | Blog | Paper | Discord | Twitter/X | Developer Slack |

---
*Latest News* 🔥
- [2024/12] vLLM joins [pytorch ecosystem](https://pytorch.org/blog/vllm-joins-pytorch)! Easy, Fast, and Cheap LLM Serving for Everyone!
- [2024/11] We hosted [the seventh vLLM meetup](https://lu.ma/h0qvrajz) with Snowflake! Please find the meetup slides from vLLM team [here](https://docs.google.com/presentation/d/1e3CxQBV3JsfGp30SwyvS3eM_tW-ghOhJ9PAJGK6KR54/edit?usp=sharing), and Snowflake team [here](https://docs.google.com/presentation/d/1qF3RkDAbOULwz9WK5TOltt2fE9t6uIc_hVNLFAaQX6A/edit?usp=sharing).
- [2024/10] We have just created a developer slack ([slack.vllm.ai](https://slack.vllm.ai)) focusing on coordinating contributions and discussing features. Please feel free to join us there!
- [2024/10] Ray Summit 2024 held a special track for vLLM! Please find the opening talk slides from the vLLM team [here](https://docs.google.com/presentation/d/1B_KQxpHBTRa_mDF-tR6i8rWdOU5QoTZNcEg2MKZxEHM/edit?usp=sharing). Learn more from the [talks](https://www.youtube.com/playlist?list=PLzTswPQNepXl6AQwifuwUImLPFRVpksjR) from other vLLM contributors and users!
- [2024/09] We hosted [the sixth vLLM meetup](https://lu.ma/87q3nvnh) with NVIDIA! Please find the meetup slides [here](https://docs.google.com/presentation/d/1wrLGwytQfaOTd5wCGSPNhoaW3nq0E-9wqyP7ny93xRs/edit?usp=sharing).
- [2024/07] We hosted [the fifth vLLM meetup](https://lu.ma/lp0gyjqr) with AWS! Please find the meetup slides [here](https://docs.google.com/presentation/d/1RgUD8aCfcHocghoP3zmXzck9vX3RCI9yfUAB2Bbcl4Y/edit?usp=sharing).
- [2024/07] In partnership with Meta, vLLM officially supports Llama 3.1 with FP8 quantization and pipeline parallelism! Please check out our blog post [here](https://blog.vllm.ai/2024/07/23/llama31.html).
- [2024/06] We hosted [the fourth vLLM meetup](https://lu.ma/agivllm) with Cloudflare and BentoML! Please find the meetup slides [here](https://docs.google.com/presentation/d/1iJ8o7V2bQEi0BFEljLTwc5G1S10_Rhv3beed5oB0NJ4/edit?usp=sharing).
- [2024/04] We hosted [the third vLLM meetup](https://robloxandvllmmeetup2024.splashthat.com/) with Roblox! Please find the meetup slides [here](https://docs.google.com/presentation/d/1A--47JAK4BJ39t954HyTkvtfwn0fkqtsL8NGFuslReM/edit?usp=sharing).
- [2024/01] We hosted [the second vLLM meetup](https://lu.ma/ygxbpzhl) with IBM! Please find the meetup slides [here](https://docs.google.com/presentation/d/12mI2sKABnUw5RBWXDYY-HtHth4iMSNcEoQ10jDQbxgA/edit?usp=sharing).
- [2023/10] We hosted [the first vLLM meetup](https://lu.ma/first-vllm-meetup) with a16z! Please find the meetup slides [here](https://docs.google.com/presentation/d/1QL-XPFXiFpDBh86DbEegFXBXFXjix4v032GhShbKf3s/edit?usp=sharing).
- [2023/08] We would like to express our sincere gratitude to [Andreessen Horowitz](https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/) (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post](https://vllm.ai).

---
## About
vLLM is a fast and easy-to-use library for LLM inference and serving.

vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with **PagedAttention**
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantizations: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), INT4, INT8, and FP8.
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer.
- Speculative decoding
- Chunked prefill

**Performance benchmark**: We include a performance benchmark at the end of [our blog post](https://blog.vllm.ai/2024/09/05/perf-update.html). It compares the performance of vLLM against other LLM serving engines ([TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), [SGLang](https://github.com/sgl-project/sglang), and [LMDeploy](https://github.com/InternLM/lmdeploy)). The implementation is in the [nightly-benchmarks folder](.buildkite/nightly-benchmarks/), and you can [reproduce](https://github.com/vllm-project/vllm/issues/8176) this benchmark with our one-click runnable script.
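Several of the options above, such as quantization and chunked prefill, are exposed as engine arguments on vLLM's `LLM` constructor. The snippet below is a hedged configuration sketch only: the AWQ checkpoint is just an illustrative example, and argument names should be checked against the documentation for the installed vLLM version.

```python
from vllm import LLM

# Hedged configuration sketch: the checkpoint is an example of a pre-quantized
# AWQ model; any supported model/quantization combination can be used instead.
llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-AWQ",  # example AWQ-quantized checkpoint
    quantization="awq",                    # select the quantization backend
    enable_chunked_prefill=True,           # turn on chunked prefill
    gpu_memory_utilization=0.90,           # fraction of GPU memory reserved for the engine
)
```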
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
- Tensor parallelism and pipeline parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server (a client sketch follows at the end of this section)
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPUs, and AWS Neuron
- Prefix caching support
- Multi-LoRA support

vLLM seamlessly supports most popular open-source models on Hugging Face, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral)
- Embedding Models (e.g. E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models [here](https://docs.vllm.ai/en/latest/models/supported_models.html).
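As a concrete illustration of the OpenAI-compatible API server listed above, the sketch below queries a locally running vLLM server with the official `openai` Python client. This is a hedged example: the model name and the default port are assumptions, and the server is presumed to have been started separately (e.g. with `vllm serve <model>` as described in the docs).

```python
# Hedged client-side sketch: assumes an OpenAI-compatible vLLM server is already
# running locally (e.g. started with `vllm serve Qwen/Qwen2.5-1.5B-Instruct`).
# The model name and the default port 8000 are assumptions, not fixed requirements.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # no key is required unless the server enforces one
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Summarize what PagedAttention does in one sentence."}],
)
print(response.choices[0].message.content)
```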
## Getting Started
Install vLLM with `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source):
```bash
pip install vllm
```
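For a first test after installation, vLLM can also be used directly from Python for offline batched inference. The following is a minimal sketch based on the public `LLM`/`SamplingParams` API; the model name is only an example and is downloaded from Hugging Face on first use.

```python
# Minimal offline-inference sketch; the model below is only an example and
# will be downloaded from Hugging Face the first time it is used.
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each output carries the original prompt and one or more generated completions.
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
```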
Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to learn more.
- [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html)
- [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html)
- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html)

## Contributing
We welcome and value any contributions and collaborations.
Please check out [CONTRIBUTING.md](./CONTRIBUTING.md) for how to get involved.

## Sponsors
vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!
- a16z
- AMD
- Anyscale
- AWS
- Crusoe Cloud
- Databricks
- DeepInfra
- Dropbox
- Google Cloud
- Lambda Lab
- Nebius
- NVIDIA
- Replicate
- Roblox
- RunPod
- Sequoia Capital
- Skywork AI
- Trainy
- UC Berkeley
- UC San Diego
- ZhenFund

We also have an official fundraising venue through [OpenCollective](https://opencollective.com/vllm). We plan to use the fund to support the development, maintenance, and adoption of vLLM.
## Citation
If you use vLLM for your research, please cite our [paper](https://arxiv.org/abs/2309.06180):
```bibtex
@inproceedings{kwon2023efficient,
title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
year={2023}
}
```

## Contact Us
* For technical questions and feature requests, please use GitHub issues or discussions.
* For discussing with fellow users, please use Discord.
* For coordinating contributions and development, please use Slack.
* For security disclosures, please use GitHub's security advisory feature.
* For collaborations and partnerships, please contact us at vllm-questions AT lists.berkeley.edu.

## Media Kit
* If you wish to use vLLM's logo, please refer to [our media kit repo](https://github.com/vllm-project/media-kit).