# LitServe

**The easiest way to deploy agents, MCP servers, RAG, pipelines, any model. No MLOps. No YAML.**

Most serving engines serve one model with rigid abstractions. LitServe lets you serve any model (vision, audio, text) and build full AI systems - agents, chatbots, MCP servers, RAG, pipelines - with full control, batching, multi-GPU, streaming, custom logic, multi-model support, and zero YAML.

Self-host, or deploy in one click to [Lightning AI](https://lightning.ai/).

 




✅ Build full AI systems ✅ 2× faster than FastAPI ✅ Agents, RAG, pipelines, more
✅ Custom logic + control ✅ Any PyTorch model ✅ Self-host or managed
✅ Multi-GPU autoscaling ✅ Batching + streaming ✅ BYO model or vLLM
✅ No MLOps glue code ✅ Easy setup in Python ✅ Serverless support

[![PyPI Downloads](https://static.pepy.tech/badge/litserve)](https://pepy.tech/projects/litserve)
[![Discord](https://img.shields.io/discord/1077906959069626439?label=Get%20help%20on%20Discord)](https://discord.gg/WajDThKAur)
![cpu-tests](https://github.com/Lightning-AI/litserve/actions/workflows/ci-testing.yml/badge.svg)
[![codecov](https://codecov.io/gh/Lightning-AI/litserve/graph/badge.svg?token=SmzX8mnKlA)](https://codecov.io/gh/Lightning-AI/litserve)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/litserve/blob/main/LICENSE)





[Quick start](#quick-start) • [Examples](#featured-examples) • [Features](#features) • [Performance](#performance) • [Hosting](#host-anywhere) • [Docs](https://lightning.ai/docs/litserve)

# Quick start

Install LitServe via pip ([more options](https://lightning.ai/docs/litserve/home/install)):

```bash
pip install litserve
```

- [Example 1](#inference-pipeline-example): Toy inference pipeline with multiple models.
- [Example 2](#agent-example): Minimal agent that fetches the news (with the OpenAI API).
- [Advanced examples](#featured-examples)

### Inference pipeline example

```python
import litserve as ls

# define the api to include any number of models, dbs, etc...
class InferencePipeline(ls.LitAPI):
    def setup(self, device):
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def predict(self, request):
        x = request["input"]
        # perform calculations using both models
        a = self.model1(x)
        b = self.model2(x)
        c = a + b
        return {"output": c}

if __name__ == "__main__":
    # 12+ features like batching, streaming, etc...
    server = ls.LitServer(InferencePipeline(max_batch_size=1), accelerator="auto")
    server.run(port=8000)
```

Deploy for free to [Lightning cloud](#host-anywhere) (or self-host anywhere):

```bash
# Deploy for free with autoscaling, monitoring, etc...
lightning deploy server.py --cloud

# Or run locally (self-host anywhere)
lightning deploy server.py
# or simply: python server.py
```

Test the server by simulating an HTTP request (run this in any terminal):
```bash
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
```
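
Or send the same request from Python. A minimal client sketch, assuming the server above is running locally on port 8000 and the `requests` package is installed:

```python
import requests

# Same request as the curl command above
response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
)
print(response.json())  # {"output": 80.0}, since 4**2 + 4**3 = 80
```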

### Agent example

```python
import re, requests, openai
import litserve as ls

class NewsAgent(ls.LitAPI):
    def setup(self, device):
        # replace the placeholder with a real key (or read it from an env var)
        self.openai_client = openai.OpenAI(api_key="OPENAI_API_KEY")

    def predict(self, request):
        website_url = request.get("website_url", "https://text.npr.org/")
        website_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)

        # ask the LLM to tell you about the news
        llm_response = self.openai_client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"Based on this, what is the latest: {website_text}"}],
        )
        output = llm_response.choices[0].message.content.strip()
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(NewsAgent())
    server.run(port=8000)
```
Test it:
```bash
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"website_url": "https://text.npr.org/"}'
```

 

# Key benefits

A few key benefits:

- **Deploy any pipeline or model**: Agents, pipelines, RAG, chatbots, image models, video, speech, text, etc...
- **No MLOps glue:** LitAPI lets you build full AI systems (multi-model, agent, RAG) in one place ([more](https://lightning.ai/docs/litserve/api-reference/litapi)).
- **Instant setup:** Connect models, DBs, and data in a few lines with `setup()` ([more](https://lightning.ai/docs/litserve/api-reference/litapi#setup)).
- **Optimized:** autoscaling, GPU support, and fast inference included ([more](https://lightning.ai/docs/litserve/api-reference/litserver)).
- **Deploy anywhere:** self-host or one-click deploy with Lightning ([more](https://lightning.ai/docs/litserve/features/deploy-on-cloud)).
- **FastAPI for AI:** Built on FastAPI but optimized for AI - 2× faster with AI-specific multi-worker handling ([more](#performance)).
- **Expert-friendly:** Use vLLM, or build your own with full control over batching, caching, and logic ([more](https://lightning.ai/lightning-ai/studios/deploy-a-private-llama-3-2-rag-api)); a minimal batching sketch follows the note below.

> ⚠️ Not a vLLM or Ollama alternative out of the box. LitServe gives you lower-level flexibility to build what they do (and more) if you need it.
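
For illustration, here is a minimal sketch of the quick-start pipeline with batching turned on. It assumes the `max_batch_size` argument from the quick-start example and that, with batching enabled, `predict` receives a list of decoded requests and returns one output per request; the exact hooks (`batch`/`unbatch`) and defaults are described in the batching docs.

```python
import litserve as ls

class BatchedPipeline(ls.LitAPI):
    def setup(self, device):
        self.model = lambda x: x ** 2

    def predict(self, requests):
        # With max_batch_size > 1, concurrent requests arrive here together;
        # return one output per request, in the same order.
        return [{"output": self.model(r["input"])} for r in requests]

if __name__ == "__main__":
    # Group up to 8 concurrent requests into a single predict call.
    server = ls.LitServer(BatchedPipeline(max_batch_size=8), accelerator="auto")
    server.run(port=8000)
```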

 

# Featured examples
Here are examples of inference pipelines for common model types and use cases.


- **Toy model:** Hello world
- **LLMs:** Llama 3.2, LLM proxy server, Agent with tool use
- **RAG:** vLLM RAG (Llama 3.2), RAG API (LlamaIndex)
- **NLP:** Hugging Face, BERT, Text embedding API
- **Multimodal:** OpenAI CLIP, MiniCPM, Phi-3.5 Vision Instruct, Qwen2-VL, Pixtral
- **Audio:** Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
- **Vision:** Stable Diffusion 2, AuraFlow, Flux, Image super-resolution (Aura SR), Background removal, Control Stable Diffusion (ControlNet)
- **Speech:** Text-to-speech (XTTS V2), Parler-TTS
- **Classical ML:** Random forest, XGBoost
- **Miscellaneous:** Media conversion API (ffmpeg), PyTorch + TensorFlow in one API, LLM proxy server

[Browse 100+ community-built templates](https://lightning.ai/studios?section=serving)

 

# Host anywhere

Self-host with full control, or deploy with [Lightning AI](https://lightning.ai/) in seconds with autoscaling, security, and 99.95% uptime.
**Free tier included. No setup required. Run on your cloud.**

```bash
lightning deploy server.py --cloud
```

https://github.com/user-attachments/assets/ff83dab9-0c9f-4453-8dcb-fb9526726344

 

# Features

| [Feature](https://lightning.ai/docs/litserve/features) | Self Managed | [Fully Managed on Lightning](https://lightning.ai/deploy) |
|----------------------------------------------------------------------|-----------------------------------|------------------------------------|
| Docker-first deployment | ✅ DIY | ✅ One-click deploy |
| Cost | ✅ Free (DIY) | ✅ Generous [free tier](https://lightning.ai/pricing) with pay as you go |
| Full control | ✅ | ✅ |
| Use any engine (vLLM, etc.) | ✅ | ✅ vLLM, Ollama, LitServe, etc. |
| Own VPC | ✅ (manual setup) | ✅ Connect your own VPC |
| [(2x)+ faster than plain FastAPI](#performance) | ✅ | ✅ |
| [Bring your own model](https://lightning.ai/docs/litserve/features/full-control) | ✅ | ✅ |
| [Build compound systems (1+ models)](https://lightning.ai/docs/litserve/home) | ✅ | ✅ |
| [GPU autoscaling](https://lightning.ai/docs/litserve/features/gpu-inference) | ✅ | ✅ |
| [Batching](https://lightning.ai/docs/litserve/features/batching) | ✅ | ✅ |
| [Streaming](https://lightning.ai/docs/litserve/features/streaming) | ✅ | ✅ |
| [Worker autoscaling](https://lightning.ai/docs/litserve/features/autoscaling) | ✅ | ✅ |
| [Serve all models: (LLMs, vision, etc.)](https://lightning.ai/docs/litserve/examples) | ✅ | ✅ |
| [Supports PyTorch, JAX, TF, etc...](https://lightning.ai/docs/litserve/features/full-control) | ✅ | ✅ |
| [OpenAPI compliant](https://www.openapis.org/) | ✅ | ✅ |
| [OpenAI compatibility](https://lightning.ai/docs/litserve/features/open-ai-spec) (sketch below) | ✅ | ✅ |
| [MCP server support](https://lightning.ai/docs/litserve/features/mcp) | ✅ | ✅ |
| [Asynchronous](https://lightning.ai/docs/litserve/features/async-concurrency) | ✅ | ✅ |
| [Authentication](https://lightning.ai/docs/litserve/features/authentication) | ❌ DIY | ✅ Token, password, custom |
| GPUs | ❌ DIY | ✅ 8+ GPU types, H100s from $1.75 |
| Load balancing | ❌ | ✅ Built-in |
| Scale to zero (serverless) | ❌ | ✅ No machine runs when idle |
| Autoscale up on demand | ❌ | ✅ Auto scale up/down |
| Multi-node inference | ❌ | ✅ Distribute across nodes |
| Use AWS/GCP credits | ❌ | ✅ Use existing cloud commits |
| Versioning | ❌ | ✅ Make and roll back releases |
| Enterprise-grade uptime (99.95%) | ❌ | ✅ SLA-backed |
| SOC2 / HIPAA compliance | ❌ | ✅ Certified & secure |
| Observability | ❌ | ✅ Built-in, connect 3rd party tools|
| CI/CD ready | ❌ | ✅ Lightning SDK |
| 24/7 enterprise support | ❌ | ✅ Dedicated support |
| Cost controls & audit logs | ❌ | ✅ Budgets, breakdowns, logs |
| Debug on GPUs | ❌ | ✅ Studio integration |
| [20+ features](https://lightning.ai/docs/litserve/features) | - | - |
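
The OpenAI-compatibility row above refers to LitServe's OpenAI spec. A minimal sketch of what that can look like, assuming `ls.OpenAISpec` is passed to the server and `predict` yields text chunks; check the linked feature docs for the exact request format handed to `predict`:

```python
import litserve as ls

class ChatAPI(ls.LitAPI):
    def setup(self, device):
        self.model = None  # load your chat model here

    def predict(self, messages):
        # With the OpenAI spec, yield generated text chunks;
        # LitServe wraps them in an OpenAI-style chat completion response.
        yield "Hello! This is a placeholder response."

if __name__ == "__main__":
    # Serves an OpenAI-compatible chat completions endpoint.
    server = ls.LitServer(ChatAPI(), spec=ls.OpenAISpec())
    server.run(port=8000)
```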

 

# Performance
LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum **2x speedup over FastAPI**.

Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.

Reproduce the full benchmarks [here](https://lightning.ai/docs/litserve/home/benchmarks) (higher is better).



These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization, etc.).

***💡 Note on LLM serving:*** For high-performance LLM serving (like Ollama/vLLM), integrate [vLLM with LitServe](https://lightning.ai/lightning-ai/studios/deploy-a-private-llama-3-2-rag-api), use [LitGPT](https://github.com/Lightning-AI/litgpt?tab=readme-ov-file#deploy-an-llm), or build your custom vLLM-like server with LitServe. Optimizations like kv-caching, which can be done with LitServe, are needed to maximize LLM performance.
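
As one illustration of that pattern, here is a minimal sketch of wrapping a vLLM engine inside a LitAPI. It assumes vLLM is installed, uses a placeholder model id, and leaves batching, streaming, and kv-cache tuning to the linked guides:

```python
import litserve as ls
from vllm import LLM, SamplingParams  # assumes vLLM is installed

class VLLMServer(ls.LitAPI):
    def setup(self, device):
        # vLLM manages GPU placement and the kv-cache internally.
        self.llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")  # placeholder model id
        self.sampling = SamplingParams(temperature=0.7, max_tokens=256)

    def predict(self, request):
        outputs = self.llm.generate([request["prompt"]], self.sampling)
        return {"output": outputs[0].outputs[0].text}

if __name__ == "__main__":
    server = ls.LitServer(VLLMServer(), accelerator="auto")
    server.run(port=8000)
```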

 

# Community
LitServe is a [community project accepting contributions](https://lightning.ai/docs/litserve/community). Let's make the world's most advanced AI inference engine together.

💬 [Get help on Discord](https://discord.com/invite/XncpTy7DSt)
📋 [License: Apache 2.0](https://github.com/Lightning-AI/litserve/blob/main/LICENSE)