https://github.com/lightning-ai/litai
The easiest way to use any AI model. Build chatbots, agents, and AI apps that just work - with auto-retries, fallback, and unified billing built in.
- Host: GitHub
- URL: https://github.com/lightning-ai/litai
- Owner: Lightning-AI
- License: apache-2.0
- Created: 2025-07-23T18:04:31.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2025-07-26T17:16:30.000Z (2 months ago)
- Last Synced: 2025-07-26T20:46:58.659Z (2 months ago)
- Topics: agents, ai, ai-gateway, api, chatbot, llm, llm-gateway, llm-inference, openai, openai-proxy
- Language: Python
- Homepage: https://lightning.ai/docs/litai/home
- Size: 87.9 KB
- Stars: 2
- Watchers: 0
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Codeowners: .github/CODEOWNERS
# README
Chat with any AI model with one line of Python.
Build agents, chatbots, and apps that just work with no downtime.
LitAI is the easiest way to chat with any model (ChatGPT, Anthropic, etc) in one line of Python. LitAI handles retries, fallback, billing, and logging - so you can build agents, chatbots, or apps without managing flaky APIs or writing wrapper code.
✅ Use any AI model (OpenAI, etc.) ✅ 20+ public models ✅ Bring your model API keys
✅ Unified usage dashboard ✅ No subscription ✅ Auto retries and fallback
✅ Deploy dedicated models on-prem ✅ Start instantly ✅ No MLOps glue code
Quick start • Features • Examples • Performance • FAQ • Docs
# Quick Start
Install LitAI via pip ([more options](https://lightning.ai/docs/litai/home/install)):
```bash
pip install litai
```
Add AI to any Python program in 3 lines:

```python
from litai import LLM

llm = LLM(model="openai/gpt-4")
answer = llm.chat("who are you?")
print(answer)  # I'm an AI by OpenAI
```

# Examples
What we ***love*** about LitAI is that you can build agents, chatbots, and apps in plain Python - no heavy frameworks or magic. Agents can be simple Python programs with a few decisions made by a model.

### Agent

Here's a simple agent that tells you the latest news:

```python
import re, requests
from litai import LLM

llm = LLM(model="openai/gpt-4o")

website_url = "https://text.npr.org/"
website_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)

response = llm.chat(f"Based on this, what is the latest: {website_text}")
print(response)
```

### Agentic if statement
We believe the best way to build agents is with normal Python programs and simple **“agentic if statements.”**
That way, 90% of the logic stays deterministic, and the model only steps in when needed. No complex abstractions, no framework magic - just code you can trust and debug.

```python
from litai import LLM

llm = LLM(model="openai/gpt-3.5-turbo")

product_review = "This TV has terrible picture quality and the sound cuts out constantly."
response = llm.chat(f"Is this review good or bad? Reply only with 'good' or 'bad': {product_review}").strip().lower()

if response == "good":
    print("good review")
else:
    print("bad review")
```
# Key features
Track usage and spending in your [Lightning AI](https://lightning.ai/) dashboard. Model calls are paid for with Lightning AI credits.
✅ No subscription ✅ 15 free credits (~37M tokens) ✅ Pay as you go for more credits
✅ [Use over 20+ models (ChatGPT, Claude, etc...)](https://lightning.ai/)
✅ [Monitor all usage in one place](https://lightning.ai/model-apis)
✅ [Async support](https://lightning.ai/docs/litai/features/async-litai/)
✅ [Auto retries on failure](https://lightning.ai/docs/litai/features/fallback-retry/)
✅ [Auto model switch on failure](https://lightning.ai/docs/litai/features/fallback-retry/)
✅ [Switch models](https://lightning.ai/docs/litai/features/models/)
✅ [Multi-turn conversation logs](https://lightning.ai/docs/litai/features/multi-turn-conversation/)
✅ [Streaming](https://lightning.ai/docs/litai/features/streaming/)
✅ Bring your own model (connect your API keys, coming soon...)
✅ Chat logs (coming soon...)
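To get a feel for how far the free tier goes, here is a rough back-of-the-envelope sketch based on the advertised ratio of 15 free credits to ~37M tokens. Actual per-model rates vary, so `TOKENS_PER_CREDIT` and `estimate_credits` are illustrative assumptions, not part of the LitAI API:

```python
# Rough estimate from the advertised ratio: 15 free credits ~= 37M tokens.
# Real per-model pricing differs; this is a ballpark only.
TOKENS_PER_CREDIT = 37_000_000 / 15  # roughly 2.47M tokens per credit

def estimate_credits(num_tokens: int) -> float:
    """Approximate Lightning AI credits consumed for a given token count."""
    return num_tokens / TOKENS_PER_CREDIT

# A 100k-token workload uses only a small fraction of one credit.
print(f"100k tokens ~ {estimate_credits(100_000):.4f} credits")
```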
# Advanced features
### Auto fallbacks and retries
Model APIs can flake or have outages. LitAI automatically retries on failure, and after repeated failures it can fall back to other models in case the provider is down.
```python
from litai import LLM

llm = LLM(
    model="openai/gpt-4",
    fallback_models=["google/gemini-2.5-flash", "anthropic/claude-3-5-sonnet-20240620"],
    max_retries=4,
)

print(llm.chat("What is a fun fact about space?"))
```

### Streaming
Real-time chat applications benefit from showing words as they are generated, which feels faster to the user. Streaming is the mechanism that makes this possible.

```python
from litai import LLM

llm = LLM(model="openai/gpt-4")
for chunk in llm.chat("hello", stream=True):
    print(chunk, end="", flush=True)
```

### Use your own client (like OpenAI)
If you already have your own SDK for calling LLMs (like the OpenAI SDK), you can still use LitAI via the `https://lightning.ai/api/v1` endpoint, which tracks usage, billing, etc.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://lightning.ai/api/v1",
    api_key="LIGHTNING_API_KEY",
)

completion = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "What is a fun fact about space?"
        }
    ]
)

print(completion.choices[0].message.content)
```

### Concurrency with async
Advanced Python programs that process multiple requests at once rely on "async" to do this. LitAI can work with async libraries without blocking calls. This is especially useful in high-throughput applications like chatbots, APIs, or agent loops.
To enable async behavior, set `enable_async=True` when initializing the `LLM` class. Then use `await llm.chat(...)` inside an `async` function.
```python
import asyncio
from litai import LLM

async def main():
    llm = LLM(model="openai/gpt-4", teamspace="lightning-ai/litai", enable_async=True)
    print(await llm.chat("who are you?"))

if __name__ == "__main__":
    asyncio.run(main())
```

### Multi-turn conversations
Models only see the message sent to them. To give a model memory of everything said so far, track related messages under the same conversation. This is useful for assistants, summarizers, or research tools that need multi-turn chat history.

Each conversation is identified by a unique name; LitAI stores conversation history separately for each name.
```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

# Continue a conversation across multiple turns
llm.chat("What is Lightning AI?", conversation="intro")
llm.chat("What can it do?", conversation="intro")

print(llm.get_history("intro"))  # View all messages from the 'intro' thread
llm.reset_conversation("intro")  # Clear conversation history
```

Create multiple named conversations for different tasks:
```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

llm.chat("Summarize this text", conversation="summarizer")
llm.chat("What's a RAG pipeline?", conversation="research")

print(llm.list_conversations())
```

### Switch models on each call
In certain applications you may want to call ChatGPT in one message and Anthropic in another so you can use the best model for each task.
LitAI lets you dynamically switch models at request time. Set a default model when initializing `LLM` and override it with the `model` parameter only when needed.
```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

# Uses the default model (openai/gpt-4)
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.

# Override the default model for this request
print(llm.chat("Who created you?", model="google/gemini-2.5-flash"))
# >> I am a large language model, trained by Google.

# Uses the default model again
print(llm.chat("Who created you?"))
# >> I am a large language model, trained by OpenAI.
```

### Multiple models, same conversation
One way to reduce chat costs with LitAI is to use different models within the same conversation. For example, use a cheap model to answer the first question and a more expensive model for something that requires more intelligence.

```python
from litai import LLM

llm = LLM(model="openai/gpt-4")

# Use a cheap model for this question
llm.chat("Is this a number or word: '5'", model="google/gemini-2.5-flash", conversation="story")

# Go back to the expensive model
llm.chat("Create a story about that number like Lord of the Rings", conversation="story")

print(llm.get_history("story"))  # View all messages from the 'story' thread
```
# Performance
LitAI does smart routing across a global network of servers, adding only about 25 ms of overhead per API call.
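You can sanity-check a latency figure like this yourself by timing repeated calls and comparing a routed call against a direct provider call. The sketch below is not part of LitAI; `median_latency_ms` is a hypothetical helper, and the lambda is a stand-in workload where you would substitute `llm.chat(...)`:

```python
import time

def median_latency_ms(call, runs=20):
    """Time repeated calls and return the median latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

# Stand-in workload; in practice, compare llm.chat(...) against a direct
# provider call to isolate the routing overhead.
print(f"median: {median_latency_ms(lambda: sum(range(1000))):.3f} ms")
```

Using the median rather than the mean keeps a single slow outlier (e.g. a cold connection) from skewing the estimate.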
# FAQ
Do I need a subscription to use LitAI? (Nope)
Nope. You can start instantly without a subscription. LitAI is pay-as-you-go and lets you use your own model API keys (like OpenAI, Anthropic, etc.).

Do I need an OpenAI account? (Nope)

Nope. You get access to all models and all model providers without a subscription.
What happens if a model API fails or goes down?
LitAI automatically retries the same model and can fall back to other models you specify. You’ll get the best chance of getting a response, even during outages.
Can I bring my own API keys for OpenAI, Anthropic, etc.? (Yes)
Yes. You can plug your own keys into any OpenAI-compatible API.
Can I connect private models? (Yes)
Yes. You can connect any endpoint that supports the OpenAI spec.
Can you deploy a dedicated, private model like Llama for me? (Yes)
Yes. We can deploy dedicated models on any cloud (Lambda, AWS, etc).
Can you deploy models on-prem? (Yes)
Yes. We can deploy on any dedicated VPC on the cloud or your own physical data center.
Do deployed models support Kubernetes? (Yes)
Yes. We can use the Lightning AI orchestrator custom built for AI or Kubernetes, whatever you want!
How do I pay for the model APIs?
Buy Lightning AI credits to pay for the model APIs.
Do you add fees?
At this moment we don't add fees on top of the API calls, but that might change in the future.
Are you SOC2 and HIPAA compliant? (Yes)
LitAI is built by Lightning AI, whose enterprise AI platform powers teams from Fortune 100 companies to startups. The platform is fully SOC2 and HIPAA compliant.
# Community
LitAI is a [community project accepting contributions](https://lightning.ai/docs/litai/community) - let's make the world's most advanced AI routing engine.

💬 [Get help on Discord](https://discord.com/invite/XncpTy7DSt)
📋 [License: Apache 2.0](https://github.com/Lightning-AI/litAI/blob/main/LICENSE)