https://github.com/yezz123/llmgateway-py-sdk
Python SDK for llmgateway
- Host: GitHub
- URL: https://github.com/yezz123/llmgateway-py-sdk
- Owner: yezz123
- License: MIT
- Created: 2025-06-09T23:45:02.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2025-06-10T01:24:56.000Z (4 months ago)
- Last Synced: 2025-06-10T02:26:17.524Z (4 months ago)
- Topics: client, gateway, llm, sdk-python
- Language: Python
- Homepage: https://llmgateway.io/
- Size: 97.7 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
# LLMGateway Python SDK

A Python SDK for interacting with the LLMGateway API.
---
| Project | Status |
| ------- | ------ |
| CI | [CI](https://github.com/yezz123/llmgateway-py-sdk/actions/workflows/ci.yaml) · [pre-commit](https://github.com/yezz123/llmgateway-py-sdk/actions/workflows/pre-commit.yaml) · [codecov](https://codecov.io/gh/yezz123/llmgateway-py-sdk) |
| Meta | [PyPI](https://pypi.org/project/llmgateway-sdk) · [Pydantic](https://pydantic.dev) · [Ruff](https://github.com/astral-sh/ruff) |

---
## Installation
```bash
# using pip
pip install llmgateway-sdk

# using uv
uv pip install llmgateway-sdk

# using poetry
poetry add llmgateway-sdk
```

## Usage
### Basic Usage
```python
from llmgateway import LLMGatewayClient, ChatCompletionRequest, Message

# Initialize the client
client = LLMGatewayClient(api_key="your-api-key")

# Create a chat completion request
request = ChatCompletionRequest(
    model="gpt-4",
    messages=[
        Message(role="user", content="Hello!")
    ],
)

# Get a completion
response = client.chat_completions(request)
print(response.message)

# List available models
models = client.list_models()
for model in models.data:
    print(f"Model: {model.name} (ID: {model.id})")
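# Tip: rather than hard-coding the key, you can read it from the
# environment. The variable name LLMGATEWAY_API_KEY below is only an
# illustrative convention, not something the SDK defines.
import os

def api_key_from_env(var="LLMGATEWAY_API_KEY", default=None):
    """Return the API key from the environment, or `default` if unset."""
    return os.environ.get(var, default)

# client = LLMGatewayClient(api_key=api_key_from_env())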
```

### Async Usage
```python
import asyncio

from llmgateway import LLMGatewayClient, ChatCompletionRequest, Message


async def main():
    client = LLMGatewayClient(api_key="your-api-key")

    request = ChatCompletionRequest(
        model="gpt-4",
        messages=[
            Message(role="user", content="Hello!")
        ],
    )

    # Get a completion asynchronously
    response = await client.achat_completions(request)
    print(response.message)

    # List models asynchronously
    models = await client.alist_models()
    for model in models.data:
        print(f"Model: {model.name} (ID: {model.id})")

    # Don't forget to close the client
    await client.aclose()


asyncio.run(main())
```

### Streaming Responses
```python
from llmgateway import LLMGatewayClient, ChatCompletionRequest, Message

client = LLMGatewayClient(api_key="your-api-key")

request = ChatCompletionRequest(
    model="gpt-4",
    messages=[
        Message(role="user", content="Tell me a story")
    ],
    stream=True,
)

# Iterate over the streamed chunks as they arrive
for response in client.chat_completions(request):
    print(response.message, end="", flush=True)
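# The loop above prints chunks as they arrive. If you also want the full
# text afterwards, a small pure-Python helper (not part of the SDK) can
# accumulate the chunks while echoing them:

def collect_stream(chunks, echo=True):
    """Join streamed message chunks into one string, optionally printing each."""
    parts = []
    for chunk in chunks:
        if echo:
            print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# story = collect_stream(r.message for r in client.chat_completions(request))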
```

## Features
- Synchronous and asynchronous API support
- Streaming responses
- Type hints and validation using Pydantic
- Comprehensive test coverage
- Modern Python packaging with pyproject.toml

## License
MIT License