https://github.com/Portkey-AI/portkey-python-sdk
Build reliable, secure, and production-ready AI apps easily.
- Host: GitHub
- URL: https://github.com/Portkey-AI/portkey-python-sdk
- Owner: Portkey-AI
- License: MIT
- Created: 2023-08-28T06:49:48.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-11-21T04:30:12.000Z (11 months ago)
- Last Synced: 2024-11-21T05:24:26.778Z (11 months ago)
- Topics: ai-gateway, llmops, llms, mlops, portkey
- Language: Python
- Homepage: https://portkey.ai/docs
- Size: 6.46 MB
- Stars: 46
- Watchers: 4
- Forks: 17
- Open Issues: 8
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: .github/CODE_OF_CONDUCT.md
- Security: SECURITY.md
- Support: SUPPORT.md
## Control Panel for AI Apps
```bash
pip install portkey-ai
```

## Features
The Portkey SDK is built on top of the OpenAI SDK, allowing you to seamlessly integrate Portkey's advanced features while retaining full compatibility with the OpenAI methods. With Portkey, you can enhance your interactions with OpenAI or any other OpenAI-like provider with robust monitoring, reliability, prompt management, and more, without modifying much of your existing code.
### AI Gateway
- **Unified API Signature**: If you've used OpenAI, you already know how to use Portkey with any other provider.
- **Interoperability**: Write once, run with any provider. Switch between models from any provider seamlessly.
- **Automated Fallbacks & Retries**: Ensure your application remains functional even if a primary service fails (see the config sketch after this list).
- **Load Balancing**: Efficiently distribute incoming requests among multiple models.
- **Semantic Caching**: Reduce costs and latency by intelligently caching results.
- **Virtual Keys**: Secure your LLM API keys by storing them in Portkey's vault and using disposable virtual keys.
- **Request Timeouts**: Manage unpredictable LLM latencies by setting custom timeouts on requests.
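Reliability features like fallbacks, retries, and caching are driven by a gateway config passed to the client. A minimal sketch, assuming the gateway config schema from [Portkey's docs](https://portkey.ai/docs) (the virtual key names are hypothetical placeholders):

```py
from portkey_ai import Portkey

# Gateway config sketch: retry up to 3 times, fall back from the first
# target to the second, and cache semantically similar requests.
# The schema follows Portkey's documented gateway configs; the virtual
# key names below are placeholders.
config = {
    "strategy": {"mode": "fallback"},
    "retry": {"attempts": 3},
    "cache": {"mode": "semantic"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},
        {"virtual_key": "anthropic-virtual-key"},
    ],
}

portkey = Portkey(api_key="PORTKEY_API_KEY", config=config)
```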
### Observability
- **Logging**: Keep track of all requests for monitoring and debugging.
- **Request Tracing**: Understand the journey of each request for optimization.
- **Custom Metadata**: Segment and categorize requests for better insights (see the sketch after this list).
- **Feedback**: Collect and analyze weighted feedback on requests from users.
- **Analytics**: Track your app's and LLMs' performance with 40+ production-critical metrics in a single place.
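To attach tracing and metadata, the client accepts `trace_id` and `metadata` parameters. A minimal sketch; the trace id and metadata keys below are illustrative placeholders:

```py
from portkey_ai import Portkey

# Attach a trace id and custom metadata so this request can be traced
# and segmented in Portkey's logs and analytics. The values below are
# illustrative placeholders.
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY",
    trace_id="checkout-flow-42",
    metadata={"_user": "user_123", "env": "staging"}
)

chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4"
)
```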
## Usage
#### Prerequisites
1. [Sign up on Portkey](https://app.portkey.ai/) and grab your Portkey API Key
2. Add your [OpenAI key](https://platform.openai.com/api-keys) to Portkey's Virtual Keys page and keep it handy

```bash
# Installing the SDK
$ pip install portkey-ai

# Exporting your Portkey API key
$ export PORTKEY_API_KEY=PORTKEY_API_KEY
```

#### Making a Request to OpenAI
* Portkey fully adheres to the OpenAI SDK signature. You can instantly switch to Portkey and start using our production features right out of the box.
* Just replace `from openai import OpenAI` with `from portkey_ai import Portkey`:
```py
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"
)

chat_completion = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4"
)

print(chat_completion)
```

#### Async Usage
* Use `AsyncPortkey` instead of `Portkey` with `await`:
```py
import asyncio
from portkey_ai import AsyncPortkey

portkey = AsyncPortkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY"
)

async def main():
    chat_completion = await portkey.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4"
    )
    print(chat_completion)

asyncio.run(main())
```

## Compatibility with OpenAI SDK
Portkey currently supports all the OpenAI methods, including the legacy ones.
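For example, streaming follows the same pattern as in the OpenAI SDK. A minimal sketch, assuming a `portkey` client constructed as above (chunk fields mirror OpenAI's streaming response shape):

```py
# Pass stream=True and iterate over chunks, just as with the OpenAI SDK.
stream = portkey.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4",
    stream=True
)

for chunk in stream:
    # Each chunk mirrors OpenAI's chunk shape; delta content may be None.
    print(chunk.choices[0].delta.content or "", end="")
```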
| Methods | OpenAI<br>V1.26.0 | Portkey<br>V1.3.1 |
|:----------------------------|:--------|:---------|
| [Audio](https://portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/multimodal-capabilities/vision-1) | ✅ | ✅ |
| [Chat](https://portkey.ai/docs/api-reference/chat-completions) | ✅ | ✅ |
| [Embeddings](https://portkey.ai/docs/api-reference/embeddings) | ✅ | ✅ |
| [Images](https://portkey.ai/docs/api-reference/completions-1) | ✅ | ✅ |
| Fine-tuning | ✅ | ✅ |
| Batch | ✅ | ✅ |
| Files | ✅ | ✅ |
| Models | ✅ | ✅ |
| Moderations | ✅ | ✅ |
| Assistants | ✅ | ✅ |
| Threads | ✅ | ✅ |
| Thread - Messages | ✅ | ✅ |
| Thread - Runs | ✅ | ✅ |
| Thread - Run - Steps | ✅ | ✅ |
| Vector Store | ✅ | ✅ |
| Vector Store - Files | ✅ | ✅ |
| Vector Store - Files Batches | ✅ | ✅ |
| Generations | ❌ (Deprecated) | ✅ |
| Completions | ❌ (Deprecated) | ✅ |

### Portkey-Specific Methods
| Methods | Portkey<br>V1.3.1 |
| :-- | :-- |
| [Feedback](https://portkey.ai/docs/api-reference/feedback) | ✅ |
| [Prompts](https://portkey.ai/docs/api-reference/prompts) | ✅ |

---
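As a brief illustration, weighted feedback can be tied back to a traced request. A sketch; the trace id, value, and weight below are illustrative, and the API is documented at the Feedback link above:

```py
# Send weighted user feedback for a previously traced request.
# trace_id should match the id attached to the original request;
# value and weight below are illustrative.
feedback = portkey.feedback.create(
    trace_id="checkout-flow-42",
    value=1,
    weight=0.5
)
print(feedback)
```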
#### [Check out Portkey docs for the full list of supported providers](https://portkey.ai/docs/welcome/what-is-portkey#ai-providers-supported)
#### Contributing
Get started by checking out the open GitHub issues. Email us at support@portkey.ai or ping us on Discord to chat.