https://github.com/yinheli/openai-relay
A simple OpenAI API relay service built with the Deno runtime, supporting various OpenAI-compatible backend services.
- Host: GitHub
- URL: https://github.com/yinheli/openai-relay
- Owner: yinheli
- License: MIT
- Created: 2024-10-28T15:25:02.000Z (7 months ago)
- Default Branch: master
- Last Pushed: 2025-03-18T03:10:26.000Z (3 months ago)
- Last Synced: 2025-03-18T04:22:44.085Z (3 months ago)
- Topics: openai-proxy, openai-relay
- Language: TypeScript
- Size: 49.8 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
# OpenAI Relay


[Docker Hub](https://hub.docker.com/r/yinheli/openai-relay)


[License: MIT](https://opensource.org/licenses/MIT)

## Purpose
This project is a lightweight OpenAI API relay service that enables routing requests to different OpenAI-compatible
backend services based on model prefixes. It features seamless integration with Kubernetes through Helm charts for easy
deployment and scaling.

## Features
- 🚀 Built with Deno runtime
- 🔄 Compatible with OpenAI API specification
- 🎮 Easy configuration through environment variables
- 🚢 Kubernetes deployment support with Helm charts

## Installation
### Using Helm Charts
1. **Add the Helm repository**:
```bash
helm repo add openai-relay https://yinheli.github.io/openai-relay
```

2. **Update your Helm repositories**:
```bash
helm repo update
```

3. **Install the chart**:
```bash
helm install openai-relay openai-relay/openai-relay
```

4. **Configuration**: You can customize the installation by providing a `values.yaml` file. Below are the default values you can override:

```yaml
replicaCount: 1

image:
  repository: yinheli/openai-relay
  pullPolicy: IfNotPresent
  tag: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

resources: {}

# environment variables
envVars:
  RELAY_PROVIDER_SILICONCLOUD: https://api.siliconflow.cn
  RELAY_MODEL_SILICONCLOUD: siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini
  RELAY_API_KEY_SILICONCLOUD: sk-siliconcloud-api-key
```

5. **Accessing the Service**: Once installed, you can access the service through the specified service type and port.
For more detailed configuration options, refer to the `values.yaml` and `config.ts` files in the repository.
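The `RELAY_*` variables follow a per-provider naming convention: `RELAY_PROVIDER_<NAME>` (backend base URL), `RELAY_MODEL_<NAME>` (comma-separated model names), and `RELAY_API_KEY_<NAME>`. As a rough illustration of how such variables could drive model-based routing (a hypothetical sketch, not the project's actual `config.ts`):

```typescript
// Hypothetical sketch of the provider/env-var convention above; the names
// and shapes are illustrative and not the project's actual implementation.
interface Provider {
  name: string;
  baseUrl: string;
  apiKey: string;
  models: string[];
}

// Collect every RELAY_PROVIDER_<NAME> entry together with its matching
// RELAY_MODEL_<NAME> and RELAY_API_KEY_<NAME> variables.
function loadProviders(env: Record<string, string>): Provider[] {
  const providers: Provider[] = [];
  for (const [key, baseUrl] of Object.entries(env)) {
    const match = key.match(/^RELAY_PROVIDER_(.+)$/);
    if (!match) continue;
    const name = match[1];
    providers.push({
      name,
      baseUrl,
      apiKey: env[`RELAY_API_KEY_${name}`] ?? "",
      models: (env[`RELAY_MODEL_${name}`] ?? "").split(",").filter(Boolean),
    });
  }
  return providers;
}

// Route a requested model name to the provider that lists it.
function route(providers: Provider[], model: string): Provider | undefined {
  return providers.find((p) => p.models.includes(model));
}
```

With the default `envVars` above, a request for `siliconcloud-gpt-4o` would resolve to `https://api.siliconflow.cn`.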
### Using Docker
You can run the OpenAI Relay service using Docker:
```bash
docker run -d \
--name openai-relay \
-e RELAY_PROVIDER_SILICONCLOUD=https://api.siliconflow.cn \
-e RELAY_MODEL_SILICONCLOUD=siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini \
-e RELAY_API_KEY_SILICONCLOUD=sk-siliconcloud-api-key \
-p 80:80 \
yinheli/openai-relay:latest
```

> [!NOTE]
> You can customize the environment variables to fit your needs, and pay attention to the Docker image tag: you may need
> to replace it with a specific release tag.

### Run directly with Deno
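The `--env-file=.env` flag loads configuration from a local `.env` file; reusing the illustrative provider from the examples above, it might contain:

```bash
RELAY_PROVIDER_SILICONCLOUD=https://api.siliconflow.cn
RELAY_MODEL_SILICONCLOUD=siliconcloud-gpt-4o,siliconcloud-gpt-4o-mini
RELAY_API_KEY_SILICONCLOUD=sk-siliconcloud-api-key
```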
```bash
deno run -A --env-file=.env jsr:@yinheli/openai-relay
```
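However the relay is deployed, clients call it just as they would the OpenAI API, and the configured model name selects the backend. A hypothetical request sketch, assuming the standard `/v1/chat/completions` path and placeholder host, port, and key:

```typescript
// Build an OpenAI-style chat completion request aimed at the relay.
// The model name is what the relay matches against its configuration;
// the base URL and API key here are placeholders.
const relayBase = "http://localhost:80";

const request = new Request(`${relayBase}/v1/chat/completions`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer sk-siliconcloud-api-key",
  },
  body: JSON.stringify({
    model: "siliconcloud-gpt-4o", // matches RELAY_MODEL_SILICONCLOUD
    messages: [{ role: "user", content: "Hello" }],
  }),
});

// Sending it with `fetch(request)` would return the backend's response.
```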

## Contributing
We welcome contributions to improve the OpenAI Relay service. Please see our [CONTRIBUTING.md](CONTRIBUTING.md) for
guidelines on how to submit improvements and bug fixes.

## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.