Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mj23978/openserver
Open Server is an OpenAI API Compatible Server for generating text, images, and embeddings, and storing them in vector databases. It also includes chat functionality.
- Host: GitHub
- URL: https://github.com/mj23978/openserver
- Owner: Mj23978
- License: mit
- Created: 2023-11-11T00:32:31.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-12-08T23:51:11.000Z (about 1 year ago)
- Last Synced: 2023-12-09T00:56:28.688Z (about 1 year ago)
- Topics: autogen, g4f, image-generation, langchain, litellm, llamacpp, llm, llmops, openai, stable, vector-database, whisper
- Language: Python
- Homepage:
- Size: 1.97 MB
- Stars: 9
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Open Server
![Hero Image](./docs/hero.jpg)
## Table of Contents

- [Open Server](#open-server)
- [Table of Contents](#table-of-contents)
- [Overview](#overview)
- [Updates](#updates)
- [Roadmap](#roadmap)
- [How to Install & Run](#how-to-install--run)
- [Features](#features)
- [Server Configurations](#server-configurations)
- [All configs are stored in `configs/` folder.](#all-configs-are-stored-in--configs--folder)
- [Settings and API Keys](#settings-and-api-keys)
- [Models Configs](#models-configs)
- [Supported Providers](#supported-providers)
- [Completions and Chat Models](#completions-and-chat-models)
- [Image Models](#image-models)
- [Embeddings](#embeddings)
- [Vector Databases](#vector-databases)
- [Contributing](#contributing)

## Overview
Open Server is my attempt to recreate an **OpenAI Compatible Server** for generating text, images, and embeddings, and storing them in vector databases. It also includes chat functionality.
The server's requests and responses are very similar to OpenAI's API, with additional fields needed for different providers. It uses **Langchain** for the LLM part (robust and powerful, with callbacks) and provider SDKs for image generation and more.
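Because the API is OpenAI compatible, the official `openai` Python client can be pointed at a running instance. The sketch below is illustrative only: the base URL, port, placeholder API key, and model name are assumptions, not values taken from this repository.

```python
# Minimal sketch: calling Open Server with the official openai client.
# base_url, api_key, and model are assumptions; adjust to your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local address of Open Server
    api_key="not-needed-locally",         # placeholder; real provider keys live in configs/
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model exposed by your configured providers
    messages=[{"role": "user", "content": "Say hello from Open Server."}],
)
print(response.choices[0].message.content)
```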
## Updates
- 9 December 2023: Added **OpenRouter** and **NeuroAPI** as providers, configured on top of the OpenAI provider, and added cost calculation to the configs and usage routes.
## Roadmap
- [ ] Python SDK
- [ ] Docker Compose
- [ ] ( **Soon** ) Example Next.JS Front working with Open Server
- [ ] Monitoring for LLM generation (LLM Monitor & Prompt Layer)
- [ ] ( **Soon** ) Audio Translation & Generation & Transcription APIs (local with Whisper, ElevenLabs)

### How to Install & Run
To install the required packages:
```
pip install -r requirements.txt
```

To run the server:
```
python -m openserver.main
```

**Note**: For GPU support, you need to manually install **pytorch** and **llama-cpp-python** builds matching your GPU type (CUDA or ROCm).
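To verify the server came up, one option is to hit the OpenAI-style model listing route. This is a hedged sketch: it assumes the server listens on `http://localhost:8000`, exposes `/v1/models`, and that the `requests` package is installed; adjust to your setup.

```python
# Hedged sanity check: assumes an OpenAI-style /v1/models route on localhost:8000.
import requests

resp = requests.get("http://localhost:8000/v1/models", timeout=10)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("id"))
```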
### Features
This section lists the key features implemented in Open Server (a short streaming sketch follows the list):
- #### Chat & Completions
  - Streaming
  - Function Calling
- #### Image
  - Text-to-Image
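As referenced above, here is a minimal streaming sketch using the `openai` Python client against a locally running instance; the base URL, port, and model name are assumptions, not values taken from this repository.

```python
# Hedged sketch of streaming chat completions through an OpenAI-compatible endpoint.
# base_url and model are assumptions; adjust to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Stream a short haiku."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```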
## Server Configurations

#### All configs are stored in `configs/` folder.
### Settings and API Keys
This file is used to store API keys, URLs, and other similar information. It has a YAML structure and can be used to configure various aspects of the server.
Example `config.yaml`:
```yaml
OPENAI_API_KEY: YOUR_OPENAI_API_KEY
PALM_API_KEY: YOUR_PALM_API_KEY

DB_NAME: test
DB_HOST: localhost
DB_USERNAME: admin
DB_PASSWORD: admin

# Add more configuration options as needed...
```
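For illustration only, here is one way such a YAML settings file could be read with PyYAML; the path `configs/config.yaml` and the key names come from the example above, but Open Server's actual config loader may work differently.

```python
# Illustration only: reading the settings file with PyYAML.
# Open Server's real config loader may differ from this sketch.
import yaml

with open("configs/config.yaml", "r", encoding="utf-8") as f:
    settings = yaml.safe_load(f)

openai_key = settings.get("OPENAI_API_KEY")
db_host = settings.get("DB_HOST", "localhost")
print(f"DB host: {db_host}, OpenAI key configured: {openai_key is not None}")
```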
### Models Configs

These configurations are stored in separate files for better organization and modularity. Each config file follows the YAML structure.
Example LLM Config (`llm_config.yaml`):
```yaml
chat_providers:
  palm:
    name: palm
    models:
      - "models/text-bison-001"
    available: true
  # Add more LLM configs as needed...
```

Example Image Config (`image_config.yaml`):
```yaml
image_models:
  novita:
    name: novita
    models:
      - "dynavisionXLAllInOneStylized_release0534bakedvae_129001.safetensors"
    available: true
    api_key: true
    api_key_name: NOVITA_API_KEY
  # Add more image configs as needed...
```

Example Prompt Config (`prompts_config.yaml`):
```yaml
prompts:
  function_call:
    name: "function_call"
    file: "/prompts/function_call.txt"
  # Add more prompt configs as needed...
```

Example Vector Database Config (`vectordb_config.yaml`):
```yaml
embeddings:
  cohere:
    name: cohere
    models:
      - "embed-english-light-v2.0"
      - "embed-english-v2.0"
    available: true
    api_key: true

vectordbs:
  chromadb:
    available: true
  # Add more vector database configs as needed...
```

Feel free to modify and extend these configurations according to your specific needs.
## Supported Providers
### Completions and Chat Models
| Provider | Completion | Chat | Function Calling | Streaming |
|:----------------:|:----------:|:----:|:----------------:|:---------:|
| openai | ✅ | ✅ | ✅ | ✅ |
| cohere | ✅ | ✅ | | ✅ |
| huggingface | ✅ | | | ✅ |
| together-ai | ✅ | ✅ | ✅ * | ✅ |
| google-palm | ✅ | ✅ | ✅ | ✅ |
| ai21 | ✅ | | ✅ | ✅ |
| fireworks | ✅ | ✅ | ✅ * | ✅ |
| llama-cpp-python | ✅ | | ✅ | ✅ |

\* Only **some models** of the provider support Function Calling.
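For providers that support it, function calling follows the OpenAI request shape. Below is a hedged sketch using the `openai` Python client against a local instance; the base URL, model name, and tool schema are illustrative assumptions, and some providers may expect the older `functions` field rather than `tools`.

```python
# Hedged sketch of OpenAI-style function calling; names and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```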
### Image Models
| Provider | Txt2Img | Img2Img | Upscale |
|:-----------:|:-------:|:-------:|:-------:|
| openai | ✅ | ✅ | |
| together-ai | ✅ | | |
| novita | ✅ | ✅ | ✅ |
| segmind | ✅ | ✅ | |
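As a hedged illustration of Txt2Img through the OpenAI-compatible images route, the sketch below uses the `openai` client; the base URL, model name, and image size are assumptions, and provider-specific models from `image_config.yaml` may be used instead.

```python
# Hedged sketch of text-to-image through an OpenAI-compatible images endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

image = client.images.generate(
    model="dall-e-2",  # or a provider-specific model from image_config.yaml
    prompt="A watercolor fox in a snowy forest",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```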
### Embeddings

palm, huggingface, openai, gradient, cohere
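A hedged sketch of requesting embeddings through the OpenAI-compatible API is shown below; the base URL is an assumption, and the model name is taken from the `vectordb_config.yaml` example above.

```python
# Hedged sketch of an embeddings request via the OpenAI-compatible route.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

result = client.embeddings.create(
    model="embed-english-light-v2.0",  # a Cohere model listed in vectordb_config.yaml
    input=["Open Server stores embeddings in vector databases."],
)
print(len(result.data[0].embedding))
```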
### Vector Databases
chromadb, lancedb, milvus, pinecone, qdrant, redis, weaviate
## Contributing
To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:
Step 1: Clone the repo
```
git clone https://github.com/mj23978/openserver.git
```

Step 2: Navigate into the project and install dependencies:
```
cd openserver
pip install -r requirements.txt
```

Step 3: Submit a PR with your changes!
- push your fork to your GitHub repo
- submit a PR from there