Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/llmapi-io/llmapi-server
Self-host llmapi server, making it really easy to access LLMs! :rocket:
- Host: GitHub
- URL: https://github.com/llmapi-io/llmapi-server
- Owner: llmapi-io
- Created: 2023-03-14T08:35:41.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-04-07T15:12:40.000Z (over 1 year ago)
- Last Synced: 2024-10-18T00:20:53.064Z (29 days ago)
- Topics: chatgpt, dall-e, embeddings, ernie-bot, gpt-3, gpt-4, gpt3, large-language-models, llama, openapi, welm
- Language: Python
- Homepage: https://llmapi.io
- Size: 36.1 KB
- Stars: 36
- Watchers: 1
- Forks: 11
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# LLMApi Server

Self-host llmapi server.
## Introduction
[Chinese documentation](README.zh.md)
llmapi-server is an abstract backend that encapsulates a variety of large language models (LLMs, such as ChatGPT, GPT-3, and GPT-4) and provides simple access to them through an OpenAPI interface.
:star2: If it is helpful to you, please star it! :star2:
## Diagram
```mermaid
graph LR
subgraph llmapi server
OpenAPI --> session
OpenAPI --> pre_post
subgraph backend
style backend fill:#f9f
pre_post-->chatgpt
pre_post-->dall-e
pre_post-->llama
pre_post-->...
end
end
text-->OpenAPI
image-->OpenAPI
embedding-->OpenAPI
others--> OpenAPI
```
## :sparkles: Supported backends
- [x] `chatgpt`: OpenAI's official ChatGPT interface
- [x] `gpt3`: OpenAI's official GPT-3 interface
- [x] `gpt-embedding`: OpenAI's official Embedding interface
- [x] `dall-e`: OpenAI's official DALL·E interface
- [x] `welm`: WeChat's WeLM interface
- [x] `newbing`: New Bing search based on ChatGPT (unofficial)

### ⏳ WIP
- [ ] llama
- [ ] stable diffusion
- [ ] controlNet
- [ ] SAM (Meta)
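Each backtick-quoted name above should correspond to a value for the `bot_type` field used when starting a session (see the `curl` examples below, which use the built-in `mock` backend).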
## Install & Run

1. Run locally:
``` shell
# python >= 3.8
python3 -m pip install -r requirements.txt
python3 run_api_server.py
```

2. Run with Docker:
``` shell
./build_docker.sh
./start_docker.sh
```

## Visit the server
1. Use the `curl` command to access the API:
``` shell
# 1. Start a new session
curl -X POST -H "Content-Type: application/json" -d '{"bot_type":"mock"}' http://127.0.0.1:5050/v1/chat/start
# response sample: {"code":0,"msg":"Success","session":"123456"}

# 2. Chat with the LLM
curl -X POST -H "Content-Type: application/json" -d '{"session":"123456","content":"hello"}' http://127.0.0.1:5050/v1/chat/ask
# response sample: {"code":0,"msg":"Success","reply":"Text mock reply for your prompt:hello","timestamp":1678865301.0842562}

# 3. Close the session and end the chat
curl -X POST -H "Content-Type: application/json" -d '{"session":"123456"}' http://127.0.0.1:5050/v1/chat/end
# response: {"code":0,"msg":"Success"}
```
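The same session lifecycle can also be scripted. Below is a minimal sketch using Python's `requests` library, assuming the server is running locally on port 5050 with the `mock` backend:

``` python
import requests

BASE = "http://127.0.0.1:5050/v1/chat"

# 1. Start a new session with the mock backend
start = requests.post(f"{BASE}/start", json={"bot_type": "mock"}).json()
session = start["session"]

# 2. Chat within the session
reply = requests.post(f"{BASE}/ask",
                      json={"session": session, "content": "hello"}).json()
print(reply["reply"])  # e.g. "Text mock reply for your prompt:hello"

# 3. Close the session
requests.post(f"{BASE}/end", json={"session": session})
```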
2. Use the command-line tool [llmapi_cli](https://github.com/llmapi-io/llmapi-cli):

``` shell
llmapi_cli --host="http://127.0.0.1:5050" --bot=mock
```

3. Integrate the `llmapi_cli` module into your Python code:
``` python
from llmapi_cli import LLMClient

client = LLMClient(host="http://127.0.0.1:5050", bot="mock")
rep = client.ask("hello")
print(rep)
```
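Running this against the `mock` backend should print a mock reply similar to the `curl` sample above.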
## Plug in your own LLM backend!

1. Create a new backend under the `backend` directory (assume it is named `newllm`); you can simply `cp -r mock newllm`
2. Following the implementation of `mock`, change the backend name to `newllm`
3. Add the necessary dependencies in the `newllm` directory; all related development stays within that directory
4. Register `newllm` in `backend.py` (a rough sketch of the resulting backend follows this list)
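For step 2, a new backend might look roughly like the sketch below. The class and method names here are hypothetical; mirror whatever the `mock` backend actually implements rather than this skeleton:

``` python
# backend/newllm/newllm.py -- hypothetical skeleton, not the real interface;
# copy the structure of the mock backend rather than using this verbatim.
class NewLLMBot:
    def __init__(self):
        # initialize API keys, model handles, etc. for your LLM here
        pass

    def ask(self, prompt: str) -> str:
        # call your LLM and return its reply as plain text
        return "newllm reply for: " + prompt
```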