Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/parisneo/lollms_server_proxy
A proxy to use lollms as a multi user remote server for text/image/audio/video generation
- Host: GitHub
- URL: https://github.com/parisneo/lollms_server_proxy
- Owner: ParisNeo
- License: apache-2.0
- Created: 2024-02-05T21:07:01.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-02-06T00:11:17.000Z (11 months ago)
- Last Synced: 2024-10-31T11:29:03.547Z (about 2 months ago)
- Language: Python
- Size: 13.7 KB
- Stars: 3
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# LoLLMS Proxy Server
LoLLMS Proxy Server is a lightweight reverse proxy designed for load balancing and rate limiting across multiple backend servers. It is licensed under the Apache 2.0 license and can be installed using pip. This README covers installing, configuring, and using the LoLLMS Proxy Server.
## Prerequisites
Make sure you have Python (>= 3.8) installed on your system before proceeding.

## Installation
1. Clone or download the `lollms_server_proxy` repository from GitHub: https://github.com/ParisNeo/lollms_server_proxy
2. Navigate to the cloned directory in the terminal and run `pip install -e .`
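To verify the editable install, a quick sanity check (assuming the package registers the `lollms_server_proxy` console entry point described in the usage section below):

```bash
pip show lollms_server_proxy   # should report the package and its editable install location
lollms_server_proxy --help     # assumption: the CLI prints its available flags
```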
## Installation using Dockerfile
1. Clone this repository as described above.
2. Build your container image with the Dockerfile provided by this repository.

### Podman
```bash
cd lollms_server_proxy
podman build -t lollms_server_proxy:latest .
```

### Docker
```bash
cd lollms_server_proxy
docker build -t lollms_server_proxy:latest .
```

## Configuration
### Server configuration (config.ini)
Create a file named `config.ini` in the same directory as your script, containing server configurations:
```ini
[DefaultServer]
url = http://localhost:11434
queue_size = 5

[SecondaryServer]
url = http://localhost:3002
queue_size = 3

# Add as many servers as needed, in the same format as [DefaultServer] and [SecondaryServer].
```
Replace `http://localhost:11434` with the URL and port of the first server. The `queue_size` value sets the maximum number of requests that can be queued at any given time for that server.
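Following the comment in the sample, additional backends use the same stanza shape; the section name and port below are illustrative:

```ini
# Hypothetical third backend; adjust the name, URL, and queue size to your setup.
[TertiaryServer]
url = http://localhost:3003
queue_size = 2
```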
### Authorized users (authorized_users.txt)
Create a file named `authorized_users.txt` in the same directory as your script, containing one `user:key` pair per line:
```text
user1:key1
user2:key2
```
Replace `user1`, `key1`, `user2`, and `key2` with the desired username and API key for each user.
You can also use the `ollama_proxy_add_user` utility to add a user and generate a key automatically:
```bash
ollama_proxy_add_user --users_list [path to the `authorized_users.txt` file]
```

## Usage
### Starting the server
Start the LoLLMS Proxy Server by running the following command in your terminal:
```bash
lollms_server_proxy --config [configuration file path] --users_list [users list file path] --port [port number to access the proxy]
```
The server listens on a port of the form 808x (8080, 8081, and so on). If no other instance is running, the first available port starting at 8080 is selected automatically.
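For example, to start the proxy on port 8080 with the files created earlier (the paths are illustrative):

```bash
# Assumes config.ini and authorized_users.txt sit in the current directory.
lollms_server_proxy --config ./config.ini --users_list ./authorized_users.txt --port 8080
```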
### Client requests
To send a request to the server, use the following command:
```bash
curl -X <method> -H "Authorization: Bearer <user>:<key>" http://localhost:<port><endpoint> [--data <data>]
```
Replace `<method>` with the HTTP method (GET or POST), `<user>:<key>` with a valid pair from your `authorized_users.txt`, `<port>` with the port number of your running proxy server, and `<endpoint>` with the target endpoint (e.g., `/api/generate`). If you are making a POST request, include the `--data <data>` option to send data in the body. For example:
```bash
curl -X POST -H "Authorization: Bearer user1:key1" http://localhost:8080/api/generate --data '{"model": "mixtral:latest", "prompt": "Once upon a time,", "stream": false, "temperature": 0.3, "max_tokens": 1024}'
```
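A GET request works the same way; the endpoint below is illustrative and depends on the backend you are proxying (here, an Ollama-style backend as in the sample config):

```bash
# /api/tags is Ollama's model-listing endpoint; substitute whatever your backend exposes.
curl -X GET -H "Authorization: Bearer user1:key1" http://localhost:8080/api/tags
```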
### Starting the server using the container image
To start the proxy in the background with the image built above, use either:
1) docker: `docker run -d --name ollama-proxy-server -p 8080:8080 lollms_server_proxy:latest`
2) podman: `podman run -d --name ollama-proxy-server -p 8080:8080 lollms_server_proxy:latest`
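To confirm the container is up and inspect its output, the standard container commands apply (shown here with docker; podman accepts the same syntax):

```bash
docker ps --filter name=ollama-proxy-server   # confirm the container is running
docker logs -f ollama-proxy-server            # follow the proxy's log output
```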