Fill up the `model_list` field in your LiteLLM proxy configuration file
- Host: GitHub
- URL: https://github.com/CXwudi/litellm-config-generator
- Owner: CXwudi
- Created: 2024-04-26T15:49:40.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-07T07:08:28.000Z (about 1 year ago)
- Last Synced: 2025-02-28T12:07:07.369Z (7 months ago)
- Topics: anthropic, gemini, groq, litellm, mistral, openai, openrouter, togetherai
- Language: Python
- Size: 146 KB
- Stars: 9
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# [LiteLLM](https://litellm.vercel.app/docs/proxy/quick_start) Config Generator
A helper Python program that generates a configuration file for the [LiteLLM](https://litellm.vercel.app/docs/proxy/quick_start) proxy.

Just provide a LiteLLM proxy configuration file in YAML with the `model_list` field removed, then run this program following the instructions below to get `model_list` filled in with templated entries.
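For a rough sense of the input, the template is just an ordinary LiteLLM proxy config minus `model_list`. The keys below are standard LiteLLM proxy settings shown for illustration, not taken from this repository's example files:

```yaml
# Illustrative litellm-template.yaml: a normal LiteLLM proxy config
# with the model_list section left out (this program generates it).
litellm_settings:
  drop_params: true
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
```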
## Usage
### 1. Environment Setup
If you have both VSCode and Docker installed, you can use the `.devcontainer` directory to set up a dev container. It contains all the necessary Python-related plugins to help you modify this project. (It is also the environment I used to develop this project.)
Otherwise, install a recent Python 3.x on your system.
Follow the steps below to set up the environment:
1. First, create a virtual environment:
```sh
python3 -m venv .venv
```
2. Activate the virtual environment:
On Windows:
```sh
.venv\Scripts\activate
```
On Unix or macOS:
```sh
source .venv/bin/activate
```
3. Install the required packages:
```sh
pip install -r requirements.txt
```
### 2. Configuration
Copy the `config.example.yaml` file to `config.yaml`, and the `litellm-template.example.yaml` file to `litellm-template.yaml`.
Fill in the `config.yaml` file with your own configuration.
The `litellm-template.yaml` file is where you put your LiteLLM proxy configuration with `model_list` removed.
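The only `config.yaml` key this README references is `io.output-file` (used in step 4 below to locate the generated file), so consult `config.example.yaml` for the full schema. A minimal, hypothetical sketch:

```yaml
# Hypothetical config.yaml sketch: only io.output-file is named in this
# README; see config.example.yaml for the real set of options.
io:
  output-file: litellm-config.yaml  # where the generated config is written
```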
### 3. Modify the template to suit your needs
The `config_generator/src/component/model_poper.py` file contains the logic to generate the `model_list`. It delegates the generation to the different `AbstractLLMPoper` implementations defined in the `config_generator/src/component/llm_poper` directory.
So far we have the following `AbstractLLMPoper` implementations:
| Provider | Implementation | Support Fetching Model List |
| --- | --- | --- |
| OpenAI | `config_generator/src/component/llm_poper/openai.py` | Yes |
| Google | `config_generator/src/component/llm_poper/google.py` | Yes |
| Anthropic | `config_generator/src/component/llm_poper/anthropic.py` | No |
| Mistral | `config_generator/src/component/llm_poper/mistral.py` | Yes |
| Groq | `config_generator/src/component/llm_poper/groq.py` | Yes |
| [GitHub Copilot](https://gitlab.com/aaamoon/copilot-gpt4-service) | `config_generator/src/component/llm_poper/copilot.py` | No |
| TogetherAI | `config_generator/src/component/llm_poper/togetherai.py` | Yes |
| OpenRouter | `config_generator/src/component/llm_poper/openrouter.py` | Yes |

You may be interested in modifying the template in each `AbstractLLMPoper` implementation to suit your needs.
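The exact template each implementation emits lives in its source file, but every generated entry ultimately follows LiteLLM's standard `model_list` schema (see the LiteLLM proxy docs); the model id and environment variable below are illustrative:

```yaml
# Standard LiteLLM model_list entry shape; the model id and the
# environment variable name are examples, not from this repo.
model_list:
  - model_name: gpt-4o                  # name clients use when calling the proxy
    litellm_params:
      model: openai/gpt-4o              # provider/model id LiteLLM routes to
      api_key: os.environ/OPENAI_API_KEY
```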
### 4. Run the program
```sh
./run.sh
```
The output LiteLLM configuration file will be saved to the path configured by `io.output-file`.
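You can then start the LiteLLM proxy against the generated file as described in the [LiteLLM quick start](https://litellm.vercel.app/docs/proxy/quick_start) (the file name below assumes `io.output-file` points to `litellm-config.yaml`):

```sh
# Start the LiteLLM proxy with the generated configuration
litellm --config litellm-config.yaml
```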