{"id":35835757,"url":"https://github.com/mirrowel/llm-api-key-proxy","last_synced_at":"2026-02-07T00:18:00.494Z","repository":{"id":298367253,"uuid":"999712160","full_name":"Mirrowel/LLM-API-Key-Proxy","owner":"Mirrowel","description":"Universal LLM Gateway: One API, every LLM. OpenAI/Anthropic-compatible endpoints with multi-provider translation and intelligent load-balancing.","archived":false,"fork":false,"pushed_at":"2026-01-16T16:36:35.000Z","size":2122,"stargazers_count":255,"open_issues_count":14,"forks_count":51,"subscribers_count":2,"default_branch":"main","last_synced_at":"2026-01-17T05:10:02.783Z","etag":null,"topics":["api-key","gemini-api","large-language-model","large-language-models","llm"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Mirrowel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":"mirrowel","ko_fi":"mirrowel"}},"created_at":"2025-06-10T17:07:26.000Z","updated_at":"2026-01-16T15:00:24.000Z","dependencies_parsed_at":"2025-07-22T19:13:28.491Z","dependency_job_id":"5acbaefd-5bd8-4a0c-ab55-3f12bd3dd98c","html_url":"https://github.com/Mirrowel/LLM-API-Key-Proxy","commit_stats":null,"previous_names":["mirrowel/llm-api-key-proxy"],"tags_count":153,"template":false,"template_full_name":null,"purl":"pkg:github/Mirrowel/LLM-API-Key-Proxy","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Mirrowel%2FLLM-API-Key-Proxy","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Mirrowel%2FLLM-API-Key-Proxy/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Mirrowel%2FLLM-API-Key-Proxy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Mirrowel%2FLLM-API-Key-Proxy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Mirrowel","download_url":"https://codeload.github.com/Mirrowel/LLM-API-Key-Proxy/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Mirrowel%2FLLM-API-Key-Proxy/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28590676,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-19T23:59:00.777Z","status":"ssl_error","status_checked_at":"2026-01-19T23:58:54.030Z","response_time":67,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api-key","gemini-api","large-language-model","large-language-models","llm"],"created_at":"2026-01-08T00:09:47.927Z","updated_at":"2026-02-07T00:18:00.480Z","avatar_url":"https://github.com/Mirrowel.png","language":"Python","readme":"# Universal LLM API Proxy \u0026 Resilience Library \n[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/C0C0UZS4P)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/Mirrowel/LLM-API-Key-Proxy) [![zread](https://img.shields.io/badge/Ask_Zread-_.svg?style=flat\u0026color=00b0aa\u0026labelColor=000000\u0026logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iMTYiIHZpZXdCb3g9IjAgMCAxNiAxNiIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTQuOTYxNTYgMS42MDAxSDIuMjQxNTZDMS44ODgxIDEuNjAwMSAxLjYwMTU2IDEuODg2NjQgMS42MDE1NiAyLjI0MDFWNC45NjAxQzEuNjAxNTYgNS4zMTM1NiAxLjg4ODEgNS42MDAxIDIuMjQxNTYgNS42MDAxSDQuOTYxNTZDNS4zMTUwMiA1LjYwMDEgNS42MDE1NiA1LjMxMzU2IDUuNjAxNTYgNC45NjAxVjIuMjQwMUM1LjYwMTU2IDEuODg2NjQgNS4zMTUwMiAxLjYwMDEgNC45NjE1NiAxLjYwMDFaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00Ljk2MTU2IDEwLjM5OTlIMi4yNDE1NkMxLjg4ODEgMTAuMzk5OSAxLjYwMTU2IDEwLjY4NjQgMS42MDE1NiAxMS4wMzk5VjEzLjc1OTlDMS42MDE1NiAxNC4xMTM0IDEuODg4MSAxNC4zOTk5IDIuMjQxNTYgMTQuMzk5OUg0Ljk2MTU2QzUuMzE1MDIgMTQuMzk5OSA1LjYwMTU2IDE0LjExMzQgNS42MDE1NiAxMy43NTk5VjExLjAzOTlDNS42MDE1NiAxMC42ODY0IDUuMzE1MDIgMTAuMzk5OSA0Ljk2MTU2IDEwLjM5OTlaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik0xMy43NTg0IDEuNjAwMUgxMS4wMzg0QzEwLjY4NSAxLjYwMDEgMTAuMzk4NCAxLjg4NjY0IDEwLjM5ODQgMi4yNDAxVjQuOTYwMUMxMC4zOTg0IDUuMzEzNTYgMTAuNjg1IDUuNjAwMSAxMS4wMzg0IDUuNjAwMUgxMy43NTg0QzE0LjExMTkgNS42MDAxIDE0LjM5ODQgNS4zMTM1NiAxNC4zOTg0IDQuOTYwMVYyLjI0MDFDMTQuMzk4NCAxLjg4NjY0IDE0LjExMTkgMS42MDAxIDEzLjc1ODQgMS42MDAxWiIgZmlsbD0iI2ZmZiIvPgo8cGF0aCBkPSJNNCAxMkwxMiA0TDQgMTJaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00IDEyTDEyIDQiIHN0cm9rZT0iI2ZmZiIgc3Ryb2tlLXdpZHRoPSIxLjUiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIvPgo8L3N2Zz4K\u0026logoColor=ffffff)](https://zread.ai/Mirrowel/LLM-API-Key-Proxy)\n\n**One proxy. Any LLM provider. Zero code changes.**\n\nA self-hosted proxy that provides OpenAI and Anthropic compatible API endpoints for all your LLM providers. Works with any application that supports custom OpenAI or Anthropic base URLs—including Claude Code, Opencode,  and more—no code changes required in your existing tools.\n\nThis project consists of two components:\n\n1. **The API Proxy** — A FastAPI application providing universal `/v1/chat/completions` (OpenAI) and `/v1/messages` (Anthropic) endpoints\n2. **The Resilience Library** — A reusable Python library for intelligent API key management, rotation, and failover\n\n---\n\n## Why Use This?\n\n- **Universal Compatibility** — Works with any app supporting OpenAI or Anthropic APIs: Claude Code, Opencode, Continue, Roo/Kilo Code, Cursor, JanitorAI, SillyTavern, custom applications, and more\n- **One Endpoint, Many Providers** — Configure Gemini, OpenAI, Anthropic, and [any LiteLLM-supported provider](https://docs.litellm.ai/docs/providers) once. 
- **Anthropic API Compatible** — Use Claude Code or any Anthropic SDK client with non-Anthropic providers such as Gemini and OpenAI, or with custom models
- **Built-in Resilience** — Automatic key rotation, failover on errors, rate limit handling, and intelligent cooldowns
- **Exclusive Provider Support** — Includes custom providers not available elsewhere: **Antigravity** (Gemini 3 + Claude Sonnet/Opus 4.5), **Gemini CLI**, **Qwen Code**, and **iFlow**

---

## Quick Start

### Windows

1. **Download** the latest release from [GitHub Releases](https://github.com/Mirrowel/LLM-API-Key-Proxy/releases/latest)
2. **Unzip** the downloaded file
3. **Run** `proxy_app.exe` — the interactive TUI launcher opens

<!-- TODO: Add TUI main menu screenshot here -->

### macOS / Linux

```bash
# Download and extract the release for your platform
chmod +x proxy_app
./proxy_app
```

### Docker

**Using the pre-built image (recommended):**

```bash
# Pull and run directly
docker run -d \
  --name llm-api-proxy \
  -p 8000:8000 \
  -v $(pwd)/.env:/app/.env:ro \
  -v $(pwd)/oauth_creds:/app/oauth_creds \
  -v $(pwd)/logs:/app/logs \
  -e SKIP_OAUTH_INIT_CHECK=true \
  ghcr.io/mirrowel/llm-api-key-proxy:latest
```

**Using Docker Compose:**

```bash
# Create your .env file and key_usage.json first, then:
cp .env.example .env
touch key_usage.json
docker compose up -d
```

> **Important:** You must create both `.env` and `key_usage.json` files before running Docker Compose. If `key_usage.json` doesn't exist, Docker will create it as a directory instead of a file, causing errors.

> **Note:** For OAuth providers, complete authentication locally first using the credential tool, then mount the `oauth_creds/` directory or export credentials to environment variables.

### From Source

```bash
git clone https://github.com/Mirrowel/LLM-API-Key-Proxy.git
cd LLM-API-Key-Proxy
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
python src/proxy_app/main.py
```

> **Tip:** Running with command-line arguments (e.g., `--host 0.0.0.0 --port 8000`) bypasses the TUI and starts the proxy directly.

---

## Connecting to the Proxy

Once the proxy is running, configure your application with these settings:

| Setting | Value |
|---------|-------|
| **Base URL / API Endpoint** | `http://127.0.0.1:8000/v1` |
| **API Key** | Your `PROXY_API_KEY` |

### Model Format: `provider/model_name`

**Important:** Models must be specified in the format `provider/model_name`. The `provider/` prefix tells the proxy which backend to route the request to.
```
gemini/gemini-2.5-flash          ← Gemini API
openai/gpt-4o                    ← OpenAI API
anthropic/claude-3-5-sonnet      ← Anthropic API
openrouter/anthropic/claude-3-opus  ← OpenRouter
gemini_cli/gemini-2.5-pro        ← Gemini CLI (OAuth)
antigravity/gemini-3-pro-preview ← Antigravity (Gemini 3, Claude Opus 4.5)
```

### Usage Examples

<details>
<summary><b>Python (OpenAI Library)</b></summary>

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="your-proxy-api-key"
)

response = client.chat.completions.create(
    model="gemini/gemini-2.5-flash",  # provider/model format
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```

</details>

<details>
<summary><b>curl</b></summary>

```bash
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-proxy-api-key" \
  -d '{
    "model": "gemini/gemini-2.5-flash",
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```

</details>

<details>
<summary><b>JanitorAI / SillyTavern / Other Chat UIs</b></summary>

1. Go to **API Settings**
2. Select **"Proxy"** or **"Custom OpenAI"** mode
3. Configure:
   - **API URL:** `http://127.0.0.1:8000/v1`
   - **API Key:** Your `PROXY_API_KEY`
   - **Model:** `provider/model_name` (e.g., `gemini/gemini-2.5-flash`)
4. Save and start chatting

</details>

<details>
<summary><b>Continue / Cursor / IDE Extensions</b></summary>

In your configuration file (e.g., `config.json`):

```json
{
  "models": [
    {
      "title": "Gemini via Proxy",
      "provider": "openai",
      "model": "gemini/gemini-2.5-flash",
      "apiBase": "http://127.0.0.1:8000/v1",
      "apiKey": "your-proxy-api-key"
    }
  ]
}
```

</details>
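<details>
<summary><b>Python (Embeddings)</b></summary>

The proxy also exposes `POST /v1/embeddings` (see the endpoint table below). This is a minimal sketch assuming the endpoint accepts the standard OpenAI embeddings request shape; `gemini/text-embedding-004` is an illustrative model name, so substitute an embedding model your configured provider actually offers.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="your-proxy-api-key"
)

# Several inputs can be batched into one request; the proxy aggregates
# batch embeddings for improved throughput.
response = client.embeddings.create(
    model="gemini/text-embedding-004",  # illustrative: use a model your provider offers
    input=["The quick brown fox", "jumps over the lazy dog"]
)
print(len(response.data), "vectors of dimension", len(response.data[0].embedding))
```

</details>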
<details>
<summary><b>Claude Code</b></summary>

Claude Code natively supports custom Anthropic API endpoints. The recommended setup is to edit your Claude Code `settings.json`:

```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "your-proxy-api-key",
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:8000",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "gemini/gemini-3-pro",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "gemini/gemini-3-flash",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "openai/gpt-5-mini"
  }
}
```

Now you can use Claude Code with Gemini, OpenAI, or any other configured provider.

</details>

<details>
<summary><b>Anthropic Python SDK</b></summary>

```python
from anthropic import Anthropic

client = Anthropic(
    base_url="http://127.0.0.1:8000",
    api_key="your-proxy-api-key"
)

# Use any provider through Anthropic's API format
response = client.messages.create(
    model="gemini/gemini-3-flash",  # provider/model format
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.content[0].text)
```

</details>

### API Endpoints

| Endpoint | Description |
|----------|-------------|
| `GET /` | Status check — confirms proxy is running |
| `POST /v1/chat/completions` | Chat completions (OpenAI format) |
| `POST /v1/messages` | Chat completions (Anthropic format) — Claude Code compatible |
| `POST /v1/messages/count_tokens` | Count tokens for Anthropic-format requests |
| `POST /v1/embeddings` | Text embeddings |
| `GET /v1/models` | List all available models with pricing & capabilities |
| `GET /v1/models/{model_id}` | Get details for a specific model |
| `GET /v1/providers` | List configured providers |
| `POST /v1/token-count` | Calculate token count for a payload |
| `POST /v1/cost-estimate` | Estimate cost based on token counts |

> **Tip:** The `/v1/models` endpoint is useful for discovering available models in your client. Many apps can fetch this list automatically. Add `?enriched=false` for a minimal response without pricing data.
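For scripted discovery, a short sketch like the following works; it assumes the response follows the OpenAI-style list shape (`{"data": [{"id": ...}, ...]}`) that OpenAI-compatible clients expect:

```python
import requests

BASE_URL = "http://127.0.0.1:8000"
HEADERS = {"Authorization": "Bearer your-proxy-api-key"}

# ?enriched=false returns the minimal listing without pricing data
resp = requests.get(f"{BASE_URL}/v1/models", params={"enriched": "false"}, headers=HEADERS)
resp.raise_for_status()

for model in resp.json()["data"]:
    print(model["id"])  # e.g. gemini/gemini-2.5-flash
```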
---

## Managing Credentials

The proxy includes an interactive tool for managing all your API keys and OAuth credentials.

### Using the TUI

<!-- TODO: Add TUI credentials menu screenshot here -->

1. Run the proxy without arguments to open the TUI
2. Select **"🔑 Manage Credentials"**
3. Choose to add API keys or OAuth credentials

### Using the Command Line

```bash
python -m rotator_library.credential_tool
```

### Credential Types

| Type | Providers | How to Add |
|------|-----------|------------|
| **API Keys** | Gemini, OpenAI, Anthropic, OpenRouter, Groq, Mistral, NVIDIA, Cohere, Chutes | Enter key in TUI or add to `.env` |
| **OAuth** | Gemini CLI, Antigravity, Qwen Code, iFlow | Interactive browser login via credential tool |

### The `.env` File

Credentials are stored in a `.env` file. You can edit it directly or use the TUI:

```env
# Required: Authentication key for YOUR proxy
PROXY_API_KEY="your-secret-proxy-key"

# Provider API Keys (add multiple with _1, _2, etc.)
GEMINI_API_KEY_1="your-gemini-key"
GEMINI_API_KEY_2="another-gemini-key"
OPENAI_API_KEY_1="your-openai-key"
ANTHROPIC_API_KEY_1="your-anthropic-key"
```

> Copy `.env.example` to `.env` as a starting point.

---

## The Resilience Library

The proxy is powered by a standalone Python library that you can use directly in your own applications.

### Key Features

- **Async-native** with `asyncio` and `httpx`
- **Intelligent key selection** with tiered, model-aware locking
- **Deadline-driven requests** with configurable global timeout
- **Automatic failover** between keys on errors
- **OAuth support** for Gemini CLI, Antigravity, Qwen, iFlow
- **Stateless deployment ready** — load credentials from environment variables

### Basic Usage

```python
from rotator_library import RotatingClient

client = RotatingClient(
    api_keys={"gemini": ["key1", "key2"], "openai": ["key3"]},
    global_timeout=30,
    max_retries=2
)

async with client:
    response = await client.acompletion(
        model="gemini/gemini-2.5-flash",
        messages=[{"role": "user", "content": "Hello!"}]
    )
```
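Streaming is also supported; see the Library README below for the exact interface. A minimal sketch, assuming `acompletion` follows the LiteLLM-style convention where `stream=True` returns an async iterator of OpenAI-style chunks:

```python
import asyncio
from rotator_library import RotatingClient

async def main():
    client = RotatingClient(api_keys={"gemini": ["key1", "key2"]}, global_timeout=60)
    async with client:
        # Assumption: stream=True yields chunks with OpenAI-style deltas
        stream = await client.acompletion(
            model="gemini/gemini-2.5-flash",
            messages=[{"role": "user", "content": "Write a haiku about failover."}],
            stream=True,
        )
        async for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                print(delta, end="", flush=True)

asyncio.run(main())
```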
### Library Documentation

See the [Library README](src/rotator_library/README.md) for complete documentation including:

- All initialization parameters
- Streaming support
- Error handling and cooldown strategies
- Provider plugin system
- Credential prioritization

---

## Interactive TUI

The proxy includes a powerful text-based UI for configuration and management.

<!-- TODO: Add TUI main menu screenshot here -->

### TUI Features

- **🚀 Run Proxy** — Start the server with saved settings
- **⚙️ Configure Settings** — Host, port, API key, request logging
- **🔑 Manage Credentials** — Add/edit API keys and OAuth credentials
- **📊 View Status** — See configured providers and credential counts
- **🔧 Advanced Settings** — Custom providers, model definitions, concurrency

### Configuration Files

| File | Contents |
|------|----------|
| `.env` | All credentials and advanced settings |
| `launcher_config.json` | TUI-specific settings (host, port, logging) |

---

## Features

### Core Capabilities

- **Universal OpenAI-compatible endpoint** for all providers
- **Multi-provider support** via [LiteLLM](https://docs.litellm.ai/docs/providers) fallback
- **Automatic key rotation** and load balancing
- **Interactive TUI** for easy configuration
- **Detailed request logging** for debugging

<details>
<summary><b>🛡️ Resilience & High Availability</b></summary>

- **Global timeout** with deadline-driven retries
- **Escalating cooldowns** per model (10s → 30s → 60s → 120s)
- **Key-level lockouts** for consistently failing keys
- **Stream error detection** and graceful recovery
- **Batch embedding aggregation** for improved throughput
- **Automatic daily resets** for cooldowns and usage stats

</details>

<details>
<summary><b>🔑 Credential Management</b></summary>

- **Auto-discovery** of API keys from environment variables
- **OAuth discovery** from standard paths (`~/.gemini/`, `~/.qwen/`, `~/.iflow/`)
- **Duplicate detection** warns when the same account is added multiple times
- **Credential prioritization** — paid tier used before free tier
- **Stateless deployment** — export OAuth to environment variables
- **Local-first storage** — credentials isolated in `oauth_creds/` directory

</details>

<details>
<summary><b>⚙️ Advanced Configuration</b></summary>

- **Model whitelists/blacklists** with wildcard support
- **Per-provider concurrency limits** (`MAX_CONCURRENT_REQUESTS_PER_KEY_<PROVIDER>`)
- **Rotation modes** — balanced (distribute load) or sequential (use until exhausted)
- **Priority multipliers** — higher concurrency for paid credentials
- **Model quota groups** — shared cooldowns for related models
- **Temperature override** — prevent tool hallucination issues
- **Weighted random rotation** — unpredictable selection patterns (see the sketch below)

</details>
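To make that last knob concrete, here is an illustrative sketch of tolerance-weighted key selection. It is not the library's actual algorithm; it only shows how a `ROTATION_TOLERANCE` of `0.0` collapses to deterministic least-used selection while a larger value randomizes among near-minimum keys (the variable itself is documented in the reference tables below):

```python
import random

def pick_key(usage: dict[str, int], tolerance: float) -> str:
    """Illustrative only; not the library's actual selection logic.

    tolerance=0.0 always returns the least-used key (deterministic);
    a larger tolerance admits any key whose usage is within `tolerance`
    of the minimum, then picks randomly among the candidates.
    """
    least_used = min(usage.values())
    candidates = [key for key, count in usage.items() if count <= least_used + tolerance]
    return random.choice(candidates)

usage = {"key1": 10, "key2": 11, "key3": 14}
print(pick_key(usage, tolerance=0.0))  # always "key1"
print(pick_key(usage, tolerance=3.0))  # "key1" or "key2" (key3 is too far above the minimum)
```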
<details>
<summary><b>🔌 Provider-Specific Features</b></summary>

**Gemini CLI:**

- Zero-config Google Cloud project discovery
- Internal API access with higher rate limits
- Automatic fallback to preview models on rate limit
- Paid vs free tier detection

**Antigravity:**

- Gemini 3 Pro with `thinkingLevel` support
- Gemini 2.5 Flash/Flash Lite with thinking mode
- Claude Opus 4.5 (thinking mode)
- Claude Sonnet 4.5 (thinking and non-thinking)
- GPT-OSS 120B Medium
- Thought signature caching for multi-turn conversations
- Tool hallucination prevention
- Quota baseline tracking with background refresh
- Parallel tool usage instruction injection
- **Quota Groups**: Models that share quota are automatically grouped:
  - Claude/GPT-OSS: `claude-sonnet-4-5`, `claude-opus-4-5`, `gpt-oss-120b-medium`
  - Gemini 3 Pro: `gemini-3-pro-high`, `gemini-3-pro-low`, `gemini-3-pro-preview`
  - Gemini 2.5 Flash: `gemini-2.5-flash`, `gemini-2.5-flash-thinking`, `gemini-2.5-flash-lite`
  - All models in a group deplete the group's quota equally, so within the Claude group it is most efficient to use only Opus and skip Sonnet and GPT-OSS.

**Qwen Code:**

- Dual auth (API key + OAuth Device Flow)
- `<think>` tag parsing as `reasoning_content`
- Tool schema cleaning

**iFlow:**

- Dual auth (API key + OAuth Authorization Code)
- Hybrid auth with separate API key fetch
- Tool schema cleaning

**NVIDIA NIM:**

- Dynamic model discovery
- DeepSeek thinking support

</details>

<details>
<summary><b>📝 Logging & Debugging</b></summary>

- **Per-request file logging** with `--enable-request-logging`
- **Unique request directories** with full transaction details
- **Streaming chunk capture** for debugging
- **Performance metadata** (duration, tokens, model used)
- **Provider-specific logs** for Qwen, iFlow, Antigravity

</details>

---

## Advanced Configuration

<details>
<summary><b>Environment Variables Reference</b></summary>

### Proxy Settings

| Variable | Description | Default |
|----------|-------------|---------|
| `PROXY_API_KEY` | Authentication key for your proxy | Required |
| `OAUTH_REFRESH_INTERVAL` | Token refresh check interval (seconds) | `600` |
| `SKIP_OAUTH_INIT_CHECK` | Skip interactive OAuth setup on startup | `false` |

### Per-Provider Settings

| Pattern | Description | Example |
|---------|-------------|---------|
| `<PROVIDER>_API_KEY_<N>` | API key for provider | `GEMINI_API_KEY_1` |
| `MAX_CONCURRENT_REQUESTS_PER_KEY_<PROVIDER>` | Concurrent request limit | `MAX_CONCURRENT_REQUESTS_PER_KEY_OPENAI=3` |
| `ROTATION_MODE_<PROVIDER>` | `balanced` or `sequential` | `ROTATION_MODE_GEMINI=sequential` |
| `IGNORE_MODELS_<PROVIDER>` | Blacklist (comma-separated, supports `*`) | `IGNORE_MODELS_OPENAI=*-preview*` |
| `WHITELIST_MODELS_<PROVIDER>` | Whitelist (overrides blacklist) | `WHITELIST_MODELS_GEMINI=gemini-2.5-pro` |

### Advanced Features

| Variable | Description |
|----------|-------------|
| `ROTATION_TOLERANCE` | `0.0`=deterministic, `3.0`=weighted random (default) |
| `CONCURRENCY_MULTIPLIER_<PROVIDER>_PRIORITY_<N>` | Concurrency multiplier per priority tier |
| `QUOTA_GROUPS_<PROVIDER>_<GROUP>` | Models sharing quota limits |
| `OVERRIDE_TEMPERATURE_ZERO` | `remove` or `set` to prevent tool hallucination |
| `GEMINI_CLI_QUOTA_REFRESH_INTERVAL` | Quota baseline refresh interval in seconds (default: 300) |
| `ANTIGRAVITY_QUOTA_REFRESH_INTERVAL` | Quota baseline refresh interval in seconds (default: 300) |

</details>

<details>
<summary><b>Model Filtering (Whitelists & Blacklists)</b></summary>

Control which models are exposed through your proxy.

### Blacklist Only

```env
# Hide all preview models
IGNORE_MODELS_OPENAI="*-preview*"
```

### Pure Whitelist Mode

```env
# Block all, then allow specific models
IGNORE_MODELS_GEMINI="*"
WHITELIST_MODELS_GEMINI="gemini-2.5-pro,gemini-2.5-flash"
```

### Exemption Mode

```env
# Block preview models, but allow one specific preview
IGNORE_MODELS_OPENAI="*-preview*"
WHITELIST_MODELS_OPENAI="gpt-4o-2024-08-06-preview"
```

**Logic order:** Whitelist check → Blacklist check → Default allow
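Expressed as a small illustrative sketch (not the proxy's actual code), with shell-style wildcards via Python's `fnmatch`:

```python
from fnmatch import fnmatch

def is_exposed(model: str, whitelist: list[str], blacklist: list[str]) -> bool:
    """Illustrative re-implementation of the documented order:
    whitelist check -> blacklist check -> default allow."""
    if any(fnmatch(model, pattern) for pattern in whitelist):
        return True   # whitelist always wins (exemption)
    if any(fnmatch(model, pattern) for pattern in blacklist):
        return False  # blacklisted and not exempted
    return True       # default: allow

# Exemption mode from the example above
blacklist = ["*-preview*"]
whitelist = ["gpt-4o-2024-08-06-preview"]
print(is_exposed("gpt-4o-2024-08-06-preview", whitelist, blacklist))  # True
print(is_exposed("gpt-4.5-preview", whitelist, blacklist))            # False
print(is_exposed("gpt-4o", whitelist, blacklist))                     # True
```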
allow\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eConcurrency \u0026 Rotation Settings\u003c/b\u003e\u003c/summary\u003e\n\n### Concurrency Limits\n\n```env\n# Allow 3 concurrent requests per OpenAI key\nMAX_CONCURRENT_REQUESTS_PER_KEY_OPENAI=3\n\n# Default is 1 (no concurrency)\nMAX_CONCURRENT_REQUESTS_PER_KEY_GEMINI=1\n```\n\n### Rotation Modes\n\n```env\n# balanced (default): Distribute load evenly - best for per-minute rate limits\nROTATION_MODE_OPENAI=balanced\n\n# sequential: Use until exhausted - best for daily/weekly quotas\nROTATION_MODE_GEMINI=sequential\n```\n\n### Priority Multipliers\n\nPaid credentials can handle more concurrent requests:\n\n```env\n# Priority 1 (paid ultra): 10x concurrency\nCONCURRENCY_MULTIPLIER_ANTIGRAVITY_PRIORITY_1=10\n\n# Priority 2 (standard paid): 3x\nCONCURRENCY_MULTIPLIER_ANTIGRAVITY_PRIORITY_2=3\n```\n\n### Model Quota Groups\n\nModels sharing quota limits:\n\n```env\n# Claude models share quota - when one hits limit, both cool down\nQUOTA_GROUPS_ANTIGRAVITY_CLAUDE=\"claude-sonnet-4-5,claude-opus-4-5\"\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eTimeout Configuration\u003c/b\u003e\u003c/summary\u003e\n\nFine-grained control over HTTP timeouts:\n\n```env\nTIMEOUT_CONNECT=30              # Connection establishment\nTIMEOUT_WRITE=30                # Request body send\nTIMEOUT_POOL=60                 # Connection pool acquisition\nTIMEOUT_READ_STREAMING=180      # Between streaming chunks (3 min)\nTIMEOUT_READ_NON_STREAMING=600  # Full response wait (10 min)\n```\n\n**Recommendations:**\n\n- Long thinking tasks: Increase `TIMEOUT_READ_STREAMING` to 300-360s\n- Unstable network: Increase `TIMEOUT_CONNECT` to 60s\n- Large outputs: Increase `TIMEOUT_READ_NON_STREAMING` to 900s+\n\n\u003c/details\u003e\n\n---\n\n## OAuth Providers\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eGemini CLI\u003c/b\u003e\u003c/summary\u003e\n\nUses Google OAuth to access internal Gemini endpoints with higher rate limits.\n\n**Setup:**\n\n1. Run `python -m rotator_library.credential_tool`\n2. Select \"Add OAuth Credential\" → \"Gemini CLI\"\n3. Complete browser authentication\n4. Credentials saved to `oauth_creds/gemini_cli_oauth_1.json`\n\n**Features:**\n\n- Zero-config project discovery\n- Automatic free-tier project onboarding\n- Paid vs free tier detection\n- Smart fallback on rate limits\n- Quota baseline tracking with background refresh (accurate remaining quota estimates)\n- Sequential rotation mode (uses credentials until quota exhausted)\n\n**Quota Groups:** Models that share quota are automatically grouped:\n- **Pro**: `gemini-2.5-pro`, `gemini-3-pro-preview`\n- **2.5-Flash**: `gemini-2.0-flash`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`\n- **3-Flash**: `gemini-3-flash-preview`\n\nAll models in a group deplete the shared quota equally. 
</details>

---

## OAuth Providers

<details>
<summary><b>Gemini CLI</b></summary>

Uses Google OAuth to access internal Gemini endpoints with higher rate limits.

**Setup:**

1. Run `python -m rotator_library.credential_tool`
2. Select "Add OAuth Credential" → "Gemini CLI"
3. Complete browser authentication
4. Credentials saved to `oauth_creds/gemini_cli_oauth_1.json`

**Features:**

- Zero-config project discovery
- Automatic free-tier project onboarding
- Paid vs free tier detection
- Smart fallback on rate limits
- Quota baseline tracking with background refresh (accurate remaining quota estimates)
- Sequential rotation mode (uses credentials until quota exhausted)

**Quota Groups:** Models that share quota are automatically grouped:

- **Pro**: `gemini-2.5-pro`, `gemini-3-pro-preview`
- **2.5-Flash**: `gemini-2.0-flash`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
- **3-Flash**: `gemini-3-flash-preview`

All models in a group deplete the shared quota equally, and quota windows are 24 hours per model.

**Environment Variables (for stateless deployment):**

Single credential (legacy):

```env
GEMINI_CLI_ACCESS_TOKEN="ya29.your-access-token"
GEMINI_CLI_REFRESH_TOKEN="1//your-refresh-token"
GEMINI_CLI_EXPIRY_DATE="1234567890000"
GEMINI_CLI_EMAIL="your-email@gmail.com"
GEMINI_CLI_PROJECT_ID="your-gcp-project-id"  # Optional
GEMINI_CLI_TIER="standard-tier"  # Optional: standard-tier or free-tier
```

Multiple credentials (insert a numbered `_N_` segment after the provider prefix, where N is 1, 2, 3...):

```env
GEMINI_CLI_1_ACCESS_TOKEN="ya29.first-token"
GEMINI_CLI_1_REFRESH_TOKEN="1//first-refresh"
GEMINI_CLI_1_EXPIRY_DATE="1234567890000"
GEMINI_CLI_1_EMAIL="first@gmail.com"
GEMINI_CLI_1_PROJECT_ID="project-1"
GEMINI_CLI_1_TIER="standard-tier"

GEMINI_CLI_2_ACCESS_TOKEN="ya29.second-token"
GEMINI_CLI_2_REFRESH_TOKEN="1//second-refresh"
GEMINI_CLI_2_EXPIRY_DATE="1234567890000"
GEMINI_CLI_2_EMAIL="second@gmail.com"
GEMINI_CLI_2_PROJECT_ID="project-2"
GEMINI_CLI_2_TIER="free-tier"
```

**Feature Toggles:**

```env
GEMINI_CLI_QUOTA_REFRESH_INTERVAL=300  # Quota refresh interval in seconds (default: 300 = 5 min)
```
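For illustration, numbered variables like these group back into whole credentials by index; the sketch below shows the naming scheme in action (the library's real auto-discovery logic may differ):

```python
import os
import re
from collections import defaultdict

# Illustrative only: group GEMINI_CLI_<N>_* variables into per-credential dicts.
FIELD = re.compile(
    r"^GEMINI_CLI_(\d+)_(ACCESS_TOKEN|REFRESH_TOKEN|EXPIRY_DATE|EMAIL|PROJECT_ID|TIER)$"
)

credentials: dict[str, dict[str, str]] = defaultdict(dict)
for name, value in os.environ.items():
    if match := FIELD.match(name):
        index, field = match.groups()
        credentials[index][field.lower()] = value

for index in sorted(credentials, key=int):
    cred = credentials[index]
    print(f"credential {index}: {cred.get('email', '<no email>')} ({cred.get('tier', 'unknown tier')})")
```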
</details>

<details>
<summary><b>Antigravity (Gemini 3 + Claude Opus 4.5)</b></summary>

Access Google's internal Antigravity API for cutting-edge models.

**Supported Models:**

- **Gemini 3 Pro** — with `thinkingLevel` support (low/high)
- **Gemini 2.5 Flash** — with thinking mode support
- **Gemini 2.5 Flash Lite** — configurable thinking budget
- **Claude Opus 4.5** — Anthropic's most powerful model (thinking mode only)
- **Claude Sonnet 4.5** — supports both thinking and non-thinking modes
- **GPT-OSS 120B** — OpenAI-compatible model

**Setup:**

1. Run `python -m rotator_library.credential_tool`
2. Select "Add OAuth Credential" → "Antigravity"
3. Complete browser authentication

**Advanced Features:**

- Thought signature caching for multi-turn conversations
- Tool hallucination prevention via parameter signature injection
- Automatic thinking block sanitization for Claude
- Credential prioritization (paid resets every 5 hours, free weekly)
- Quota baseline tracking with background refresh (accurate remaining quota estimates)
- Parallel tool usage instruction injection for Claude

**Environment Variables:**

```env
ANTIGRAVITY_ACCESS_TOKEN="ya29.your-access-token"
ANTIGRAVITY_REFRESH_TOKEN="1//your-refresh-token"
ANTIGRAVITY_EXPIRY_DATE="1234567890000"
ANTIGRAVITY_EMAIL="your-email@gmail.com"

# Feature toggles
ANTIGRAVITY_ENABLE_SIGNATURE_CACHE=true
ANTIGRAVITY_GEMINI3_TOOL_FIX=true
ANTIGRAVITY_QUOTA_REFRESH_INTERVAL=300  # Quota refresh interval (seconds)
ANTIGRAVITY_PARALLEL_TOOL_INSTRUCTION_CLAUDE=true  # Parallel tool instruction for Claude
```

> **Note:** Gemini 3 models require a paid-tier Google Cloud project.

</details>

<details>
<summary><b>Qwen Code</b></summary>

Uses OAuth Device Flow for Qwen/Dashscope APIs.

**Setup:**

1. Run the credential tool
2. Select "Add OAuth Credential" → "Qwen Code"
3. Enter the code displayed in your browser
4. Or add an API key directly: `QWEN_CODE_API_KEY_1="your-key"`

**Features:**

- Dual auth (API key or OAuth)
- `<think>` tag parsing as `reasoning_content`
- Automatic tool schema cleaning
- Custom models via `QWEN_CODE_MODELS` env var

</details>

<details>
<summary><b>iFlow</b></summary>

Uses OAuth Authorization Code flow with a local callback server.

**Setup:**

1. Run the credential tool
2. Select "Add OAuth Credential" → "iFlow"
3. Complete browser authentication (callback on port 11451)
4. Or add an API key directly: `IFLOW_API_KEY_1="sk-your-key"`

**Features:**

- Dual auth (API key or OAuth)
- Hybrid auth (OAuth token fetches separate API key)
- Automatic tool schema cleaning
- Custom models via `IFLOW_MODELS` env var

</details>

<details>
<summary><b>Stateless Deployment (Export to Environment Variables)</b></summary>

For platforms without file persistence (Railway, Render, Vercel):

1. **Set up credentials locally:**

   ```bash
   python -m rotator_library.credential_tool
   # Complete OAuth flows
   ```

2. **Export to environment variables:**

   ```bash
   python -m rotator_library.credential_tool
   # Select "Export [Provider] to .env"
   ```

3. **Copy generated variables to your platform:**
   The tool creates files like `gemini_cli_credential_1.env` containing all necessary variables.

4. **Set `SKIP_OAUTH_INIT_CHECK=true`** to skip interactive validation on startup.

</details>

<details>
<summary><b>OAuth Callback Port Configuration</b></summary>

Customize OAuth callback ports if the defaults conflict:

| Provider    | Default Port | Environment Variable     |
| ----------- | ------------ | ------------------------ |
| Gemini CLI  | 8085         | `GEMINI_CLI_OAUTH_PORT`  |
| Antigravity | 51121        | `ANTIGRAVITY_OAUTH_PORT` |
| iFlow       | 11451        | `IFLOW_OAUTH_PORT`       |

</details>

---

## Deployment

<details>
<summary><b>Command-Line Arguments</b></summary>

```bash
python src/proxy_app/main.py [OPTIONS]

Options:
  --host TEXT                Host to bind (default: 0.0.0.0)
  --port INTEGER             Port to run on (default: 8000)
  --enable-request-logging   Enable detailed per-request logging
  --add-credential           Launch interactive credential setup tool
```

**Examples:**

```bash
# Run on custom port
python src/proxy_app/main.py --host 127.0.0.1 --port 9000

# Run with logging
python src/proxy_app/main.py --enable-request-logging

# Add credentials without starting proxy
python src/proxy_app/main.py --add-credential
```

</details>

<details>
<summary><b>Render / Railway / Vercel</b></summary>

See the [Deployment Guide](Deployment%20guide.md) for complete instructions.

**Quick Setup:**

1. Fork the repository
2. Create a `.env` file with your credentials
3. Create a new Web Service pointing to your repo
4. Set build command: `pip install -r requirements.txt`
5. Set start command: `uvicorn src.proxy_app.main:app --host 0.0.0.0 --port $PORT`
6. Upload `.env` as a secret file

**OAuth Credentials:**
Export OAuth credentials to environment variables using the credential tool, then add them to your platform's environment settings.

</details>
<details>
<summary><b>Docker</b></summary>

The proxy is available as a multi-architecture Docker image (amd64/arm64) from GitHub Container Registry.

**Quick Start with Docker Compose:**

```bash
# 1. Create your .env file with PROXY_API_KEY and provider keys
cp .env.example .env
nano .env

# 2. Create key_usage.json file (required before first run)
touch key_usage.json

# 3. Start the proxy
docker compose up -d

# 4. Check logs
docker compose logs -f
```

> **Important:** You must create `key_usage.json` before running Docker Compose. If this file doesn't exist on the host, Docker will create it as a directory instead of a file, causing the container to fail.

**Manual Docker Run:**

```bash
# Create key_usage.json if it doesn't exist
touch key_usage.json

docker run -d \
  --name llm-api-proxy \
  --restart unless-stopped \
  -p 8000:8000 \
  -v $(pwd)/.env:/app/.env:ro \
  -v $(pwd)/oauth_creds:/app/oauth_creds \
  -v $(pwd)/logs:/app/logs \
  -v $(pwd)/key_usage.json:/app/key_usage.json \
  -e SKIP_OAUTH_INIT_CHECK=true \
  -e PYTHONUNBUFFERED=1 \
  ghcr.io/mirrowel/llm-api-key-proxy:latest
```

**Development with Local Build:**

```bash
# Build and run locally
docker compose -f docker-compose.dev.yml up -d --build
```

**Volume Mounts:**

| Path             | Purpose                                |
| ---------------- | -------------------------------------- |
| `.env`           | Configuration and API keys (read-only) |
| `oauth_creds/`   | OAuth credential files (persistent)    |
| `logs/`          | Request logs and detailed logging      |
| `key_usage.json` | Usage statistics persistence           |

**Image Tags:**

| Tag                     | Description                                |
| ----------------------- | ------------------------------------------ |
| `latest`                | Latest stable from `main` branch           |
| `dev-latest`            | Latest from `dev` branch                   |
| `YYYYMMDD-HHMMSS-<sha>` | Specific version with timestamp and commit |

**OAuth with Docker:**

For OAuth providers (Antigravity, Gemini CLI, etc.), you must authenticate locally first:

1. Run `python -m rotator_library.credential_tool` on your local machine
2. Complete OAuth flows in browser
3. Either:
   - Mount the `oauth_creds/` directory into the container, or
   - Export credentials to `.env` using the export option

</details>

<details>
<summary><b>Custom VPS / Systemd</b></summary>

**Option 1: Authenticate locally, deploy credentials**

1. Complete OAuth flows on your local machine
2. Export to environment variables
3. Deploy `.env` to your server

**Option 2: SSH Port Forwarding**

```bash
# Forward callback ports through SSH
ssh -L 51121:localhost:51121 -L 8085:localhost:8085 user@your-vps

# Then run credential tool on the VPS
```

**Systemd Service:**

```ini
[Unit]
Description=LLM API Key Proxy
After=network.target

[Service]
Type=simple
WorkingDirectory=/path/to/LLM-API-Key-Proxy
ExecStart=/path/to/python -m uvicorn src.proxy_app.main:app --host 0.0.0.0 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target
```

See [VPS Deployment](Deployment%20guide.md#appendix-deploying-to-a-custom-vps) for the complete guide.

</details>
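Whichever route you deploy with, a quick smoke test confirms the proxy is reachable and that one request can round-trip through a configured provider. A minimal sketch, assuming a Gemini key is configured (swap in any `provider/model` you have set up):

```python
import requests

BASE_URL = "http://127.0.0.1:8000"  # substitute your deployment URL
API_KEY = "your-proxy-api-key"

# GET / is the status check: it confirms the proxy is running
assert requests.get(BASE_URL + "/").ok

# One round trip through a configured provider
resp = requests.post(
    BASE_URL + "/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gemini/gemini-2.5-flash",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```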
---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| `401 Unauthorized` | Verify `PROXY_API_KEY` matches your `Authorization: Bearer` header exactly |
| `500 Internal Server Error` | Check provider key validity; enable `--enable-request-logging` for details |
| All keys on cooldown | All keys failed recently; check `logs/detailed_logs/` for upstream errors |
| Model not found | Verify format is `provider/model_name` (e.g., `gemini/gemini-2.5-flash`) |
| OAuth callback failed | Ensure callback port (8085, 51121, 11451) isn't blocked by firewall |
| Streaming hangs | Increase `TIMEOUT_READ_STREAMING`; check provider status |

**Detailed Logs:**

When `--enable-request-logging` is enabled, check `logs/detailed_logs/` for:

- `request.json` — Exact request payload
- `final_response.json` — Complete response or error
- `streaming_chunks.jsonl` — All SSE chunks received
- `metadata.json` — Performance metrics

---

## Documentation

| Document | Description |
|----------|-------------|
| [Technical Documentation](DOCUMENTATION.md) | Architecture, internals, provider implementations |
| [Library README](src/rotator_library/README.md) | Using the resilience library directly |
| [Deployment Guide](Deployment%20guide.md) | Hosting on Render, Railway, VPS |
| [.env.example](.env.example) | Complete environment variable reference |

---

## License

This project is dual-licensed:

- **Proxy Application** (`src/proxy_app/`) — [MIT License](src/proxy_app/LICENSE)
- **Resilience Library** (`src/rotator_library/`) — [LGPL-3.0](src/rotator_library/COPYING.LESSER)