{"id":48212754,"url":"https://github.com/whtsky/copilot2api","last_synced_at":"2026-04-04T18:52:12.602Z","repository":{"id":343441898,"uuid":"1172571312","full_name":"whtsky/copilot2api","owner":"whtsky","description":"Lightweight Go proxy that exposes GitHub Copilot as OpenAI and Anthropic compatible API endpoints","archived":false,"fork":false,"pushed_at":"2026-04-01T16:19:09.000Z","size":134,"stargazers_count":3,"open_issues_count":1,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-01T16:23:48.378Z","etag":null,"topics":["anthropic-api","api-proxy","claude","copilot","github-copilot","go","llm","openai-api"],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/whtsky.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-04T13:11:52.000Z","updated_at":"2026-04-01T16:19:14.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/whtsky/copilot2api","commit_stats":null,"previous_names":["whtsky/copilot2api"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/whtsky/copilot2api","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/whtsky%2Fcopilot2api","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/whtsky%2Fcopilot2api/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/whtsky%2Fcopilot2api/release
s","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/whtsky%2Fcopilot2api/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/whtsky","download_url":"https://codeload.github.com/whtsky/copilot2api/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/whtsky%2Fcopilot2api/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31409470,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anthropic-api","api-proxy","claude","copilot","github-copilot","go","llm","openai-api"],"created_at":"2026-04-04T18:52:10.701Z","updated_at":"2026-04-04T18:52:12.593Z","avatar_url":"https://github.com/whtsky.png","language":"Go","readme":"# copilot2api\n\nA lightweight Go proxy that exposes GitHub Copilot as OpenAI-compatible, Anthropic-compatible, Gemini-compatible, and AmpCode-compatible API endpoints.\n\n## Features\n\n- **OpenAI API Compatible**: `/v1/chat/completions`, `/v1/models`, `/v1/embeddings`, `/v1/responses`\n- **Embeddings Support**: Native OpenAI-compatible `/v1/embeddings` endpoint\n- **Anthropic API Compatible**: `/v1/messages`\n- **Gemini API Compatible**: `/v1beta/models`, 
`/v1beta/models/{model}:generateContent`, `/v1beta/models/{model}:streamGenerateContent`, `/v1beta/models/{model}:countTokens`\n- **AmpCode Compatible**: `/amp/v1/*` routes for chat, `/api/provider/*` for provider-specific calls, management proxied to `ampcode.com`\n- **Streaming Support**: Full SSE streaming for both OpenAI and Anthropic formats\n- **Anthropic Routing**: Uses native `/v1/messages` when the model supports it, otherwise routes via `/responses` or `/chat/completions`\n- **Auto Authentication**: GitHub Device Flow OAuth with automatic token refresh\n- **Usage Monitoring**: Built-in `/usage` endpoint for quota tracking\n- **Models Cache**: 5-minute cache for `/v1/models` and Anthropic model capability lookups\n\n## Quick Start\n\n```bash\n# Build from source (requires Go 1.26+)\ngo build -o copilot2api .\n\n# Start the proxy\n./copilot2api\n```\n\nFirst run will prompt GitHub Device Flow authentication:\n\n```\n🔐 GitHub Authentication Required\nPlease visit: https://github.com/login/device\nEnter code: XXXX-XXXX\n\nWaiting for authorization...\n✅ Authentication successful!\n```\n\nServer starts on `http://127.0.0.1:7777` by default.\n\n## Security\n\n⚠️ **This proxy is designed for local development only.**\n\n- Does **not** implement API key validation — any request is accepted\n- Do not expose publicly — it becomes an open proxy consuming your Copilot quota\n- Credentials are stored in `~/.config/copilot2api/credentials.json`\n\n## Usage with Claude Code\n\nAdd to `~/.claude/settings.json`:\n\n```json\n{\n  \"env\": {\n    \"ANTHROPIC_BASE_URL\": \"http://127.0.0.1:7777\",\n    \"ANTHROPIC_API_KEY\": \"dummy\",\n    \"ANTHROPIC_MODEL\": \"claude-opus-4.6\",\n    \"ANTHROPIC_SMALL_FAST_MODEL\": \"claude-haiku-4.5\",\n    \"CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC\": \"1\"\n  },\n  \"permissions\": {\n    \"deny\": [\n      \"WebSearch\"\n    ]\n  }\n}\n```\n\n### 1M Context Window\n\ncopilot2api supports Claude 1M context models. 
When Claude Code sends the `anthropic-beta: context-1m-...` header, the proxy automatically appends `-1m` to the model ID (e.g. `claude-opus-4.6` → `claude-opus-4.6-1m`) so Copilot routes to the 1M variant.\n\nTo use it, select the 1M model variant in Claude Code via the `/model` command (e.g. `Opus (1M)`). Without this, Claude Code defaults to the standard 200K context window.\n\n## Usage with Codex\n\nAdd to `~/.codex/config.toml`:\n\n```toml\nmodel = \"gpt-5.3-codex\"\nmodel_provider = \"copilot2api\"\nmodel_reasoning_effort = \"high\"\nweb_search = \"disabled\"\n\n[model_providers.copilot2api]\nname = \"copilot2api\"\nbase_url = \"http://127.0.0.1:7777/v1\"\nwire_api = \"responses\"\napi_key = \"dummy\"\n```\n\n## Usage with Gemini CLI\n\nAdd to `~/.gemini/.env`:\n\n```env\nGOOGLE_GEMINI_BASE_URL=http://127.0.0.1:7777\nGEMINI_API_KEY=dummy\nGEMINI_MODEL=claude-opus-4.6-1m\n```\n\n## Usage with AmpCode\n\nSet the `AMP_URL` environment variable to point at copilot2api:\n\n```bash\nAMP_URL=http://127.0.0.1:7777/amp amp\n```\n\nOr add to `~/.config/amp/settings.json`:\n\n```json\n{\n  \"amp.url\": \"http://127.0.0.1:7777/amp\"\n}\n```\n\nChat completions, tool calls, and image input all route through Copilot API. 
Login and management routes (threads, telemetry) are proxied to `ampcode.com` — a free amp account is required for authentication.\n\n## Usage with curl\n\n```bash\n# OpenAI chat completion\ncurl http://localhost:7777/v1/chat/completions \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\"model\":\"gpt-5.3-codex\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello!\"}]}'\n\n# Anthropic message\ncurl http://localhost:7777/v1/messages \\\n  -H \"Content-Type: application/json\" \\\n  -H \"x-api-key: dummy\" \\\n  -d '{\"model\":\"claude-sonnet-4.6\",\"messages\":[{\"role\":\"user\",\"content\":\"Hello!\"}],\"max_tokens\":100}'\n\n# List models\ncurl http://localhost:7777/v1/models\n\n# Check usage/quota\ncurl http://localhost:7777/usage\n```\n\n\u003cdetails\u003e\n\u003csummary\u003eUsage with SDKs\u003c/summary\u003e\n\n### OpenAI Python SDK\n\n```python\nimport openai\n\nclient = openai.OpenAI(\n    api_key=\"dummy\",\n    base_url=\"http://127.0.0.1:7777/v1\"\n)\n\nresponse = client.chat.completions.create(\n    model=\"gpt-5.3-codex\",\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}]\n)\n```\n\n### Anthropic Python SDK\n\n```python\nimport anthropic\n\nclient = anthropic.Anthropic(\n    api_key=\"dummy\",\n    base_url=\"http://127.0.0.1:7777\"\n)\n\nmessage = client.messages.create(\n    model=\"claude-sonnet-4.6\",\n    max_tokens=1024,\n    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}]\n)\n```\n\n\u003c/details\u003e\n\n## API Endpoints\n\n| Endpoint | Method | Description |\n|----------|--------|-------------|\n| `/v1/chat/completions` | POST | OpenAI Chat Completions (streaming \u0026 non-streaming) |\n| `/v1/responses` | POST | OpenAI Responses API |\n| `/v1/models` | GET | List available models (5min cache) |\n| `/v1/embeddings` | POST | Generate embeddings (string or array input) |\n| `/v1/messages` | POST | Anthropic Messages API (streaming \u0026 non-streaming) |\n| `/v1beta/models` | GET | List Gemini-compatible models 
|\n| `/v1beta/models/{model}:generateContent` | POST | Gemini Generate Content |\n| `/v1beta/models/{model}:streamGenerateContent` | POST | Gemini Generate Content streaming SSE |\n| `/v1beta/models/{model}:countTokens` | POST | Gemini token counting estimate |\n| `/amp/v1/chat/completions` | POST | AmpCode chat completions (via Copilot API) |\n| `/amp/v1/models` | GET | AmpCode model listing |\n| `/api/provider/*` | POST | AmpCode provider-specific routes |\n| `/api/*` | ANY | AmpCode management proxy to ampcode.com |\n| `/usage` | GET | Copilot usage and quota info |\n\n## Configuration\n\n### CLI Flags\n\n```\n./copilot2api [options]\n\n  -host string       Server host (default \"127.0.0.1\")\n  -port int          Server port (default 7777)\n  -token-dir string  Token storage directory (default ~/.config/copilot2api)\n  -debug             Enable debug logging\n  -version           Show version and exit\n```\n\n### Environment Variables\n\nEnvironment variables are used as defaults when flags are not provided:\n\n| Variable | Description | Default |\n|----------|-------------|---------|\n| `COPILOT2API_HOST` | Server host | `127.0.0.1` |\n| `COPILOT2API_PORT` | Server port | `7777` |\n| `COPILOT2API_TOKEN_DIR` | Token storage directory | `~/.config/copilot2api` |\n| `COPILOT2API_DEBUG` | Enable debug logging (`true`/`false`, `1`/`0`) | `false` |\n\nCLI flags take precedence over environment variables.\n\n## Docker\n\n```bash\ndocker run -it -p 7777:7777 \\\n  -v ~/.config/copilot2api:/root/.config/copilot2api \\\n  ghcr.io/whtsky/copilot2api\n```\n\nThe Docker image defaults to `COPILOT2API_HOST=0.0.0.0` so port forwarding works out of the box. The volume mount persists your GitHub credentials across container restarts. 
First run will prompt for Device Flow authentication.\n\nTo use a custom port:\n\n```bash\ndocker run -it -p 8080:8080 \\\n  -v ~/.config/copilot2api:/root/.config/copilot2api \\\n  -e COPILOT2API_PORT=8080 \\\n  ghcr.io/whtsky/copilot2api\n```\n\n\u003e ⚠️ The Docker image listens on all interfaces by default. Only publish the port to `127.0.0.1` (e.g. `-p 127.0.0.1:7777:7777`) unless you know what you're doing.\n\n## How It Works\n\n1. Authenticates with GitHub via Device Flow OAuth\n2. Exchanges GitHub token for Copilot API token (auto-refreshes)\n3. Proxies OpenAI-format requests directly to the Copilot API\n4. Routes Anthropic Messages requests by model capabilities (native `/v1/messages`, translated `/responses`, or translated `/chat/completions`)\n5. Automatically detects the API endpoint from the token (Individual/Business/Enterprise)\n\n## Development\n\n```bash\ngo test ./...              # Run tests\ngo build -o copilot2api .  # Build\n```\n\n## License\n\nMIT\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwhtsky%2Fcopilot2api","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwhtsky%2Fcopilot2api","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwhtsky%2Fcopilot2api/lists"}