https://github.com/skyline69/agcp
Extremely Lightweight Antigravity-Claude-Proxy
- Host: GitHub
- URL: https://github.com/skyline69/agcp
- Owner: skyline69
- License: MIT
- Created: 2026-02-05T12:29:09.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2026-02-10T01:53:21.000Z (about 1 month ago)
- Last Synced: 2026-02-10T06:55:53.204Z (about 1 month ago)
- Topics: anthropic, api, claude, cli, gemini, google-cloud, proxy, rust, terminal-ui
- Language: Rust
- Homepage: http://dasguney.com/agcp/
- Size: 1.2 MB
- Stars: 2
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- Funding: .github/FUNDING.yml
- License: LICENSE
- Security: SECURITY.md
- Agents: AGENTS.md
README
Extremely Lightweight Antigravity-Claude-Proxy
A blazing-fast Rust proxy that translates Anthropic's Claude API to Google's Cloud Code API. Use Claude and Gemini models through a single Anthropic-compatible endpoint.
## Why AGCP?
- **Lightweight** - Single binary, minimal dependencies, ~3MB compiled
- **Fast** - Written in Rust with async I/O, handles concurrent requests efficiently
- **Simple** - Just `agcp login` and you're ready; no config files needed
- **Powerful** - Multi-account support, response caching, smart load balancing
## Features
- **Anthropic API Compatible** - Works with Claude Code, OpenCode, Cursor, Cline, and other Anthropic API clients
- **Multiple Models** - Access Claude (Opus, Sonnet) and Gemini (Flash, Pro) through a single endpoint
- **Multi-Account Support** - Rotate between multiple Google accounts with smart load balancing
- **Response Caching** - Cache non-streaming responses to reduce quota usage
- **Native Gemini Routes** - `/v1beta/models`, `:generateContent`, `:streamGenerateContent`, `:countTokens`
- **OpenAI Media Endpoints** - `/v1/images/generations`, `/v1/images/edits`, `/v1/images/variations`, `/v1/audio/transcriptions`
- **Warmup Interception** - Detects Claude Code warmup pings on Anthropic endpoints and answers locally (`X-Warmup-Intercepted: true`)
- **Model Detect API** - `POST /v1/models/detect` returns mapped model and capability hints for CLI/TUI integrations
- **Startup Warmup + Quota Refresh** - Optional warmup and periodic quota sync for smarter account selection
- **Interactive TUI** - Beautiful terminal UI for monitoring and configuration
- **Background Daemon** - Runs quietly in the background
- **Client Identity Camouflage** - Spoofs the full Electron client fingerprint (User-Agent, `X-Client-Name/Version`, `X-Machine-Id`, `X-VSCode-SessionId`) to match the official Antigravity desktop app
## Quick Start
```bash
# Install from source
git clone https://github.com/skyline69/agcp
cd agcp
cargo build --release
# Login with Google OAuth
./target/release/agcp login
# Start the proxy (runs as background daemon)
./target/release/agcp
# Configure your AI tool to use http://127.0.0.1:8080
```
## Installation
### Homebrew (macOS/Linux)
```bash
brew tap skyline69/agcp
brew install agcp
```
### APT (Debian/Ubuntu)
```bash
# Add the GPG key and repository
curl -fsSL https://dasguney.com/apt/public.key | sudo gpg --dearmor -o /usr/share/keyrings/agcp.gpg
echo "deb [signed-by=/usr/share/keyrings/agcp.gpg] https://dasguney.com/apt stable main" | sudo tee /etc/apt/sources.list.d/agcp.list
# Install
sudo apt update
sudo apt install agcp
```
### DNF (Fedora/RHEL)
```bash
# Add the repository
sudo tee /etc/yum.repos.d/agcp.repo << 'EOF'
[agcp]
name=AGCP
baseurl=https://dasguney.com/rpm/packages
enabled=1
gpgcheck=1
gpgkey=https://dasguney.com/rpm/public.key
EOF
# Install
sudo dnf install agcp
```
### AUR (Arch Linux)
```bash
# With an AUR helper (e.g. yay, paru)
yay -S agcp-bin
# Or manually
git clone https://aur.archlinux.org/agcp-bin.git
cd agcp-bin
makepkg -si
```
### Nix
```bash
# Run directly
nix run github:skyline69/agcp
# Or install into profile
nix profile install github:skyline69/agcp
```
### From Source
```bash
git clone https://github.com/skyline69/agcp
cd agcp
cargo build --release
# Optional: Install to PATH
cp target/release/agcp ~/.local/bin/
```
### Shell Completions
```bash
# Bash
eval "$(agcp completions bash)"
# Zsh
eval "$(agcp completions zsh)"
# Fish
agcp completions fish > ~/.config/fish/completions/agcp.fish
```
## Usage
### Commands
| Command | Description |
|---------|-------------|
| `agcp` | Start the proxy server (daemon mode) |
| `agcp login` | Authenticate with Google OAuth |
| `agcp setup` | Configure AI tools to use AGCP |
| `agcp tui` | Launch interactive terminal UI |
| `agcp status` | Check if server is running |
| `agcp stop` | Stop the background server |
| `agcp restart` | Restart the background server |
| `agcp logs` | View server logs (follows by default) |
| `agcp config` | Show current configuration |
| `agcp accounts` | Manage multiple accounts |
| `agcp doctor` | Check configuration and connectivity |
| `agcp quota` | Show model quota usage |
| `agcp stats` | Show request statistics |
| `agcp test` | Verify setup works end-to-end |
### CLI Options
```bash
agcp [OPTIONS]
Options:
-p, --port Port to listen on (default: 8080)
--host Host to bind to (default: 127.0.0.1)
--network Listen on all interfaces (LAN access)
-f, --foreground Run in foreground instead of daemon mode
-d, --debug Enable debug logging
--fallback Enable model fallback on quota exhaustion
-h, --help Show help
-V, --version Show version
```
## Interactive TUI
AGCP includes a terminal UI for monitoring and configuration (`agcp tui`):
Features:
- **Overview** - Real-time request rate, response times, account status
- **Logs** - Syntax-highlighted log viewer with scrolling
- **Accounts** - Manage and monitor account quota (search with `/`, sort with `s`)
- **Config** - Edit configuration interactively
- **Mappings** - Configure model name mappings with presets and glob rules
- **Quota** - Visual quota usage with donut charts
## Model Aliases
For convenience, you can use these short aliases:
| Alias | Model |
|-------|-------|
| `opus` | claude-opus-4-6-thinking |
| `sonnet` | claude-sonnet-4-6 |
| `sonnet-thinking` | claude-sonnet-4-6 |
| `flash` | gemini-3-flash |
| `pro` | gemini-3-pro-high |
| `gpt-oss` | gpt-oss-120b-medium |
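As an illustrative sketch (assuming the proxy is already running on the default port), an alias can be passed directly as the `model` field on the OpenAI-compatible endpoint:

```shell
# Hypothetical request: assumes agcp is running on the default 127.0.0.1:8080.
# "flash" resolves to gemini-3-flash via the alias table above.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"flash","messages":[{"role":"user","content":"Say hi"}]}'
```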
## Supported Models
### Claude Models
- `claude-opus-4-6-thinking`
- `claude-sonnet-4-6`
### Gemini Models
- `gemini-3-flash`
- `gemini-3-pro-high`
- `gemini-3-pro-low`
### Other Models
- `gpt-oss-120b-medium`
## Configuration
AGCP uses a TOML configuration file at `~/.config/agcp/config.toml`:
```toml
[server]
port = 8080
host = "127.0.0.1"
# api_key = "your-optional-api-key"
request_timeout_secs = 300 # Per-request timeout (default: 5 minutes)
warmup_intercept_enabled = true # Intercept warmup pings on /v1/messages
warmup_intercept_max_text_len = 100
[logging]
debug = false
log_requests = false
[accounts]
strategy = "hybrid" # "sticky", "roundrobin", or "hybrid"
quota_threshold = 0.1 # Deprioritize accounts below 10% quota
fallback = false
warmup_on_startup = true
warmup_model = "gemini-3-flash"
quota_refresh_interval_secs = 900 # 0 = disabled
[cache]
enabled = true
ttl_seconds = 300
max_entries = 100
[cloudcode]
timeout_secs = 120
max_retries = 5
max_concurrent_requests = 1 # Max parallel requests to Cloud Code API
min_request_interval_ms = 500 # Minimum delay between requests (ms)
```
### Account Selection Strategies
- **`sticky`** - Use the same account until it hits quota limits
- **`roundrobin`** - Rotate through accounts evenly
- **`hybrid`** - Smart selection based on account health and quota (recommended)
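For example, to rotate evenly across all accounts, set the strategy in `~/.config/agcp/config.toml`:

```toml
[accounts]
strategy = "roundrobin"  # or "sticky" / "hybrid"
```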
## Multi-Account Management
AGCP supports multiple Google accounts for higher throughput:
```bash
# Add accounts
agcp login # Add first account
agcp login # Add another account
# View accounts
agcp accounts # List all accounts
# Manage accounts
agcp accounts disable # Disable an account
agcp accounts enable # Re-enable an account
agcp accounts remove # Remove an account
```
## API Endpoints
| Endpoint | Description |
|----------|-------------|
| `POST /v1/messages` | Anthropic Messages API (streaming and non-streaming) |
| `POST /v1/chat/completions` | OpenAI Chat Completions compatibility |
| `POST /v1/responses` | OpenAI Responses compatibility |
| `POST /v1/images/generations` | OpenAI image generation compatibility |
| `POST /v1/images/edits` | OpenAI image edit compatibility (multipart) |
| `POST /v1/images/variations` | OpenAI image variation compatibility (multipart) |
| `POST /v1/audio/transcriptions` | OpenAI audio transcription compatibility |
| `POST /v1/models/detect` | Detect mapped model and capability hints |
| `GET /v1beta/models` | Native Gemini model listing |
| `GET /v1beta/models/{model}` | Native Gemini model metadata |
| `POST /v1beta/models/{model}:countTokens` | Native Gemini token counting |
| `POST /v1beta/models/{model}:generateContent` | Native Gemini non-streaming generation |
| `POST /v1beta/models/{model}:streamGenerateContent` | Native Gemini streaming generation (SSE) |
| `POST /internal/warmup` | Trigger internal warmup and optional quota refresh |
| `GET /v1/models` | List available models |
| `GET /health` | Health check |
| `GET /stats` | Server and cache statistics |
### Gemini / Media API Examples
```bash
# Native Gemini: list models
curl -s http://127.0.0.1:8080/v1beta/models | jq
# Native Gemini: count tokens
curl -s http://127.0.0.1:8080/v1beta/models/gemini-3-flash:countTokens \
-H "Content-Type: application/json" \
-d '{"contents":[{"role":"user","parts":[{"text":"Hello"}]}]}'
# OpenAI Images compatibility
curl -s http://127.0.0.1:8080/v1/images/generations \
-H "Content-Type: application/json" \
-d '{"prompt":"a retro robot poster","response_format":"b64_json"}'
# OpenAI image edits compatibility
curl -s http://127.0.0.1:8080/v1/images/edits \
-F "image=@input.png" \
-F "prompt=add cinematic lighting" \
-F "response_format=b64_json"
# OpenAI image variations compatibility
curl -s http://127.0.0.1:8080/v1/images/variations \
-F "image=@input.png" \
-F "response_format=url"
# OpenAI Audio compatibility (multipart)
curl -s http://127.0.0.1:8080/v1/audio/transcriptions \
-F "file=@sample.wav" \
-F "model=gemini-3-flash" \
-F "response_format=json"
# Model detect helper
curl -s http://127.0.0.1:8080/v1/models/detect \
-H "Content-Type: application/json" \
-d '{"model":"flash"}'
```
## Response Caching
AGCP caches non-streaming responses to reduce API quota usage:
- Identical requests return cached responses instantly
- Streaming and thinking model responses are not cached
- Use `X-No-Cache: true` header to bypass cache
- Cache headers: `X-Cache: HIT`, `X-Cache: MISS`, `X-Cache: BYPASS`
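A quick way to observe this behavior (assuming the proxy is running locally) is to inspect the `X-Cache` response header with `curl -i`. The request body below is a hypothetical example; only the headers and endpoint come from the table above:

```shell
# Assumes agcp is running on the default 127.0.0.1:8080
BODY='{"model":"flash","max_tokens":32,"messages":[{"role":"user","content":"ping"}]}'

# First call should populate the cache (X-Cache: MISS); an identical second call should hit it
curl -si http://127.0.0.1:8080/v1/messages -H "Content-Type: application/json" -d "$BODY" | grep -i '^x-cache:'
curl -si http://127.0.0.1:8080/v1/messages -H "Content-Type: application/json" -d "$BODY" | grep -i '^x-cache:'

# Force a bypass (X-Cache: BYPASS)
curl -si http://127.0.0.1:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-No-Cache: true" \
  -d "$BODY" | grep -i '^x-cache:'
```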
## Configuring AI Tools
### Claude Code
```bash
agcp setup
```
Select "Claude Code" from the interactive menu, or manually add to `~/.claude/settings.json`:
```json
{
"apiBaseUrl": "http://127.0.0.1:8080"
}
```
### Other Tools
Point any Anthropic API-compatible tool to `http://127.0.0.1:8080/v1`.
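As a sanity check before wiring up a tool, a raw Anthropic-style Messages call can confirm the proxy answers (hypothetical request; assumes the proxy is running on the default port):

```shell
# Assumes agcp is running on the default 127.0.0.1:8080;
# "sonnet" is resolved via the alias table above.
curl -s http://127.0.0.1:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{"model":"sonnet","max_tokens":64,"messages":[{"role":"user","content":"Hello"}]}'
```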
## Troubleshooting
```bash
agcp doctor # Run diagnostic checks
agcp status # Quick status check
agcp logs # View logs
```
## Files
| Path | Description |
|------|-------------|
| `~/.config/agcp/config.toml` | Configuration file |
| `~/.config/agcp/accounts.json` | Account credentials |
| `~/.config/agcp/agcp.log` | Server logs |
| `~/.config/agcp/machineid` | Persistent machine UUID for client identity camouflage |
## License
MIT - See [LICENSE](LICENSE) for details.
---
Made with Rust