{"id":42307840,"url":"https://github.com/Fast-Editor/Lynkr","last_synced_at":"2026-02-18T09:01:18.969Z","repository":{"id":327493270,"uuid":"1109510791","full_name":"Fast-Editor/Lynkr","owner":"Fast-Editor","description":"Streamline your workflow with Lynkr, a CLI tool that acts as an HTTP proxy for efficient code interactions using Claude Code CLI.","archived":false,"fork":false,"pushed_at":"2026-02-12T12:00:34.000Z","size":1261,"stargazers_count":299,"open_issues_count":6,"forks_count":27,"subscribers_count":2,"default_branch":"main","last_synced_at":"2026-02-12T14:56:47.046Z","etag":null,"topics":["agents","ai","claude","claudecode","code-assistant","code-assistants","code-generation","databricks","developer-tools","llm","llmops","llms","mcp"],"latest_commit_sha":null,"homepage":"https://fast-editor.github.io/Lynkr/","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Fast-Editor.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATIONS.bib","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":null,"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"buy_me_a_coffee":"srinivasveera","thanks_dev":null,"custom":null}},"created_at":"2025-12-03T22:50:02.000Z","updated_at":"2026-02-11T23:29:14.000Z","dependencies_parsed_at":"2025-12-30T09:02:02.058Z","dependency_job_id":null,"html_url":"https://github.com/Fast-Editor/Lynkr","commit_stats":null,"previous_names":["vishalveerareddy123/lynkr"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/Fast-Editor/Lynkr","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Fast-Editor%2FLynkr","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Fast-Editor%2FLynkr/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Fast-Editor%2FLynkr/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Fast-Editor%2FLynkr/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Fast-Editor","download_url":"https://codeload.github.com/Fast-Editor/Lynkr/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Fast-Editor%2FLynkr/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29574065,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-18T08:38:15.585Z","status":"ssl_error","status_checked_at":"2026-02-18T08:38:14.917Z","response_time":162,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","claude","claudecode","code-assistant","code-assistants","code-generation","databricks","developer-tools","llm","llmops","llms","mcp"],"created_at":"2026-01-27T11:12:46.286Z","updated_at":"2026-02-18T09:01:18.963Z","avatar_url":"https://github.com/Fast-Editor.png","language":"JavaScript","readme":"# Lynkr - Run Cursor, Cline, Continue, OpenAI-Compatible Tools, and Claude Code on any model.\n## One universal LLM proxy for AI coding tools.\n\n[![npm version](https://img.shields.io/npm/v/lynkr.svg)](https://www.npmjs.com/package/lynkr)\n[![Homebrew Tap](https://img.shields.io/badge/homebrew-lynkr-brightgreen.svg)](https://github.com/vishalveerareddy123/homebrew-lynkr)\n[![License: Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/vishalveerareddy123/Lynkr)\n[![Databricks Supported](https://img.shields.io/badge/Databricks-Supported-orange)](https://www.databricks.com/)\n[![AWS Bedrock](https://img.shields.io/badge/AWS%20Bedrock-100%2B%20Models-FF9900)](https://aws.amazon.com/bedrock/)\n[![OpenAI Compatible](https://img.shields.io/badge/OpenAI-Compatible-412991)](https://openai.com/)\n[![Ollama Compatible](https://img.shields.io/badge/Ollama-Compatible-brightgreen)](https://ollama.ai/)\n[![llama.cpp Compatible](https://img.shields.io/badge/llama.cpp-Compatible-blue)](https://github.com/ggerganov/llama.cpp)\n\n### Use Case\n```\n        Cursor / Cline / Continue / Claude Code / ClawdBot / Codex / KiloCode\n                        ↓\n                       Lynkr\n                        ↓\n        Local LLMs | OpenRouter | Azure | Databricks | AWS Bedrock | Ollama | LMStudio | Gemini\n```\n---\n\n## Overview\n\nLynkr is a **self-hosted proxy server** that unlocks Claude Code CLI, Cursor IDE, and Codex CLI by enabling:\n\n- 🚀 **Any LLM Provider** - Databricks, AWS Bedrock (100+ models), OpenRouter (100+ models), Ollama (local), llama.cpp, Azure OpenAI, Azure Anthropic, OpenAI, LM Studio\n- 💰 **60-80% Cost Reduction** - Built-in token optimization with smart tool selection, prompt caching, and memory deduplication\n- 🔒 **100% Local/Private** - Run completely offline with Ollama or llama.cpp\n- 🌐 **Remote or Local** - Connect to providers on any IP/hostname (not limited to localhost)\n- 🎯 **Zero Code Changes** - Drop-in replacement for Anthropic's backend\n- 🏢 **Enterprise-Ready** - Circuit breakers, load shedding, Prometheus metrics, health checks\n\n**Perfect for:**\n- Developers who want provider flexibility and cost control\n- Enterprises needing self-hosted AI with observability\n- Privacy-focused teams requiring local model execution\n- Teams seeking 60-80% cost reduction through optimization\n\n---\n\n## Quick Start\n\n### Installation\n\n**Option 1: NPM Package (Recommended)**\n```bash\n# Install globally\nnpm install -g pino-pretty\nnpm install -g lynkr\n\nlynk start\n```\n
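\nOnce the server is running, you can smoke-test the proxy from Node before pointing any tool at it. A minimal sketch, assuming the default port 8081 and the standard OpenAI `chat/completions` path under the `http://localhost:8081/v1` base URL used throughout this README; the model name is whatever your configured provider serves:\n\n```javascript\n// smoke-test.mjs: run with: node smoke-test.mjs (Node 20+ ships global fetch)\n// Assumes Lynkr is listening on its default port 8081.\nconst response = await fetch('http://localhost:8081/v1/chat/completions', {\n  method: 'POST',\n  headers: {\n    'Content-Type': 'application/json',\n    Authorization: 'Bearer sk-lynkr', // any non-empty key is accepted\n  },\n  body: JSON.stringify({\n    model: 'claude-3.5-sonnet', // substitute the model your provider serves\n    messages: [{ role: 'user', content: 'Reply with the single word: ok' }],\n  }),\n});\nconsole.log((await response.json()).choices[0].message.content);\n```\n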
\n**Option 2: Git Clone**\n```bash\n# Clone repository\ngit clone https://github.com/vishalveerareddy123/Lynkr.git\ncd Lynkr\n\n# Install dependencies\nnpm install\n\n# Create .env from example\ncp .env.example .env\n\n# Edit .env with your provider credentials\nnano .env\n\n# Start server\nnpm start\n```\n\n**Node.js Compatibility:**\n- **Node 20-24**: Full support with all features\n- **Node 25+**: Full support (native modules auto-rebuild, babel fallback for code parsing)\n\n**Option 3: Docker**\n```bash\ndocker-compose up -d\n```\n\n---\n\n## Supported Providers\n\nLynkr supports **10+ LLM providers**:\n\n| Provider | Type | Models | Cost | Privacy |\n|----------|------|--------|------|---------|\n| **AWS Bedrock** | Cloud | 100+ (Claude, Titan, Llama, Mistral, etc.) | $$-$$$ | Cloud |\n| **Databricks** | Cloud | Claude Sonnet 4.5, Opus 4.5 | $$$ | Cloud |\n| **OpenRouter** | Cloud | 100+ (GPT, Claude, Llama, Gemini, etc.) | $-$$ | Cloud |\n| **Ollama** | Local | Unlimited (free, offline) | **FREE** | 🔒 100% Local |\n| **llama.cpp** | Local | GGUF models | **FREE** | 🔒 100% Local |\n| **Azure OpenAI** | Cloud | GPT-4o, GPT-5, o1, o3 | $$$ | Cloud |\n| **Azure Anthropic** | Cloud | Claude models | $$$ | Cloud |\n| **OpenAI** | Cloud | GPT-4o, o1, o3 | $$$ | Cloud |\n| **LM Studio** | Local | Local models with GUI | **FREE** | 🔒 100% Local |\n| **MLX OpenAI Server** | Local | Apple Silicon (M1/M2/M3/M4) | **FREE** | 🔒 100% Local |\n\n📖 **[Full Provider Configuration Guide](documentation/providers.md)**\n\n---\n\n## Claude Code Integration\n\nConfigure Claude Code CLI to use Lynkr:\n\n```bash\n# Set Lynkr as backend\nexport ANTHROPIC_BASE_URL=http://localhost:8081\nexport ANTHROPIC_API_KEY=dummy\n\n# Run Claude Code\nclaude \"Your prompt here\"\n```\n\nThat's it! Claude Code now uses your configured provider.\n\n📖 **[Detailed Claude Code Setup](documentation/claude-code-cli.md)**\n
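\nClaude Code talks to Lynkr in Anthropic's Messages wire format. If you want to verify that path without the CLI, here is a sketch; it assumes Lynkr mirrors Anthropic's public `/v1/messages` route and headers on the same base URL:\n\n```javascript\n// check-messages.mjs: a sketch of the Anthropic-format path Claude Code uses.\n// Assumes Lynkr mirrors Anthropic's public /v1/messages wire format.\nconst res = await fetch('http://localhost:8081/v1/messages', {\n  method: 'POST',\n  headers: {\n    'Content-Type': 'application/json',\n    'x-api-key': 'dummy',              // matches ANTHROPIC_API_KEY=dummy above\n    'anthropic-version': '2023-06-01', // standard Anthropic version header\n  },\n  body: JSON.stringify({\n    model: 'claude-sonnet-4-5', // Lynkr maps this to your configured provider\n    max_tokens: 64,\n    messages: [{ role: 'user', content: 'Say ok' }],\n  }),\n});\nconsole.log((await res.json()).content[0].text);\n```\n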
\n---\n\n## Cursor Integration\n\nConfigure Cursor IDE to use Lynkr:\n\n1. **Open Cursor Settings**\n   - Mac: `Cmd+,` | Windows/Linux: `Ctrl+,`\n   - Navigate to: **Features** → **Models**\n\n2. **Configure OpenAI API Settings**\n   - **API Key**: `sk-lynkr` (any non-empty value)\n   - **Base URL**: `http://localhost:8081/v1`\n   - **Model**: `claude-3.5-sonnet` (or your provider's model)\n\n3. **Test It**\n   - Chat: `Cmd+L` / `Ctrl+L`\n   - Inline edits: `Cmd+K` / `Ctrl+K`\n   - @Codebase search: Requires [embeddings setup](documentation/embeddings.md)\n\n📖 **[Full Cursor Setup Guide](documentation/cursor-integration.md)** | **[Embeddings Configuration](documentation/embeddings.md)**\n\n---\n\n## Codex CLI Integration\n\nConfigure [OpenAI Codex CLI](https://github.com/openai/codex) to use Lynkr as its backend.\n\n### Option 1: Environment Variables (Quick Start)\n\n```bash\nexport OPENAI_BASE_URL=http://localhost:8081/v1\nexport OPENAI_API_KEY=dummy\n\ncodex\n```\n\n### Option 2: Config File (Recommended)\n\nEdit `~/.codex/config.toml`:\n\n```toml\n# Set Lynkr as the default provider\nmodel_provider = \"lynkr\"\nmodel = \"gpt-4o\"\n\n# Define the Lynkr provider\n[model_providers.lynkr]\nname = \"Lynkr Proxy\"\nbase_url = \"http://localhost:8081/v1\"\nwire_api = \"responses\"\n\n# Optional: Trust your project directories\n[projects.\"/path/to/your/project\"]\ntrust_level = \"trusted\"\n```\n\n### Configuration Options\n\n| Option | Description | Example |\n|--------|-------------|---------|\n| `model_provider` | Active provider name | `\"lynkr\"` |\n| `model` | Model to request (mapped by Lynkr) | `\"gpt-4o\"`, `\"claude-sonnet-4-5\"` |\n| `base_url` | Lynkr endpoint | `\"http://localhost:8081/v1\"` |\n| `wire_api` | API format (`responses` or `chat`) | `\"responses\"` |\n| `trust_level` | Project trust (`trusted`, `sandboxed`) | `\"trusted\"` |\n\n### Remote Lynkr Server\n\nTo connect Codex to a remote Lynkr instance:\n\n```toml\n[model_providers.lynkr-remote]\nname = \"Remote Lynkr\"\nbase_url = \"http://192.168.1.100:8081/v1\"\nwire_api = \"responses\"\n```\n\n### Troubleshooting\n\n| Issue | Solution |\n|-------|----------|\n| Same response for all queries | Disable semantic cache: `SEMANTIC_CACHE_ENABLED=false` |\n| Tool calls not executing | Increase threshold: `POLICY_TOOL_LOOP_THRESHOLD=15` |\n| Slow first request | Keep Ollama loaded: `OLLAMA_KEEP_ALIVE=24h` |\n| Connection refused | Ensure Lynkr is running: `npm start` |\n\n\u003e **Note:** Codex uses the OpenAI Responses API format. Lynkr automatically converts this to your configured provider's format.\n
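\nBecause Codex speaks the Responses API (`wire_api = \"responses\"` above), you can exercise that path directly. A sketch, under the assumption that Lynkr accepts OpenAI's public `/v1/responses` request shape implied by that setting:\n\n```javascript\n// check-responses.mjs: a sketch of the Responses-API path Codex uses.\n// Assumes Lynkr accepts OpenAI's public /v1/responses request shape.\nconst res = await fetch('http://localhost:8081/v1/responses', {\n  method: 'POST',\n  headers: {\n    'Content-Type': 'application/json',\n    Authorization: 'Bearer dummy', // matches OPENAI_API_KEY=dummy above\n  },\n  body: JSON.stringify({\n    model: 'gpt-4o',         // mapped by Lynkr to your configured provider\n    input: 'Reply with: ok', // the Responses API takes input, not messages\n  }),\n});\nconsole.log(JSON.stringify(await res.json(), null, 2)); // inspect the raw shape\n```\n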
\n---\n\n## ClawdBot Integration\n\nLynkr supports [ClawdBot](https://github.com/openclaw/openclaw) via its OpenAI-compatible API. ClawdBot users can route requests through Lynkr to access any supported provider.\n\n**Configuration in ClawdBot:**\n\n| Setting | Value |\n|---------|-------|\n| Model/auth provider | `Copilot` |\n| Copilot auth method | `Copilot Proxy (local)` |\n| Copilot Proxy base URL | `http://localhost:8081/v1` |\n| Model IDs | Any model your Lynkr provider supports |\n\n**Available models** (depending on your Lynkr provider):\n`gpt-5.2`, `gpt-5.1-codex`, `claude-opus-4.5`, `claude-sonnet-4.5`, `claude-haiku-4.5`, `gemini-3-pro`, `gemini-3-flash`, and more.\n\n\u003e 🌐 **Remote Support**: ClawdBot can connect to Lynkr on any machine - use any IP/hostname in the Proxy base URL (e.g., `http://192.168.1.100:8081/v1` or `http://gpu-server:8081/v1`).\n\n---\n\n## Lynkr also supports Cline, Continue.dev, and other OpenAI-compatible tools.\n\n---\n\n## Documentation\n\n### Getting Started\n- 📦 **[Installation Guide](documentation/installation.md)** - Detailed installation for all methods\n- ⚙️ **[Provider Configuration](documentation/providers.md)** - Complete setup for all 10+ providers\n- 🎯 **[Quick Start Examples](documentation/installation.md#quick-start-examples)** - Copy-paste configs\n\n### IDE \u0026 CLI Integration\n- 🖥️ **[Claude Code CLI Setup](documentation/claude-code-cli.md)** - Connect Claude Code CLI\n- 🤖 **[Codex CLI Setup](documentation/codex-cli.md)** - Configure OpenAI Codex CLI with config.toml\n- 🎨 **[Cursor IDE Setup](documentation/cursor-integration.md)** - Full Cursor integration with troubleshooting\n- 🔍 **[Embeddings Guide](documentation/embeddings.md)** - Enable @Codebase semantic search (4 options: Ollama, llama.cpp, OpenRouter, OpenAI)\n\n### Features \u0026 Capabilities\n- ✨ **[Core Features](documentation/features.md)** - Architecture, request flow, format conversion\n- 🧠 **[Memory System](documentation/memory-system.md)** - Titans-inspired long-term memory\n- 🗃️ **[Semantic Cache](#semantic-cache)** - Cache responses for similar prompts\n- 💰 **[Token Optimization](documentation/token-optimization.md)** - 60-80% cost reduction strategies\n- 🔧 **[Tools \u0026 Execution](documentation/tools.md)** - Tool calling, execution modes, custom tools\n\n### Deployment \u0026 Operations\n- 🐳 **[Docker Deployment](documentation/docker.md)** - docker-compose setup with GPU support\n- 🏭 **[Production Hardening](documentation/production.md)** - Circuit breakers, load shedding, metrics\n- 📊 **[API Reference](documentation/api.md)** - All endpoints and formats\n\n### Support\n- 🔧 **[Troubleshooting](documentation/troubleshooting.md)** - Common issues and solutions\n- ❓ **[FAQ](documentation/faq.md)** - Frequently asked questions\n- 🧪 **[Testing Guide](documentation/testing.md)** - Running tests and validation\n\n---\n\n## External Resources\n\n- 📚 **[DeepWiki Documentation](https://deepwiki.com/vishalveerareddy123/Lynkr)** - AI-powered documentation search\n- 💬 **[GitHub Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Community Q\u0026A\n- 🐛 **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Bug reports and feature requests\n- 📦 **[NPM Package](https://www.npmjs.com/package/lynkr)** - Official npm package\n\n---\n\n## Key Features Highlights\n\n- ✅ **Multi-Provider Support** - 10+ providers including local (Ollama, llama.cpp) and cloud (Bedrock, Databricks, OpenRouter)\n- ✅ **60-80% Cost Reduction** - Token optimization with smart tool selection, prompt caching, memory deduplication\n- ✅ **100% Local Option** - Run completely offline with Ollama/llama.cpp (zero cloud dependencies)\n- ✅ **OpenAI Compatible** - Works with Cursor IDE, Continue.dev, and any OpenAI-compatible client\n- ✅ **Embeddings Support** - 4 options for @Codebase search: Ollama (local), llama.cpp (local), OpenRouter, OpenAI\n- ✅ **MCP Integration** - Automatic Model Context Protocol server discovery and orchestration\n- ✅ **Enterprise Features** - Circuit breakers, load shedding, Prometheus metrics, K8s health checks\n- ✅ **Streaming Support** - Real-time token streaming for all providers\n- ✅ **Memory System** - Titans-inspired long-term memory with surprise-based filtering\n- ✅ **Tool Calling** - Full tool support with server and passthrough execution modes\n- ✅ **Production Ready** - Battle-tested with 400+ tests, observability, and error resilience\n- ✅ **Node 20-25 Support** - Works with latest Node.js versions including v25\n- ✅ **Semantic Caching** - Cache responses for similar prompts (requires embeddings)\n\n---\n\n## Semantic Cache\n\nLynkr includes an optional semantic response cache that returns cached responses for semantically similar prompts, reducing latency and costs.\n\n**Enable Semantic Cache:**\n```bash\n# Requires an embeddings provider (Ollama recommended)\nollama pull nomic-embed-text\n\n# Add to .env\nSEMANTIC_CACHE_ENABLED=true\nSEMANTIC_CACHE_THRESHOLD=0.95\nOLLAMA_EMBEDDINGS_MODEL=nomic-embed-text\nOLLAMA_EMBEDDINGS_ENDPOINT=http://localhost:11434/api/embeddings\n```\n\n| Setting | Default | Description |\n|---------|---------|-------------|\n| `SEMANTIC_CACHE_ENABLED` | `false` | Enable/disable semantic caching |\n| `SEMANTIC_CACHE_THRESHOLD` | `0.95` | Similarity threshold (0.0-1.0) |\n\n\u003e **Note:** Without a proper embeddings provider, the cache uses hash-based fallback which may cause false matches. Use Ollama with `nomic-embed-text` for best results.\n
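\nTo make the threshold concrete: an incoming prompt is a cache hit when the cosine similarity between its embedding and a cached prompt's embedding reaches `SEMANTIC_CACHE_THRESHOLD`. The standard math, as an illustration (not Lynkr's internal code):\n\n```javascript\n// Illustration of the similarity test behind SEMANTIC_CACHE_THRESHOLD.\n// Not Lynkr's internal code; just cosine similarity over embedding vectors.\nfunction cosineSimilarity(a, b) {\n  let dot = 0, normA = 0, normB = 0;\n  for (let i = 0; i \u003c a.length; i += 1) {\n    dot += a[i] * b[i];\n    normA += a[i] * a[i];\n    normB += b[i] * b[i];\n  }\n  return dot / (Math.sqrt(normA) * Math.sqrt(normB));\n}\n\nconst THRESHOLD = 0.95; // SEMANTIC_CACHE_THRESHOLD\n// Real embeddings come from nomic-embed-text via Ollama; these are toy vectors.\nconst cachedPrompt = [0.12, 0.98, 0.05];\nconst incomingPrompt = [0.1, 0.99, 0.07];\nconst hit = cosineSimilarity(cachedPrompt, incomingPrompt) \u003e= THRESHOLD;\nconsole.log(hit ? 'serve cached response' : 'forward to provider');\n```\n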
\n---\n\n## Architecture\n\n```\n┌─────────────────┐\n│    AI Tools     │\n└────────┬────────┘\n         │ Anthropic/OpenAI Format\n         ↓\n┌─────────────────┐\n│  Lynkr Proxy    │\n│  Port: 8081     │\n│                 │\n│ • Format Conv.  │\n│ • Token Optim.  │\n│ • Provider Route│\n│ • Tool Calling  │\n│ • Caching       │\n└────────┬────────┘\n         │\n         ├──→ Databricks (Claude 4.5)\n         ├──→ AWS Bedrock (100+ models)\n         ├──→ OpenRouter (100+ models)\n         ├──→ Ollama (local, free)\n         ├──→ llama.cpp (local, free)\n         ├──→ Azure OpenAI (GPT-4o, o1)\n         ├──→ OpenAI (GPT-4o, o3)\n         └──→ Azure Anthropic (Claude)\n```\n\n📖 **[Detailed Architecture](documentation/features.md#architecture)**\n
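\nThe `Format Conv.` box is the heart of the proxy: requests arrive in Anthropic or OpenAI shape and leave in whatever the target provider expects. A simplified illustration of one direction (an Anthropic Messages request rewritten as an OpenAI chat request), assuming plain-text content only; the real converter also has to handle tool calls, multi-part content blocks, and streaming deltas:\n\n```javascript\n// Simplified sketch of one conversion direction: Anthropic Messages request\n// body in, OpenAI chat/completions request body out. Illustrative only.\nfunction anthropicToOpenAI(body) {\n  const messages = [];\n  if (body.system) {\n    // Anthropic carries the system prompt as a top-level field;\n    // OpenAI expects it as the first chat message.\n    messages.push({ role: 'system', content: body.system });\n  }\n  for (const m of body.messages) {\n    // Flatten Anthropic content blocks down to plain text.\n    const text = Array.isArray(m.content)\n      ? m.content.filter((b) =\u003e b.type === 'text').map((b) =\u003e b.text).join('\\n')\n      : m.content;\n    messages.push({ role: m.role, content: text });\n  }\n  return { model: body.model, messages, max_tokens: body.max_tokens };\n}\n```\n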
\n---\n\n## Quick Configuration Examples\n\n**100% Local (FREE)**\n```bash\nexport MODEL_PROVIDER=ollama\nexport OLLAMA_MODEL=qwen2.5-coder:latest\nexport OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text\nnpm start\n```\n\u003e 💡 **Tip:** Prevent slow cold starts by keeping Ollama models loaded: `launchctl setenv OLLAMA_KEEP_ALIVE \"24h\"` (macOS) or set `OLLAMA_KEEP_ALIVE=24h` env var. See [troubleshooting](documentation/troubleshooting.md#slow-first-request--cold-start-warning).\n\n**Remote Ollama (GPU Server)**\n```bash\nexport MODEL_PROVIDER=ollama\nexport OLLAMA_ENDPOINT=http://192.168.1.100:11434  # Any IP or hostname\nexport OLLAMA_MODEL=llama3.1:70b\nnpm start\n```\n\u003e 🌐 **Note:** All provider endpoints support remote addresses - not limited to localhost. Use any IP, hostname, or domain.\n\n**MLX OpenAI Server (Apple Silicon)**\n```bash\n# Terminal 1: Start MLX server\nmlx-openai-server launch --model-path mlx-community/Qwen2.5-Coder-7B-Instruct-4bit --model-type lm\n\n# Terminal 2: Start Lynkr\nexport MODEL_PROVIDER=openai\nexport OPENAI_ENDPOINT=http://localhost:8000/v1/chat/completions\nexport OPENAI_API_KEY=not-needed\nnpm start\n```\n\u003e 🍎 **Apple Silicon optimized** - Native MLX performance on M1/M2/M3/M4 Macs. See [MLX setup guide](documentation/providers.md#10-mlx-openai-server-apple-silicon).\n\n**AWS Bedrock (100+ models)**\n```bash\nexport MODEL_PROVIDER=bedrock\nexport AWS_BEDROCK_API_KEY=your-key\nexport AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0\nnpm start\n```\n\n**OpenRouter (simplest cloud)**\n```bash\nexport MODEL_PROVIDER=openrouter\nexport OPENROUTER_API_KEY=sk-or-v1-your-key\nnpm start\n```\n\nYou can set up multiple models this way, including local models.\n\n📖 **[More Examples](documentation/providers.md#quick-start-examples)**\n\n---\n\n## Contributing\n\nWe welcome contributions! Please see:\n- **[Contributing Guide](documentation/contributing.md)** - How to contribute\n- **[Testing Guide](documentation/testing.md)** - Running tests\n\n---\n\n## License\n\nApache 2.0 - See [LICENSE](LICENSE) file for details.\n\n---\n\n## Community \u0026 Support\n\n- ⭐ **Star this repo** if Lynkr helps you!\n- 💬 **[Join Discussions](https://github.com/vishalveerareddy123/Lynkr/discussions)** - Ask questions, share tips\n- 🐛 **[Report Issues](https://github.com/vishalveerareddy123/Lynkr/issues)** - Bug reports welcome\n- 📖 **[Read the Docs](documentation/)** - Comprehensive guides\n\n---\n\n**Made with ❤️ by developers, for developers.**\n","funding_links":["https://buymeacoffee.com/srinivasveera"],"categories":["JavaScript","MCP Servers","Learning"],"sub_categories":["Other MCP Servers","Repositories"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FFast-Editor%2FLynkr","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FFast-Editor%2FLynkr","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FFast-Editor%2FLynkr/lists"}