{"id":48760703,"url":"https://github.com/ForLoopCodes/contextplus","last_synced_at":"2026-04-22T10:00:43.570Z","repository":{"id":340998898,"uuid":"1168456418","full_name":"ForLoopCodes/contextplus","owner":"ForLoopCodes","description":"Semantic Intelligence for Large-Scale Engineering. Context+ is an MCP server designed for developers who demand 99% accuracy. By combining RAG, Tree-sitter AST, Spectral Clustering, and Obsidian-style linking, Context+ turns a massive codebase into a searchable, hierarchical feature graph.","archived":false,"fork":false,"pushed_at":"2026-04-06T18:35:36.000Z","size":388,"stargazers_count":1727,"open_issues_count":2,"forks_count":134,"subscribers_count":9,"default_branch":"main","last_synced_at":"2026-04-06T20:25:30.799Z","etag":null,"topics":["mcp-server"],"latest_commit_sha":null,"homepage":"https://contextplus.vercel.app","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ForLoopCodes.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-27T12:12:20.000Z","updated_at":"2026-04-06T19:22:13.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/ForLoopCodes/contextplus","commit_stats":null,"previous_names":["forloopcodes/contextual","forloopcodes/contextplus"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/ForLoopCodes/contextplus","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ForLoo
pCodes%2Fcontextplus","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ForLoopCodes%2Fcontextplus/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ForLoopCodes%2Fcontextplus/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ForLoopCodes%2Fcontextplus/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ForLoopCodes","download_url":"https://codeload.github.com/ForLoopCodes/contextplus/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ForLoopCodes%2Fcontextplus/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32130776,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-22T08:34:57.708Z","status":"ssl_error","status_checked_at":"2026-04-22T08:34:55.583Z","response_time":58,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["mcp-server"],"created_at":"2026-04-13T06:00:40.905Z","updated_at":"2026-04-22T10:00:43.565Z","avatar_url":"https://github.com/ForLoopCodes.png","language":"TypeScript","funding_links":[],"categories":["\u003ca name=\"TypeScript\"\u003e\u003c/a\u003eTypeScript","MCP Servers \u0026 Protocol"],"sub_categories":[],"readme":"# Context+\n\nSemantic Intelligence for Large-Scale Engineering.\n\nContext+ is an MCP server designed for developers who demand 99% accuracy. 
By combining RAG, Tree-sitter AST, Spectral Clustering, and Obsidian-style linking, Context+ turns a massive codebase into a searchable, hierarchical feature graph.\n\n**While you're here, check out my other project, Airena. Curate a team of AI agents and go head-to-head with other orchestrators. First place on the leaderboard gets a $1600 prize!**\n\nhttps://github.com/user-attachments/assets/a97a451f-c9b4-468d-b036-15b65fc13e79\n\n## Tools\n\n### Discovery\n\n| Tool                         | Description                                                                                                                                                      |\n| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `get_context_tree`           | Structural AST tree of a project with file headers and symbol ranges (line numbers for functions/classes/methods). Dynamic pruning shrinks output automatically. |\n| `get_file_skeleton`          | Function signatures, class methods, and type definitions with line ranges, without reading full bodies. Shows the API surface.                                   |\n| `semantic_code_search`       | Search by meaning, not exact text. Uses embeddings over file headers/symbols and returns matched symbol definition lines.                                        |\n| `semantic_identifier_search` | Identifier-level semantic retrieval for functions/classes/variables with ranked call sites and line numbers.                                                     |\n| `semantic_navigate`          | Browse codebase by meaning using spectral clustering. Groups semantically related files into labeled clusters.                                                   
|\n\n### Analysis\n\n| Tool                  | Description                                                                                                                   |\n| --------------------- | ----------------------------------------------------------------------------------------------------------------------------- |\n| `get_blast_radius`    | Trace every file and line where a symbol is imported or used. Prevents orphaned references.                                   |\n| `run_static_analysis` | Run native linters and compilers to find unused variables, dead code, and type errors. Supports TypeScript, Python, Rust, Go. |\n\n### Code Ops\n\n| Tool              | Description                                                                                                              |\n| ----------------- | ------------------------------------------------------------------------------------------------------------------------ |\n| `propose_commit`  | The only way to write code. Validates against strict rules before saving. Creates a shadow restore point before writing. |\n| `get_feature_hub` | Obsidian-style feature hub navigator. Hubs are `.md` files with `[[wikilinks]]` that map features to code files.         |\n\n### Version Control\n\n| Tool                  | Description                                                                                                |\n| --------------------- | ---------------------------------------------------------------------------------------------------------- |\n| `list_restore_points` | List all shadow restore points created by `propose_commit`. Each captures file state before AI changes.    |\n| `undo_change`         | Restore files to their state before a specific AI change. Uses shadow restore points. Does not affect git. 
|\n\n### Memory \u0026 RAG\n\n| Tool                      | Description                                                                                              |\n| ------------------------- | -------------------------------------------------------------------------------------------------------- |\n| `upsert_memory_node`      | Create or update a memory node (concept, file, symbol, note) with auto-generated embeddings.             |\n| `create_relation`         | Create typed edges between nodes (relates_to, depends_on, implements, references, similar_to, contains). |\n| `search_memory_graph`     | Semantic search with graph traversal — finds direct matches then walks 1st/2nd-degree neighbors.         |\n| `prune_stale_links`       | Remove decayed edges (e^(-λt) below threshold) and orphan nodes with low access counts.                  |\n| `add_interlinked_context` | Bulk-add nodes with auto-similarity linking (cosine ≥ 0.72 creates edges automatically).                 |\n| `retrieve_with_traversal` | Start from a node and walk outward — returns all reachable neighbors scored by decay and depth.          |\n\n\u003e **Complementary server:** [pmll-memory-mcp](https://www.npmjs.com/package/pmll-memory-mcp) (`npx pmll-memory-mcp`) is a separate MCP server by [@drQedwards](https://github.com/drQedwards) that adapts Context+'s long-term memory graph and adds short-term KV context memory, Q-promise deduplication, and a solution engine on top. See [drQedwards/PPM](https://github.com/drQedwards/PPM) for details.\n\n## Setup\n\n### Quick Start (npx / bunx)\n\nNo installation needed. 
Add Context+ to your IDE MCP config.\n\nFor Claude Code, Cursor, and Windsurf, use `mcpServers`:\n\n```json\n{\n  \"mcpServers\": {\n    \"contextplus\": {\n      \"command\": \"bunx\",\n      \"args\": [\"contextplus\"],\n      \"env\": {\n        \"OLLAMA_EMBED_MODEL\": \"nomic-embed-text\",\n        \"OLLAMA_CHAT_MODEL\": \"gemma2:27b\",\n        \"OLLAMA_API_KEY\": \"YOUR_OLLAMA_API_KEY\"\n      }\n    }\n  }\n}\n```\n\nFor VS Code (`.vscode/mcp.json`), use `servers` and `inputs`:\n\n```json\n{\n  \"servers\": {\n    \"contextplus\": {\n      \"type\": \"stdio\",\n      \"command\": \"bunx\",\n      \"args\": [\"contextplus\"],\n      \"env\": {\n        \"OLLAMA_EMBED_MODEL\": \"nomic-embed-text\",\n        \"OLLAMA_CHAT_MODEL\": \"gemma2:27b\",\n        \"OLLAMA_API_KEY\": \"YOUR_OLLAMA_API_KEY\"\n      }\n    }\n  },\n  \"inputs\": []\n}\n```\n\nIf you prefer `npx`, use:\n\n- `\"command\": \"npx\"`\n- `\"args\": [\"-y\", \"contextplus\"]`\n\nOr generate the MCP config file directly in your current directory:\n\n```bash\nnpx -y contextplus init claude\nbunx contextplus init cursor\nnpx -y contextplus init opencode\n```\n\nSupported coding agent names: `claude`, `cursor`, `vscode`, `windsurf`, `opencode`.\n\nConfig file locations:\n\n| IDE         | Config File          |\n| ----------- | -------------------- |\n| Claude Code | `.mcp.json`          |\n| Cursor      | `.cursor/mcp.json`   |\n| VS Code     | `.vscode/mcp.json`   |\n| Windsurf    | `.windsurf/mcp.json` |\n| OpenCode    | `opencode.json`      |\n\n### CLI Subcommands\n\n- `init [target]` - Generate MCP configuration (targets: `claude`, `cursor`, `vscode`, `windsurf`, `opencode`).\n- `skeleton [path]` or `tree [path]` - **(New)** View the structural tree of a project with file headers and symbol definitions directly in your terminal.\n- `[path]` - Start the MCP server (stdio) for the specified path (defaults to current directory).\n\n### From Source\n\n```bash\nnpm install\nnpm run build\n```\n\n## 
Embedding Providers\n\nContext+ supports two embedding backends controlled by `CONTEXTPLUS_EMBED_PROVIDER`:\n\n| Provider | Value | Requires | Best For |\n|----------|-------|----------|----------|\n| **Ollama** (default) | `ollama` | Local Ollama server | Free, offline, private |\n| **OpenAI-compatible** | `openai` | API key | Gemini (free tier), OpenAI, Groq, vLLM |\n\n### Ollama (Default)\n\nNo extra configuration needed. Just run Ollama with an embedding model:\n\n```bash\nollama pull nomic-embed-text\nollama serve\n```\n\n### Google Gemini (Free Tier)\n\nFull Claude Code `.mcp.json` example:\n\n```json\n{\n  \"mcpServers\": {\n    \"contextplus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"contextplus\"],\n      \"env\": {\n        \"CONTEXTPLUS_EMBED_PROVIDER\": \"openai\",\n        \"CONTEXTPLUS_OPENAI_API_KEY\": \"YOUR_GEMINI_API_KEY\",\n        \"CONTEXTPLUS_OPENAI_BASE_URL\": \"https://generativelanguage.googleapis.com/v1beta/openai\",\n        \"CONTEXTPLUS_OPENAI_EMBED_MODEL\": \"text-embedding-004\"\n      }\n    }\n  }\n}\n```\n\nGet a free API key at [Google AI Studio](https://aistudio.google.com/apikey).\n\n### OpenAI\n\n```json\n{\n  \"mcpServers\": {\n    \"contextplus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"contextplus\"],\n      \"env\": {\n        \"CONTEXTPLUS_EMBED_PROVIDER\": \"openai\",\n        \"OPENAI_API_KEY\": \"sk-...\",\n        \"OPENAI_EMBED_MODEL\": \"text-embedding-3-small\"\n      }\n    }\n  }\n}\n```\n\n### Other OpenAI-compatible APIs (Groq, vLLM, LiteLLM)\n\nAny endpoint implementing the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings) works:\n\n```json\n{\n  \"mcpServers\": {\n    \"contextplus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"contextplus\"],\n      \"env\": {\n        \"CONTEXTPLUS_EMBED_PROVIDER\": \"openai\",\n        \"CONTEXTPLUS_OPENAI_API_KEY\": \"YOUR_KEY\",\n        \"CONTEXTPLUS_OPENAI_BASE_URL\": 
\"https://your-proxy.example.com/v1\",\n        \"CONTEXTPLUS_OPENAI_EMBED_MODEL\": \"your-model-name\"\n      }\n    }\n  }\n}\n```\n\n\u003e **Note:** The `semantic_navigate` tool also uses a chat model for cluster labeling. When using the `openai` provider, set `CONTEXTPLUS_OPENAI_CHAT_MODEL` (default: `gpt-4o-mini`).\n\u003e\n\u003e For VS Code, Cursor, or OpenCode, use the same `env` block inside your IDE's MCP config format (see [Config file locations](#setup) table above).\n\n## Architecture\n\nThree layers, plus a runtime cache, built in TypeScript over stdio using the Model Context Protocol SDK:\n\n**Core** (`src/core/`) - Multi-language AST parsing (tree-sitter, 43 extensions), gitignore-aware traversal, Ollama vector embeddings with disk cache, wikilink hub graph, in-memory property graph with decay scoring.\n\n**Tools** (`src/tools/`) - 17 MCP tools exposing structural, semantic, operational, and memory graph capabilities.\n\n**Git** (`src/git/`) - Shadow restore point system for undo without touching git history.\n\n**Runtime Cache** (`.mcp_data/`) - created on server startup; stores reusable file, identifier, and call-site embeddings to avoid repeated GPU/CPU embedding work. 
A realtime tracker refreshes changed files/functions incrementally.\n\n## Config\n\n| Variable                                | Type                      | Default                                | Description                                                   |\n| --------------------------------------- | ------------------------- | -------------------------------------- | ------------------------------------------------------------- |\n| `CONTEXTPLUS_EMBED_PROVIDER`            | string                    | `ollama`                               | Embedding backend: `ollama` or `openai`                      |\n| `OLLAMA_EMBED_MODEL`                    | string                    | `nomic-embed-text`                     | Ollama embedding model                                        |\n| `OLLAMA_API_KEY`                        | string                    | -                                      | Ollama Cloud API key                                          |\n| `OLLAMA_CHAT_MODEL`                     | string                    | `llama3.2`                             | Ollama chat model for cluster labeling                        |\n| `CONTEXTPLUS_OPENAI_API_KEY`            | string                    | -                                      | API key for OpenAI-compatible provider (alias: `OPENAI_API_KEY`) |\n| `CONTEXTPLUS_OPENAI_BASE_URL`           | string                    | `https://api.openai.com/v1`            | OpenAI-compatible endpoint URL (alias: `OPENAI_BASE_URL`)    |\n| `CONTEXTPLUS_OPENAI_EMBED_MODEL`        | string                    | `text-embedding-3-small`               | OpenAI-compatible embedding model (alias: `OPENAI_EMBED_MODEL`) |\n| `CONTEXTPLUS_OPENAI_CHAT_MODEL`         | string                    | `gpt-4o-mini`                          | OpenAI-compatible chat model for labeling (alias: `OPENAI_CHAT_MODEL`) |\n| `CONTEXTPLUS_EMBED_BATCH_SIZE`          | string (parsed as number) | `8`                | Embedding batch size per GPU 
call, clamped to 5-10            |\n| `CONTEXTPLUS_EMBED_CHUNK_CHARS`         | string (parsed as number) | `2000`             | Per-chunk chars before merge, clamped to 256-8000             |\n| `CONTEXTPLUS_MAX_EMBED_FILE_SIZE`       | string (parsed as number) | `51200`            | Skip non-code text files larger than this many bytes          |\n| `CONTEXTPLUS_EMBED_NUM_GPU`             | string (parsed as number) | -                  | Optional Ollama embed runtime `num_gpu` override              |\n| `CONTEXTPLUS_EMBED_MAIN_GPU`            | string (parsed as number) | -                  | Optional Ollama embed runtime `main_gpu` override             |\n| `CONTEXTPLUS_EMBED_NUM_THREAD`          | string (parsed as number) | -                  | Optional Ollama embed runtime `num_thread` override           |\n| `CONTEXTPLUS_EMBED_NUM_BATCH`           | string (parsed as number) | -                  | Optional Ollama embed runtime `num_batch` override            |\n| `CONTEXTPLUS_EMBED_NUM_CTX`             | string (parsed as number) | -                  | Optional Ollama embed runtime `num_ctx` override              |\n| `CONTEXTPLUS_EMBED_LOW_VRAM`            | string (parsed as boolean)| -                  | Optional Ollama embed runtime `low_vram` override             |\n| `CONTEXTPLUS_EMBED_TRACKER`             | string (parsed as boolean)| `true`             | Enable realtime embedding refresh on file changes             |\n| `CONTEXTPLUS_EMBED_TRACKER_MAX_FILES`   | string (parsed as number) | `8`                | Max changed files processed per tracker tick, clamped to 5-10 |\n| `CONTEXTPLUS_EMBED_TRACKER_DEBOUNCE_MS` | string (parsed as number) | `700`              | Debounce window before tracker refresh                        |\n\n## Test\n\n```bash\nnpm test\nnpm run test:demo\nnpm run 
test:all\n```\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FForLoopCodes%2Fcontextplus","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FForLoopCodes%2Fcontextplus","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FForLoopCodes%2Fcontextplus/lists"}