{"id":28932293,"url":"https://github.com/juspay/neurolink","last_synced_at":"2026-04-25T21:05:47.431Z","repository":{"id":296644514,"uuid":"993805781","full_name":"juspay/neurolink","owner":"juspay","description":"Universal AI Development Platform with MCP server integration, multi-provider support, and professional CLI. Build, test, and deploy AI applications with multiple ai providers.","archived":false,"fork":false,"pushed_at":"2026-03-30T14:30:24.000Z","size":275787,"stargazers_count":123,"open_issues_count":276,"forks_count":100,"subscribers_count":2,"default_branch":"release","last_synced_at":"2026-03-30T15:07:10.363Z","etag":null,"topics":["agents","ai","ai-development","ai-platform","automation","developer-tools","llm","local-first","mcp","model-context-protocol","neurolink","open-claw","skills","universal-ai"],"latest_commit_sha":null,"homepage":"https://docs.neurolink.ink/","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/juspay.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-05-31T15:05:42.000Z","updated_at":"2026-03-30T06:55:27.000Z","dependencies_parsed_at":"2025-07-06T18:31:59.585Z","dependency_job_id":"4602a159-b502-4265-a99f-ab8107f16fdb","html_url":"https://github.com/juspay/neurolink","commit_stats":null,"previous_names":["juspay/zephyr-mind","juspay/neurolink"],"tags_count":258,"template":false,"template_full_name":null,"purl":"pkg:github/juspay/neurolink","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/juspay%2Fneurolink","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/juspay%2Fneurolink/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/juspay%2Fneurolink/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/juspay%2Fneurolink/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/juspay","download_url":"https://codeload.github.com/juspay/neurolink/tar.gz/refs/heads/release","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/juspay%2Fneurolink/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31290598,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","ai-development","ai-platform","automation","developer-tools","llm","local-first","mcp","model-context-protocol","neurolink","open-claw","skills","universal-ai"],"created_at":"2025-06-22T16:41:09.843Z","updated_at":"2026-04-25T21:05:47.421Z","avatar_url":"https://github.com/juspay.png","language":"TypeScript","readme":"# NeuroLink\n\n**The pipe layer for the AI nervous system.**\n\nAI intelligence flows as streams — tokens, tool calls, memory, voice, documents.\nNeuroLink is the vascular layer that carries these streams from where they are\ngenerated (LLM providers: the neurons) to where they are needed (connectors: the organs).\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\nconst pipe = new NeuroLink();\n\n// Everything is a stream\nconst result = await pipe.stream({ input: { text: \"Hello\" } });\nfor await (const chunk of result.stream) {\n  if (\"content\" in chunk) {\n    process.stdout.write(chunk.content);\n  }\n}\n```\n\n**[→ Docs](https://docs.neurolink.ink) · [→ Quick Start](https://docs.neurolink.ink/docs/getting-started/quick-start) · [→ npm](https://www.npmjs.com/package/@juspay/neurolink)**\n\n---\n\n## 🧠 What is NeuroLink?\n\n**NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.**\n\nExtracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.\n\n**Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.\n\n**Where we're headed:** We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. 
**[Read our vision →](docs/about/vision.md)**\n\n**[Get Started in \u003c5 Minutes →](docs/getting-started/quick-start.md)**\n\n---\n\n## What's New (Q1 2026)\n\n| Feature                             | Version | Description                                                                                                                                                                                                          | Guide                                                                 |\n| ----------------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |\n| **Gemini 3 Multi-turn Tool Fix**    | v9.49.0 | Fixed multi-step agentic tool calling on Vertex AI Gemini 3 models. Correct `thoughtSignature` replay, `stepIndex` parallel-call grouping, `executionId` session isolation, 5-min timeout, silent-timeout surfacing. | [Vertex AI Guide](docs/getting-started/providers/google-vertex.md)    |\n| **AutoResearch**                    | v9.17.0 | Autonomous AI experiment engine: proposes code changes, runs experiments, evaluates metrics, keeps improvements — unattended for hours.                                                                              | [AutoResearch Guide](docs/features/autoresearch.md)                   |\n| **MCP Enhancements**                | v9.16.0 | Advanced MCP features: tool routing, result caching, request batching, annotations, elicitation, custom server base, multi-server management                                                                         | [MCP Enhancements Guide](docs/features/mcp-enhancements.md)           |\n| **Memory**                          | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends.                                                                                           | [Memory Guide](docs/features/memory.md)                               |\n| **Context Window Management**       | v9.2.0  | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation                                                                                                             | [Context Compaction Guide](docs/features/context-compaction.md)       |\n| **Tool Execution Control**          | v9.3.0  | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls.                                                                                 | [API Reference](docs/api/type-aliases/GenerateOptions.md#preparestep) |\n| **File Processor System**           | v9.1.0  | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection                                                                                                                           | [File Processors Guide](docs/features/file-processors.md)             |\n| **RAG with generate()/stream()**    | v9.2.0  | Pass `rag: { files }` to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking.                                                        
| [RAG Guide](docs/features/rag.md)                                     |\n| **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts.                                                                                                          | [Observability Guide](docs/features/observability.md)                 |\n| **Server Adapters**                 | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes.                                                                               | [Server Adapters Guide](docs/guides/server-adapters/index.md)         |\n| **Title Generation Events**         | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`.                                                                           | [Conversation Memory Guide](docs/conversation-memory.md)              |\n| **Video Generation with Veo**       | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options                                                                                                                   | [Video Generation Guide](docs/features/video-generation.md)           |\n| **Image Generation with Gemini**    | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI.                                                                       | [Image Generation Guide](docs/image-generation-streaming.md)          |\n| **HTTP/Streamable HTTP Transport**  | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting.                                                                        | [HTTP Transport Guide](docs/mcp-http-transport.md)                    |\n\n- **AutoResearch** – Autonomous AI experiment engine inspired by Karpathy's autoresearch. Phase-gated tool access, git-backed safety, deterministic metric evaluation, and TaskManager integration for continuous unattended research. 12 research tools, 10 typed events, 9 CLI subcommands. → [AutoResearch Guide](docs/features/autoresearch.md)\n- **Memory** – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each `generate()`/`stream()` call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → [Memory Guide](docs/features/memory.md)\n- **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)\n- **Claude Proxy Telemetry** – Bootstrap a local OpenObserve + OTEL collector stack with `neurolink proxy telemetry setup`, import the maintained NeuroLink Proxy Observability dashboard, and inspect proxy logs, traces, metrics, cache reuse, and routing behavior. → [Claude Proxy Guide](docs/features/claude-proxy.md) | [Proxy Observability Guide](docs/features/claude-proxy-observability.md)\n- **Server Adapters** – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). 
Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → [Server Adapters Guide](docs/guides/server-adapters/index.md)\n- **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)\n- **Custom Title Prompts** – Customize conversation title generation with `NEUROLINK_TITLE_PROMPT` environment variable. Use `${userMessage}` placeholder for dynamic prompts. → [Conversation Memory Guide](docs/conversation-memory.md#customizing-the-title-prompt)\n- **Video Generation** – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions, portrait/landscape aspect ratios. → [Video Generation Guide](docs/features/video-generation.md)\n- **PPT Generation** – Create professional PowerPoint presentations from text prompts with 35 slide types (title, content, charts, timelines, dashboards, composite layouts), 5 themes, and optional AI-generated images. Works with Vertex AI, OpenAI, Anthropic, Google AI, Azure, and Bedrock. → [PPT Generation Guide](docs/features/ppt-generation.md)\n- **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/image-generation-streaming.md)\n- **RAG with generate()/stream()** – Just pass `rag: { files: [\"./docs/guide.md\"] }` to `generate()` or `stream()`. NeuroLink auto-chunks, embeds, and creates a search tool the AI can invoke. 10 chunking strategies, hybrid search, 5 reranker types. → [RAG Guide](docs/features/rag.md)\n- **HTTP/Streamable HTTP Transport for MCP** – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/mcp-http-transport.md)\n- 🧠 **Gemini 3 Native Multi-turn Tool Calling** — Fixed multi-step agentic tool calling for Gemini 3 models on Vertex AI. The native `@google/genai` path now correctly replays `thoughtSignature` as a sibling field on each `functionCall` part, groups parallel tool calls by `stepIndex`, enforces a 5-minute default timeout on the generate path, and surfaces silent timeouts as proper `TimeoutError` instead of empty responses. Multi-execution session overlap (where `continueOrchestratorWorkflow` restarts the loop on the same `sessionId`) is addressed by an `executionId` per invocation as a composite grouping key — this prevents tool calls from two different executions colliding into the same Gemini model turn and causing the model to return 0 function calls.\n- 🧠 **Gemini 3 Preview Support** - Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking capabilities\n- 🎯 **Tool Execution Control** – Use `prepareStep` to enforce specific tool calls, change the LLM models per step in multi-step agentic executions. Prevents LLMs from skipping required tools. Use `toolChoice` for static control, or `prepareStep` for dynamic per-step logic. → [GenerateOptions Reference](docs/api/type-aliases/GenerateOptions.md#preparestep)\n- **Structured Output with Zod Schemas** – Type-safe JSON generation with automatic validation using `schema` + `output.format: \"json\"` in `generate()`. 
→ [Structured Output Guide](docs/features/structured-output.md)\n- **CSV File Support** – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)\n- **PDF File Support** – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, AI Studio. → [PDF Guide](docs/features/pdf-support.md)\n- **50+ File Types** – Process Excel, Word, RTF, JSON, YAML, XML, HTML, SVG, Markdown, and 50+ code languages with intelligent content extraction. → [File Processors Guide](docs/features/file-processors.md)\n- **LiteLLM Integration** – Access 100+ AI models from all major providers through unified interface. → [Setup Guide](docs/litellm-integration.md)\n- **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/sagemaker-integration.md)\n- **OpenRouter Integration** – Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → [Setup Guide](docs/getting-started/providers/openrouter.md)\n- **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)\n- **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)\n- **Context summarization** – Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/context-summarization.md)\n- **MCP Enhancements** – 14 production-grade modules: tool routing (6 strategies), result caching (LRU/FIFO/LFU), request batching, tool annotations with auto-inference, middleware chain, elicitation protocol, multi-server management, and more. → [MCP Enhancements Guide](docs/features/mcp-enhancements.md)\n- **Redis conversation export** – Export full session history as JSON for analytics and debugging. → [History Guide](docs/features/conversation-history.md)\n\n```typescript\n// Image Generation with Gemini (v8.31.0)\nconst image = await neurolink.generate({\n  input: { text: \"A futuristic cityscape\" },\n  provider: \"google-ai\",\n  model: \"imagen-3.0-generate-002\",\n});\nconsole.log(image.imageOutput?.base64); // Base64-encoded image\n\n// AutoResearch — autonomous experiment loop (v9.17.0)\nimport { resolveConfig, ResearchWorker } from \"@juspay/neurolink/autoresearch\";\n\nconst config = resolveConfig({\n  repoPath: \"/path/to/repo\",\n  mutablePaths: [\"train.py\"],\n  runCommand: \"python3 train.py\",\n  metric: {\n    name: \"val_bpb\",\n    direction: \"lower\",\n    pattern: \"^val_bpb:\\\\s+([\\\\d.]+)\",\n  },\n});\nconst worker = new ResearchWorker(config);\nawait worker.initialize(\"experiment-1\");\nconst result = await worker.runExperimentCycle(\"Try lower learning rate\");\n\n// HTTP Transport for Remote MCP (v8.29.0)\nawait neurolink.addExternalMCPServer(\"remote-tools\", {\n  transport: \"http\",\n  url: \"https://mcp.example.com/v1\",\n  headers: { Authorization: \"Bearer token\" },\n  retries: 3,\n  timeout: 15000,\n});\n```\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003ePrevious Updates (Q4 2025)\u003c/strong\u003e\u003c/summary\u003e\n\n- **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. 
→ [Guide](docs/image-generation-streaming.md)\n- **Gemini 3 Preview Support** - Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking\n- **Structured Output with Zod Schemas** – Type-safe JSON generation with automatic validation. → [Guide](docs/features/structured-output.md)\n- **CSV \u0026 PDF File Support** – Attach CSV/PDF files to prompts with auto-detection. → [CSV](docs/features/multimodal-chat.md#csv-file-support) | [PDF](docs/features/pdf-support.md)\n- **LiteLLM \u0026 SageMaker** – Access 100+ models via LiteLLM, deploy custom models on SageMaker. → [LiteLLM](docs/litellm-integration.md) | [SageMaker](docs/sagemaker-integration.md)\n- **OpenRouter Integration** – Access 300+ models through a single unified API. → [Guide](docs/getting-started/providers/openrouter.md)\n- **HITL \u0026 Guardrails** – Human-in-the-loop approval workflows and content filtering middleware. → [HITL](docs/features/hitl.md) | [Guardrails](docs/features/guardrails.md)\n- **Redis \u0026 Context Management** – Session export, conversation history, and automatic summarization. → [History](docs/features/conversation-history.md)\n\n\u003c/details\u003e\n\n## Enterprise Security: Human-in-the-Loop (HITL)\n\nNeuroLink includes a **production-ready HITL system** for regulated industries and high-stakes AI operations:\n\n| Capability                  | Description                                               | Use Case                                   |\n| --------------------------- | --------------------------------------------------------- | ------------------------------------------ |\n| **Tool Approval Workflows** | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |\n| **Output Validation**       | Route AI outputs through human review pipelines           | Medical diagnosis, legal documents         |\n| **Confidence Thresholds**   | Automatically trigger human review below confidence level | Critical business decisions                |\n| **Complete Audit Trail**    | Full audit logging for compliance (HIPAA, SOC2, GDPR)     | Regulated industries                       |\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\nconst neurolink = new NeuroLink({\n  hitl: {\n    enabled: true,\n    requireApproval: [\"writeFile\", \"executeCode\", \"sendEmail\"],\n    confidenceThreshold: 0.85,\n    reviewCallback: async (action, context) =\u003e {\n      // Custom review logic - integrate with your approval system\n      return await yourApprovalSystem.requestReview(action);\n    },\n  },\n});\n\n// AI pauses for human approval before executing sensitive tools\nconst result = await neurolink.generate({\n  input: { text: \"Send quarterly report to stakeholders\" },\n});\n```\n\n**[Enterprise HITL Guide](docs/features/enterprise-hitl.md)** | **[Quick Start](docs/features/hitl.md)**\n\n## 📚 Quick Start Guide\n\nThis guide will have you generating AI responses in under 5 minutes using either the SDK or CLI.\n\n### Installation\n\nChoose your preferred package manager:\n\n```bash\n# npm\nnpm install @juspay/neurolink\n\n# pnpm (recommended)\npnpm add @juspay/neurolink\n\n# yarn\nyarn add @juspay/neurolink\n\n# CLI only (no installation needed)\nnpx @juspay/neurolink --help\n```\n\n### Configuration\n\nNeuroLink works with 13+ AI providers. 
You'll need at least one API key to get started:\n\n**Option 1: Interactive Setup (Recommended)**\n\n```bash\n# Run the setup wizard to configure providers\npnpm dlx @juspay/neurolink setup\n```\n\nThe wizard will guide you through:\n\n- Selecting your preferred AI providers\n- Validating API keys\n- Setting up configuration files\n\n**Option 2: Manual Configuration**\n\nCreate a `.env` file in your project root:\n\n```bash\n# Choose one or more providers\nOPENAI_API_KEY=sk-...\nANTHROPIC_API_KEY=sk-ant-...\nGOOGLE_AI_API_KEY=...\n```\n\n**Free Tier Options:**\n\n- **Google AI Studio**: Get a free API key at [aistudio.google.com](https://aistudio.google.com)\n- **Mistral AI**: Free tier available at [console.mistral.ai](https://console.mistral.ai)\n- **Ollama**: 100% free local models (requires [Ollama installation](https://ollama.ai))\n\n### Your First API Call (SDK)\n\n**Basic Text Generation:**\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\n// Initialize (auto-selects best available provider from your .env)\nconst neurolink = new NeuroLink();\n\n// Generate a response\nconst result = await neurolink.generate({\n  input: { text: \"Explain quantum computing in simple terms\" },\n});\n\nconsole.log(result.content);\n```\n\n**Streaming Responses:**\n\n```typescript\n// Stream tokens in real-time\nconst stream = await neurolink.stream({\n  input: { text: \"Write a haiku about code\" },\n});\nfor await (const chunk of stream.stream) {\n  if (\"content\" in chunk) process.stdout.write(chunk.content);\n}\n```\n\n**Multimodal Input (Images + Text):**\n\n```typescript\nconst result = await neurolink.generate({\n  input: {\n    text: \"What's in this image?\",\n    images: [\"./photo.jpg\"],\n  },\n});\n```\n\n**Using Tools:**\n\n```typescript\n// Built-in tools are automatically available\nconst result = await neurolink.generate({\n  input: {\n    text: \"What time is it and what files are in the current directory?\",\n  },\n  // AI can call getCurrentTime and listDirectory tools\n});\n```\n\n### Your First API Call (CLI)\n\n**Basic Generation:**\n\n```bash\n# Simple text generation\nnpx @juspay/neurolink generate \"Explain TypeScript generics\"\n\n# Specify provider and model\nnpx @juspay/neurolink generate \"Hello!\" --provider openai --model gpt-4o\n\n# Stream responses\nnpx @juspay/neurolink stream \"Write a story about AI\" --provider anthropic\n```\n\n**Multimodal Input:**\n\n```bash\n# Analyze images\nnpx @juspay/neurolink generate \"Describe this image\" --image photo.jpg\n\n# Process PDFs\nnpx @juspay/neurolink generate \"Summarize this document\" --pdf report.pdf\n\n# Combine multiple file types\nnpx @juspay/neurolink generate \"Analyze this data\" --file data.xlsx --file config.json\n```\n\n**Interactive Loop Mode:**\n\n```bash\n# Start an interactive session with persistent context\nnpx @juspay/neurolink loop\n\n# Inside loop mode:\n\u003e set provider anthropic\n\u003e set model claude-opus-4\n\u003e generate \"Hello, Claude!\"\n\u003e history  # View conversation history\n\u003e exit\n```\n\n### Common Use Cases\n\n**RAG (Retrieval-Augmented Generation):**\n\n```typescript\n// Automatically chunk, embed, and search documents\nconst result = await neurolink.generate({\n  input: { text: \"What are the key features mentioned in the documentation?\" },\n  rag: {\n    files: [\"./docs/guide.md\", \"./docs/api.md\"],\n    chunkSize: 512,\n    topK: 5,\n  },\n});\n```\n\n**Structured Output with Zod:**\n\n```typescript\nimport { z } from \"zod\";\n\nconst schema = 
z.object({\n  name: z.string(),\n  age: z.number(),\n  email: z.string().email(),\n});\n\nconst result = await neurolink.generate({\n  input: {\n    text: \"Extract user info: John Doe, 30 years old, john@example.com\",\n  },\n  schema,\n  output: { format: \"json\" },\n});\n\n// Parse the structured JSON from result.content\nconst parsed = schema.parse(JSON.parse(result.content));\nconsole.log(parsed); // { name: \"John Doe\", age: 30, email: \"john@example.com\" }\n```\n\n**External MCP Servers (GitHub, Slack, etc.):**\n\n```typescript\n// Connect to GitHub MCP server\nawait neurolink.addExternalMCPServer(\"github\", {\n  command: \"npx\",\n  args: [\"-y\", \"@modelcontextprotocol/server-github\"],\n  transport: \"stdio\",\n  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },\n});\n\n// AI can now interact with GitHub\nconst result = await neurolink.generate({\n  input: { text: 'Create an issue titled \"Bug: login fails\"' },\n});\n```\n\n### Next Steps\n\n- **[Complete Documentation](https://docs.neurolink.ink)** - Comprehensive guides and API reference\n- **[Provider Setup Guide](docs/getting-started/provider-setup.md)** - Configure all 13 providers\n- **[SDK API Reference](docs/sdk/api-reference.md)** - Full TypeScript API documentation\n- **[CLI Command Reference](docs/cli/commands.md)** - Complete CLI documentation\n- **[Example Projects](docs/examples/index.md)** - Real-world integration examples\n- **[Advanced Features](docs/advanced/index.md)** - Middleware, observability, workflows\n\n### Troubleshooting\n\n**Issue: \"Provider not configured\"**\n\n- Run `npx @juspay/neurolink setup` or add provider API key to `.env`\n\n**Issue: Rate limit errors**\n\n- Configure multiple providers for redundancy — NeuroLink auto-selects the best available\n- Use `provider: \"litellm\"` with LiteLLM to proxy across many providers\n\n**Issue: Large context overflows**\n\n- Enable conversation memory with compaction: `new NeuroLink({ conversationMemory: { enabled: true } })`\n- Use `rag` option to search documents instead of sending full content\n\nNeed help? Check our [Troubleshooting Guide](docs/reference/troubleshooting.md) or [open an issue](https://github.com/juspay/neurolink/issues).\n\n---\n\n## 🌟 Complete Feature Set\n\nNeuroLink is a comprehensive AI development platform. 
Every feature below is production-ready and fully documented.\n\n### 🤖 AI Provider Integration\n\n**13 providers unified under one API** - Switch providers with a single parameter change.\n\n| Provider              | Models                                             | Free Tier       | Tool Support | Status        | Documentation                                                                                                                 |\n| --------------------- | -------------------------------------------------- | --------------- | ------------ | ------------- | ----------------------------------------------------------------------------------------------------------------------------- |\n| **OpenAI**            | GPT-4o, GPT-4o-mini, o1                            | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai)                                                                  |\n| **Anthropic**         | Claude 4.5 Opus/Sonnet/Haiku, Claude 4 Opus/Sonnet | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#anthropic) \\| [Subscription Guide](docs/features/claude-subscription.md) |\n| **Google AI Studio**  | Gemini 3 Flash/Pro, Gemini 2.5 Flash/Pro           | ✅ Free Tier    | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#google-ai)                                                               |\n| **AWS Bedrock**       | Claude, Titan, Llama, Nova                         | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#bedrock)                                                                 |\n| **Google Vertex**     | Gemini 3/2.5 (gemini-3-\\*-preview)                 | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#vertex)                                                                  |\n| **Azure OpenAI**      | GPT-4, GPT-4o, o1                                  | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#azure)                                                                   |\n| **LiteLLM**           | 100+ models unified                                | Varies          | ✅ Full      | ✅ Production | [Setup Guide](docs/litellm-integration.md)                                                                                    |\n| **AWS SageMaker**     | Custom deployed models                             | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/sagemaker-integration.md)                                                                                  |\n| **Mistral AI**        | Mistral Large, Small                               | ✅ Free Tier    | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#mistral)                                                                 |\n| **Hugging Face**      | 100,000+ models                                    | ✅ Free         | ⚠️ Partial   | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#huggingface)                                                             |\n| **Ollama**            | Local models (Llama, Mistral)                      | ✅ Free (Local) | ⚠️ Partial   | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#ollama)                                                                  |\n| **OpenAI Compatible** | Any OpenAI-compatible 
endpoint                     | Varies          | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai-compatible)                                                       |\n| **OpenRouter**        | 200+ Models via OpenRouter                         | Varies          | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/providers/openrouter.md)                                                                   |\n\n**[📖 Provider Comparison Guide](docs/reference/provider-comparison.md)** - Detailed feature matrix and selection criteria\n**[🔬 Provider Feature Compatibility](docs/reference/provider-feature-compatibility.md)** - Test-based compatibility reference for all 19 features across 13 providers\n\n---\n\n### 🔧 Built-in Tools \u0026 MCP Integration\n\n**6 Core Tools** (work across all providers, zero configuration):\n\n| Tool                 | Purpose                  | Auto-Available          | Documentation                              |\n| -------------------- | ------------------------ | ----------------------- | ------------------------------------------ |\n| `getCurrentTime`     | Real-time clock access   | ✅                      | [Tool Reference](docs/sdk/custom-tools.md) |\n| `readFile`           | File system reading      | ✅                      | [Tool Reference](docs/sdk/custom-tools.md) |\n| `writeFile`          | File system writing      | ✅                      | [Tool Reference](docs/sdk/custom-tools.md) |\n| `listDirectory`      | Directory listing        | ✅                      | [Tool Reference](docs/sdk/custom-tools.md) |\n| `calculateMath`      | Mathematical operations  | ✅                      | [Tool Reference](docs/sdk/custom-tools.md) |\n| `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | [Tool Reference](docs/sdk/custom-tools.md) |\n\n**58+ External MCP Servers** supported (GitHub, PostgreSQL, Google Drive, Slack, and more):\n\n```typescript\n// stdio transport - local MCP servers via command execution\nawait neurolink.addExternalMCPServer(\"github\", {\n  command: \"npx\",\n  args: [\"-y\", \"@modelcontextprotocol/server-github\"],\n  transport: \"stdio\",\n  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },\n});\n\n// HTTP transport - remote MCP servers via URL\nawait neurolink.addExternalMCPServer(\"github-copilot\", {\n  transport: \"http\",\n  url: \"https://api.githubcopilot.com/mcp\",\n  headers: { Authorization: \"Bearer YOUR_COPILOT_TOKEN\" },\n  timeout: 15000,\n  retries: 5,\n});\n\n// Tools automatically available to AI\nconst result = await neurolink.generate({\n  input: { text: 'Create a GitHub issue titled \"Bug in auth flow\"' },\n});\n```\n\n**MCP Transport Options:**\n\n| Transport   | Use Case       | Key Features                                    |\n| ----------- | -------------- | ----------------------------------------------- |\n| `stdio`     | Local servers  | Command execution, environment variables        |\n| `http`      | Remote servers | URL-based, auth headers, retries, rate limiting |\n| `sse`       | Event streams  | Server-Sent Events, real-time updates           |\n| `websocket` | Bi-directional | Full-duplex communication                       |\n\n**[📖 MCP Integration Guide](docs/advanced/mcp-integration.md)** - Setup external servers\n**[📖 HTTP Transport Guide](docs/mcp-http-transport.md)** - Remote MCP server configuration\n\n---\n\n### 🔌 MCP Enhancements\n\n**Production-grade MCP capabilities** for managing tool calls at scale across multi-server 
environments:\n\n| Module                        | Purpose                                                    |\n| ----------------------------- | ---------------------------------------------------------- |\n| **Tool Router**               | Intelligent routing across servers with 6 strategies       |\n| **Tool Cache**                | Result caching with LRU, FIFO, and LFU eviction            |\n| **Request Batcher**           | Automatic batching of tool calls for throughput            |\n| **Tool Annotations**          | Safety metadata and behavior hints for MCP tools           |\n| **Tool Converter**            | Bidirectional conversion between NeuroLink and MCP formats |\n| **Elicitation Protocol**      | Interactive user input during tool execution (HITL)        |\n| **Multi-Server Manager**      | Load balancing and failover across server groups           |\n| **MCP Server Base**           | Abstract base class for building custom MCP servers        |\n| **Enhanced Tool Discovery**   | Advanced search and filtering across servers               |\n| **Agent \u0026 Workflow Exposure** | Expose agents and workflows as MCP tools                   |\n| **Server Capabilities**       | Resource and prompt management per MCP spec                |\n| **Registry Client**           | Discover and connect to MCP servers from registries        |\n| **Tool Integration**          | End-to-end tool lifecycle with middleware chain            |\n| **Elicitation Manager**       | Manages elicitation flows with validation and timeouts     |\n\n```typescript\nimport { ToolRouter, ToolCache, RequestBatcher } from \"@juspay/neurolink\";\n\n// Route tool calls across multiple MCP servers\nconst router = new ToolRouter({\n  strategy: \"capability-based\",\n  servers: [\n    { name: \"github\", url: \"https://mcp-github.example.com\" },\n    { name: \"db\", url: \"https://mcp-postgres.example.com\" },\n  ],\n});\n\n// Cache repeated tool results (LRU, FIFO, or LFU)\nconst cache = new ToolCache({ strategy: \"lru\", maxSize: 500, ttl: 60_000 });\n\n// Batch concurrent tool calls for throughput\nconst batcher = new RequestBatcher({ maxBatchSize: 10, maxWaitMs: 50 });\n```\n\n**[📖 MCP Enhancements Guide](docs/features/mcp-enhancements.md)** - Full reference for all 14 modules\n\n---\n\n### 💻 Developer Experience Features\n\n**SDK-First Design** with TypeScript, IntelliSense, and type safety:\n\n| Feature                     | Description                                                                       | Documentation                                             |\n| --------------------------- | --------------------------------------------------------------------------------- | --------------------------------------------------------- |\n| **Auto Provider Selection** | Intelligent provider fallback                                                     | [SDK Guide](docs/sdk/index.md#auto-selection)             |\n| **Streaming Responses**     | Real-time token streaming                                                         | [Streaming Guide](docs/advanced/streaming.md)             |\n| **Conversation Memory**     | Automatic context management with embedded per-user memory                        | [Memory Guide](docs/sdk/index.md#memory)                  |\n| **Full Type Safety**        | Complete TypeScript types                                                         | [Type Reference](docs/sdk/api-reference.md)               |\n| **Error Handling**          | Graceful provider fallback                 
                                       | [Error Guide](docs/reference/troubleshooting.md)          |\n| **Analytics \u0026 Evaluation**  | Usage tracking, quality scores                                                    | [Analytics Guide](docs/advanced/analytics.md)             |\n| **Middleware System**       | Request/response hooks                                                            | [Middleware Guide](docs/custom-middleware-guide.md)       |\n| **Framework Integration**   | Next.js, SvelteKit, Express                                                       | [Framework Guides](docs/sdk/framework-integration.md)     |\n| **Extended Thinking**       | Native thinking/reasoning mode for Gemini 3 and Claude models                     | [Thinking Guide](docs/features/thinking-configuration.md) |\n| **RAG Document Processing** | `rag: { files }` on generate/stream with 10 chunking strategies and hybrid search | [RAG Guide](docs/features/rag.md)                         |\n\n---\n\n### 📁 Multimodal \u0026 File Processing\n\n**17+ file categories supported** (50+ total file types including code languages) with intelligent content extraction and provider-agnostic processing:\n\n| Category      | Supported Types                                            | Processing                          |\n| ------------- | ---------------------------------------------------------- | ----------------------------------- |\n| **Documents** | Excel (`.xlsx`, `.xls`), Word (`.docx`), RTF, OpenDocument | Sheet extraction, text extraction   |\n| **Data**      | JSON, YAML, XML                                            | Validation, syntax highlighting     |\n| **Markup**    | HTML, SVG, Markdown, Text                                  | OWASP-compliant sanitization        |\n| **Code**      | 50+ languages (TypeScript, Python, Java, Go, etc.)         
| Language detection, syntax metadata |\n| **Config**    | `.env`, `.ini`, `.toml`, `.cfg`                            | Secure parsing                      |\n| **Media**     | Images (PNG, JPEG, WebP, GIF), PDFs, CSV                   | Provider-specific formatting        |\n\n```typescript\n// Process any supported file type\nconst result = await neurolink.generate({\n  input: {\n    text: \"Analyze this data and code\",\n    files: [\n      \"./data.xlsx\", // Excel spreadsheet\n      \"./config.yaml\", // YAML configuration\n      \"./diagram.svg\", // SVG (injected as sanitized text)\n      \"./main.py\", // Python source code\n    ],\n  },\n});\n\n// CLI: Use --file for any supported type\n// neurolink generate \"Analyze this\" --file ./report.xlsx --file ./config.json\n```\n\n**Key Features:**\n\n- **ProcessorRegistry** - Priority-based processor selection with fallback\n- **OWASP Security** - HTML/SVG sanitization prevents XSS attacks\n- **Auto-detection** - FileDetector identifies file types by extension and content\n- **Provider-agnostic** - All processors work across all 13 AI providers\n\n**[📖 File Processors Guide](docs/features/file-processors.md)** - Complete reference for all file types\n\n---\n\n### 🏢 Enterprise \u0026 Production Features\n\n**Production-ready capabilities for regulated industries:**\n\n| Feature                     | Description                                 | Use Case                  | Documentation                                               |\n| --------------------------- | ------------------------------------------- | ------------------------- | ----------------------------------------------------------- |\n| **Enterprise Proxy**        | Corporate proxy support                     | Behind firewalls          | [Proxy Setup](docs/enterprise-proxy-setup.md)               |\n| **Redis Memory**            | Distributed conversation state              | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |\n| **Memory**                  | Per-user condensed memory (S3/Redis/SQLite) | Long-term user context    | [Memory Guide](docs/features/memory.md)                     |\n| **Cost Optimization**       | Automatic cheapest model selection          | Budget control            | [Cost Guide](docs/advanced/index.md)                        |\n| **Multi-Provider Failover** | Automatic provider switching                | High availability         | [Failover Guide](docs/advanced/index.md)                    |\n| **Telemetry \u0026 Monitoring**  | OpenTelemetry integration                   | Observability             | [Telemetry Guide](docs/telemetry-guide.md)                  |\n| **Security Hardening**      | Credential management, auditing             | Compliance                | [Security Guide](docs/advanced/enterprise.md)               |\n| **Custom Model Hosting**    | SageMaker integration                       | Private models            | [SageMaker Guide](docs/sagemaker-integration.md)            |\n| **Load Balancing**          | LiteLLM proxy integration                   | Scale \u0026 routing           | [Load Balancing](docs/litellm-integration.md)               |\n\n**Security \u0026 Compliance:**\n\n- ✅ SOC2 Type II compliant deployments\n- ✅ ISO 27001 certified infrastructure compatible\n- ✅ GDPR-compliant data handling (EU providers available)\n- ✅ HIPAA compatible (with proper configuration)\n- ✅ Hardened OS verified (SELinux, AppArmor)\n- ✅ Zero credential logging\n- ✅ Encrypted configuration 
storage\n- ✅ Automatic context window management with 4-stage compaction pipeline and 80% budget gate\n\n**[📖 Enterprise Deployment Guide](docs/advanced/enterprise.md)** - Complete production checklist\n\n---\n\n## Enterprise Persistence: Redis Memory\n\nProduction-ready distributed conversation state for multi-instance deployments:\n\n### Capabilities\n\n| Feature                | Description                                  | Benefit                     |\n| ---------------------- | -------------------------------------------- | --------------------------- |\n| **Distributed Memory** | Share conversation context across instances  | Horizontal scaling          |\n| **Session Export**     | Export full history as JSON                  | Analytics, debugging, audit |\n| **Auto-Detection**     | Automatic Redis discovery from environment   | Zero-config in containers   |\n| **Graceful Failover**  | Falls back to in-memory if Redis unavailable | High availability           |\n| **TTL Management**     | Configurable session expiration              | Memory management           |\n\n### Quick Setup\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\n// Auto-detect Redis from REDIS_URL environment variable\nconst neurolink = new NeuroLink({\n  conversationMemory: {\n    enabled: true,\n    enableSummarization: true,\n  },\n});\n\n// Or explicit Redis configuration\nconst neurolinkExplicit = new NeuroLink({\n  conversationMemory: {\n    enabled: true,\n    redisConfig: {\n      host: \"redis.example.com\",\n      port: 6379,\n      password: process.env.REDIS_PASSWORD,\n      ttl: 86400, // 24-hour session expiration (seconds)\n    },\n  },\n});\n\n// Retrieve conversation history for analytics\nconst history = await neurolink.getConversationHistory(\"session-id\");\nawait saveToDataWarehouse(history);\n```\n\n### Docker Quick Start\n\n```bash\n# Start Redis\ndocker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine\n\n# Configure NeuroLink\nexport REDIS_URL=redis://localhost:6379\n\n# Start your application\nnode your-app.js\n```\n\n**[Redis Setup Guide](docs/getting-started/redis-quickstart.md)** | **[Production Configuration](docs/guides/redis-configuration.md)** | **[Migration Patterns](docs/guides/redis-migration.md)**\n\n---\n\n### 🎨 Professional CLI\n\n**15+ commands** for every workflow:\n\n| Command          | Purpose                              | Example                    | Documentation                             |\n| ---------------- | ------------------------------------ | -------------------------- | ----------------------------------------- |\n| `setup`          | Interactive provider configuration   | `neurolink setup`          | [Setup Guide](docs/cli/index.md)          |\n| `generate`       | Text generation                      | `neurolink gen \"Hello\"`    | [Generate](docs/cli/commands.md#generate) |\n| `stream`         | Streaming generation                 | `neurolink stream \"Story\"` | [Stream](docs/cli/commands.md#stream)     |\n| `status`         | Provider health check                | `neurolink status`         | [Status](docs/cli/commands.md#status)     |\n| `loop`           | Interactive session                  | `neurolink loop`           | [Loop](docs/cli/commands.md#loop)         |\n| `mcp`            | MCP server management                | `neurolink mcp discover`   | [MCP CLI](docs/cli/commands.md#mcp)       |\n| `models`         | Model listing                        | `neurolink models`         | 
[Models](docs/cli/commands.md#models)     |\n| `eval`           | Model evaluation                     | `neurolink eval`           | [Eval](docs/cli/commands.md#eval)         |\n| `serve`          | Start HTTP server in foreground mode | `neurolink serve`          | [Serve](docs/cli/commands.md#serve)       |\n| `server start`   | Start HTTP server in background mode | `neurolink server start`   | [Server](docs/cli/commands.md#server)     |\n| `server stop`    | Stop running background server       | `neurolink server stop`    | [Server](docs/cli/commands.md#server)     |\n| `server status`  | Show server status information       | `neurolink server status`  | [Server](docs/cli/commands.md#server)     |\n| `server routes`  | List all registered API routes       | `neurolink server routes`  | [Server](docs/cli/commands.md#server)     |\n| `server config`  | View or modify server configuration  | `neurolink server config`  | [Server](docs/cli/commands.md#server)     |\n| `server openapi` | Generate OpenAPI specification       | `neurolink server openapi` | [Server](docs/cli/commands.md#server)     |\n| `rag chunk`      | Chunk documents for RAG              | `neurolink rag chunk f.md` | [RAG CLI](docs/cli/commands.md#rag)       |\n\n**RAG flags** are available on `generate` and `stream`: `--rag-files`, `--rag-strategy`, `--rag-chunk-size`, `--rag-chunk-overlap`, `--rag-top-k`\n\n**[📖 Complete CLI Reference](docs/cli/commands.md)** - All commands and options\n\n---\n\n### 🤖 GitHub Action\n\nRun AI-powered workflows directly in GitHub Actions with 13 provider support and automatic PR/issue commenting.\n\n```yaml\n- uses: juspay/neurolink@v1\n  with:\n    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}\n    prompt: \"Review this PR for security issues and code quality\"\n    post_comment: true\n```\n\n| Feature                | Description                                                                               |\n| ---------------------- | ----------------------------------------------------------------------------------------- |\n| **Multi-Provider**     | 13 providers with unified interface                                                       |\n| **PR/Issue Comments**  | Auto-post AI responses with intelligent updates                                           |\n| **Multimodal Support** | Attach images, PDFs, CSVs, Excel, Word, JSON, YAML, XML, HTML, SVG, code files to prompts |\n| **Cost Tracking**      | Built-in analytics and quality evaluation                                                 |\n| **Extended Thinking**  | Deep reasoning with thinking tokens                                                       |\n\n**[📖 GitHub Action Guide](docs/guides/github-action.md)** - Complete setup and examples\n\n---\n\n## 💰 Smart Model Selection\n\nNeuroLink features intelligent model selection and cost optimization:\n\n### Cost Optimization Features\n\n- **💰 Automatic Cost Optimization**: Selects cheapest models for simple tasks\n- **🔄 LiteLLM Model Routing**: Access 100+ models with automatic load balancing\n- **🔍 Capability-Based Selection**: Find models with specific features (vision, function calling)\n- **⚡ Intelligent Fallback**: Seamless switching when providers fail\n\n```bash\n# Cost optimization - automatically use cheapest model\nnpx @juspay/neurolink generate \"Hello\" --optimize-cost\n\n# LiteLLM specific model selection\nnpx @juspay/neurolink generate \"Complex analysis\" --provider litellm --model \"anthropic/claude-3-5-sonnet\"\n\n# Auto-select best available 
provider\nnpx @juspay/neurolink generate \"Write code\" # Automatically chooses optimal provider\n```\n\n## Revolutionary Interactive CLI\n\nNeuroLink's CLI goes beyond simple commands - it's a **full AI development environment**:\n\n### Why Interactive Mode Changes Everything\n\n| Feature       | Traditional CLI   | NeuroLink Interactive          |\n| ------------- | ----------------- | ------------------------------ |\n| Session State | None              | Full persistence               |\n| Memory        | Per-command       | Conversation-aware             |\n| Configuration | Flags per command | `/set` persists across session |\n| Tool Testing  | Manual per tool   | Live discovery \u0026 testing       |\n| Streaming     | Optional          | Real-time default              |\n\n### Live Demo: Development Session\n\n```bash\n$ npx @juspay/neurolink loop --enable-conversation-memory\n\nneurolink \u003e /set provider vertex\n✓ provider set to vertex (Gemini 3 support enabled)\n\nneurolink \u003e /set model gemini-3-flash-preview\n✓ model set to gemini-3-flash-preview\n\nneurolink \u003e Analyze my project architecture and suggest improvements\n\n✓ Analyzing your project structure...\n[AI provides detailed analysis, remembering context]\n\nneurolink \u003e Now implement the first suggestion\n[AI remembers previous context and implements suggestion]\n\nneurolink \u003e /mcp discover\n✓ Discovered 58 MCP tools:\n   GitHub: create_issue, list_repos, create_pr...\n   PostgreSQL: query, insert, update...\n   [full list]\n\nneurolink \u003e Use the GitHub tool to create an issue for this improvement\n✓ Creating issue... (requires HITL approval if configured)\n\nneurolink \u003e /export json \u003e session-2026-01-01.json\n✓ Exported 15 messages to session-2026-01-01.json\n\nneurolink \u003e exit\nSession saved. Resume with: neurolink loop --session session-2026-01-01.json\n```\n\n### Session Commands Reference\n\n| Command              | Purpose                                              |\n| -------------------- | ---------------------------------------------------- |\n| `/set \u003ckey\u003e \u003cvalue\u003e` | Persist configuration (provider, model, temperature) |\n| `/mcp discover`      | List all available MCP tools                         |\n| `/export json`       | Export conversation to JSON                          |\n| `/history`           | View conversation history                            |\n| `/clear`             | Clear context while keeping settings                 |\n\n**[Interactive CLI Guide](docs/features/interactive-cli.md)** | **[CLI Reference](docs/cli/commands.md)**\n\nSkip the wizard and configure manually? 
See [`docs/getting-started/provider-setup.md`](docs/getting-started/provider-setup.md).\n\n## CLI \u0026 SDK Essentials\n\n`neurolink` CLI mirrors the SDK so teams can script experiments and codify them later.\n\n```bash\n# Discover available providers and models\nnpx @juspay/neurolink status\nnpx @juspay/neurolink models list --provider google-ai\n\n# Route to a specific provider/model\nnpx @juspay/neurolink generate \"Summarize customer feedback\" \\\n  --provider azure --model gpt-4o-mini\n\n# Turn on analytics + evaluation for observability\nnpx @juspay/neurolink generate \"Draft release notes\" \\\n  --enable-analytics --enable-evaluation --format json\n\n# RAG: Ask questions about your docs (auto-chunks, embeds, searches)\nnpx @juspay/neurolink generate \"What are the key features?\" \\\n  --rag-files ./docs/guide.md ./docs/api.md --rag-strategy markdown\n\n# Claude proxy + local OpenObserve dashboard\nnpx @juspay/neurolink proxy setup\nnpx @juspay/neurolink proxy telemetry setup\nnpx @juspay/neurolink proxy status --format json\n```\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\nconst neurolink = new NeuroLink({\n  conversationMemory: {\n    enabled: true,\n  },\n  enableOrchestration: true,\n});\n\nconst result = await neurolink.generate({\n  input: {\n    text: \"Create a comprehensive analysis\",\n    files: [\n      \"./sales_data.csv\", // Auto-detected as CSV\n      \"examples/data/invoice.pdf\", // Auto-detected as PDF\n      \"./diagrams/architecture.png\", // Auto-detected as image\n      \"./report.xlsx\", // Auto-detected as Excel\n      \"./config.json\", // Auto-detected as JSON\n      \"./diagram.svg\", // Auto-detected as SVG (injected as text)\n      \"./app.ts\", // Auto-detected as TypeScript code\n    ],\n  },\n  provider: \"vertex\", // PDF-capable provider (see docs/features/pdf-support.md)\n  enableEvaluation: true,\n  region: \"us-east-1\",\n});\n\nconsole.log(result.content);\nconsole.log(result.evaluation?.overallScore);\n\n// RAG: Ask questions about your documents\nconst answer = await neurolink.generate({\n  input: { text: \"What are the main architectural decisions?\" },\n  rag: {\n    files: [\"./docs/architecture.md\", \"./docs/decisions.md\"],\n    strategy: \"markdown\",\n    topK: 5,\n  },\n});\nconsole.log(answer.content); // AI searches your docs and answers\n```\n\n### Gemini 3 with Extended Thinking\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\nconst neurolink = new NeuroLink();\n\n// Use Gemini 3 with extended thinking for complex reasoning\nconst result = await neurolink.generate({\n  input: {\n    text: \"Solve this step by step: What is the optimal strategy for...\",\n  },\n  provider: \"vertex\",\n  model: \"gemini-3-flash-preview\",\n  thinkingConfig: {\n    thinkingLevel: \"medium\", // Options: \"minimal\", \"low\", \"medium\", \"high\"\n  },\n});\n\nconsole.log(result.content);\n```\n\nFull command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).\n\n## Platform Capabilities at a Glance\n\n| Capability               | Highlights                                                                                                               |\n| ------------------------ | ------------------------------------------------------------------------------------------------------------------------ |\n| **Provider unification** | 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3).   
                               |\n| **Multimodal pipeline**  | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |\n| **Quality \u0026 governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging.                             |\n| **Memory \u0026 context**     | Conversation memory, Redis history export (Q4), context summarization (Q4).                                              |\n| **CLI tooling**          | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output.                                     |\n| **Enterprise ops**       | Proxy support, regional routing (Q3), telemetry hooks, local OpenObserve dashboard setup, configuration management.      |\n| **Tool ecosystem**       | MCP auto-discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search.    |\n
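\nThe quickest way to see the provider unification row above in practice is to change nothing but the provider/model pair on an existing `generate()` call. The sketch below is illustrative rather than NeuroLink's built-in fallback: it reuses the provider/model pairs shown earlier in this README, assumes their credentials are already configured (see the provider setup guide), and hand-rolls a retry loop purely to show that switching providers is a one-parameter change.\n\n```typescript\nimport { NeuroLink } from \"@juspay/neurolink\";\n\nconst neurolink = new NeuroLink();\n\n// Provider/model pairs reused from the examples above; both assume\n// credentials are already configured via the provider setup guide.\nconst targets = [\n  { provider: \"azure\", model: \"gpt-4o-mini\" },\n  { provider: \"vertex\", model: \"gemini-3-flash-preview\" },\n];\n\n// Hand-rolled fallback loop for illustration only: the generate() call\n// itself never changes, only the provider/model parameters do.\nasync function generateWithFallback(text: string) {\n  for (const target of targets) {\n    try {\n      return await neurolink.generate({ input: { text }, ...target });\n    } catch (error) {\n      console.warn(`${target.provider} failed, trying the next provider`, error);\n    }\n  }\n  throw new Error(\"All configured providers failed\");\n}\n\nconst result = await generateWithFallback(\"Summarize this week's support tickets\");\nconsole.log(result.content);\n```\n\nFor production failover, prefer the automatic fallback and orchestration options listed in the table; the loop above only demonstrates that the `generate()` surface stays the same across providers.\n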
\n## Documentation Map\n\n| Area            | When to Use                                               | Link                                                             |\n| --------------- | --------------------------------------------------------- | ---------------------------------------------------------------- |\n| Getting started | Install, configure, run first prompt                      | [`docs/getting-started/index.md`](docs/getting-started/index.md) |\n| Feature guides  | Understand new functionality front-to-back                | [`docs/features/index.md`](docs/features/index.md)               |\n| CLI reference   | Command syntax, flags, loop sessions                      | [`docs/cli/index.md`](docs/cli/index.md)                         |\n| SDK reference   | Classes, methods, options                                 | [`docs/sdk/index.md`](docs/sdk/index.md)                         |\n| RAG             | Document chunking, hybrid search, reranking, `rag:{}` API | [`docs/features/rag.md`](docs/features/rag.md)                   |\n| Integrations    | LiteLLM, SageMaker, MCP                                   | [`docs/litellm-integration.md`](docs/litellm-integration.md)     |\n| Advanced        | Middleware, architecture, streaming patterns              | [`docs/advanced/index.md`](docs/advanced/index.md)               |\n| Cookbook        | Practical recipes for common patterns                     | [`docs/cookbook/index.md`](docs/cookbook/index.md)               |\n| Guides          | Migration, Redis, troubleshooting, provider selection     | [`docs/guides/index.md`](docs/guides/index.md)                   |\n| Operations      | Configuration, troubleshooting, provider matrix           | [`docs/reference/index.md`](docs/reference/index.md)             |\n\n### New in 2026: Enhanced Documentation\n\n**Enterprise Features:**\n\n- [Enterprise HITL Guide](docs/features/enterprise-hitl.md) - Production-ready approval workflows\n- [Interactive CLI Guide](docs/features/interactive-cli.md) - AI development environment\n- [MCP Tools Showcase](docs/features/mcp-tools-showcase.md) - 58+ external tools \u0026 6 built-in tools\n\n**Provider Intelligence:**\n\n- [Provider Capabilities Audit](docs/reference/provider-capabilities-audit.md) - Technical capabilities matrix\n- [Provider Selection Guide](docs/guides/provider-selection.md) - Interactive decision wizard\n- [Provider Comparison](docs/reference/provider-comparison.md) - Feature \u0026 cost comparison\n\n**Middleware System:**\n\n- [Middleware Architecture](docs/advanced/middleware-architecture.md) - Complete lifecycle \u0026 patterns\n- [Built-in Middleware](docs/advanced/builtin-middleware.md) - Analytics, Guardrails, Evaluation\n- [Custom Middleware Guide](docs/custom-middleware-guide.md) - Build your own\n\n**Redis \u0026 Persistence:**\n\n- [Redis Quick Start](docs/getting-started/redis-quickstart.md) - 5-minute setup\n- [Redis Configuration](docs/guides/redis-configuration.md) - Production-ready setup\n- [Redis Migration](docs/guides/redis-migration.md) - Migration patterns\n\n**Migration Guides:**\n\n- [From LangChain](docs/guides/migration/from-langchain.md) - Complete migration guide\n- [From Vercel AI SDK](docs/guides/migration/from-vercel-ai-sdk.md) - Next.js focused\n\n**Developer Experience:**\n\n- [Cookbook](docs/cookbook/index.md) - 10 practical recipes\n- [Troubleshooting Guide](docs/guides/troubleshooting.md) - Common issues \u0026 solutions\n\n## Integrations\n\n- **LiteLLM 100+ model hub** – Unified access to third-party models via LiteLLM routing. → [`docs/litellm-integration.md`](docs/litellm-integration.md)\n- **Amazon SageMaker** – Deploy and call custom endpoints directly from NeuroLink CLI/SDK. → [`docs/sagemaker-integration.md`](docs/sagemaker-integration.md)\n- **Enterprise proxy \u0026 security** – Configure outbound policies and compliance posture. → [`docs/enterprise-proxy-setup.md`](docs/enterprise-proxy-setup.md)\n- **Configuration automation** – Manage environments, regions, and credentials safely. → [`docs/configuration-management.md`](docs/configuration-management.md)\n- **MCP tool ecosystem** – Auto-discover Model Context Protocol tools and extend workflows. → [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)\n- **Remote MCP via HTTP** – Connect to HTTP-based MCP servers with authentication, retries, and rate limiting. → [`docs/mcp-http-transport.md`](docs/mcp-http-transport.md)\n\n## Contributing \u0026 Support\n\n- Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)\n- Development workflow, testing, and pull request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)\n- Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md).\n\n---\n\nNeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.\n","funding_links":[],"categories":["Aggregators \u0026 Gateways","Ai Integration Mcp Servers","Tools \u0026 Code","SDKs","Frameworks","CLIs","Developer Tools","Python","Built with TypeScript","🌟 Core Frameworks","Orchestration","LLMOps","Libraries","Aggregators","📚 Projects (2474 total)","CI/CD \u0026 DevOps Pipelines","Machine Learning Platform","Building"],"sub_categories":["Platforms \u0026 Registries","JavaScript/TypeScript","How to Submit","General-Purpose Machine Learning","Libraries","Application Framework","Observability","MCP Servers","🔗 Aggregators","Application Frameworks","Frameworks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjuspay%2Fneurolink","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjuspay%2Fneurolink","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjuspay%2Fneurolink/lists"}