https://github.com/juspay/neurolink
Universal AI Development Platform with MCP server integration, multi-provider support, and professional CLI. Build, test, and deploy AI applications with multiple AI providers.
- Host: GitHub
- URL: https://github.com/juspay/neurolink
- Owner: juspay
- License: mit
- Created: 2025-05-31T15:05:42.000Z (8 months ago)
- Default Branch: release
- Last Pushed: 2026-01-27T07:24:47.000Z (7 days ago)
- Last Synced: 2026-01-27T20:26:08.062Z (6 days ago)
- Topics: ai, ai-development, ai-platform, automation, developer-tools, hacktoberfest, llm, mcp, model-context-protocol, openai, universal-ai
- Language: TypeScript
- Homepage: https://docs.neurolink.ink
- Size: 213 MB
- Stars: 101
- Watchers: 2
- Forks: 90
- Open Issues: 414
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: .github/CODEOWNERS
- Security: SECURITY.md
Awesome Lists containing this project
- awesome-mcp - juspay/neurolink - multi-provider support ⭐ 103 (Aggregators & Gateways / Platforms & Registries)
- awesome-mcp-servers - Neurolink - Neurolink is an edge-first AI infrastructure platform that unifies multiple providers and models, exposing them via an MCP server so agents can access enterprise AI capabilities in a standardized way. ([Read more](/details/neurolink.md)) `ai-platform` `multi-provider` `infrastructure` (Ai Integration Mcp Servers)
- Awesome-Prompt-Engineering
- awesome-llm-agents - Neurolink - Multi-provider AI agent framework (Frameworks)
- toolsdk-mcp-registry - @juspay/neurolink - built-in MCP support, multi-provider failover, and production-ready enterprise features (node) (Developer Tools / How to Submit)
- awesome-machine-learning - Neurolink - Enterprise-grade LLM integration framework for building production-ready AI applications with built-in hallucination prevention, RAG, and MCP support. (Python / General-Purpose Machine Learning)
- awesome-typescript - Neurolink - Universal AI development platform that unifies 12+ AI providers (OpenAI, Anthropic, Google, Bedrock, Azure) with MCP support, multi-provider failover, and production-ready enterprise features. TypeScript SDK + CLI. (Built with TypeScript / Libraries)
- awesome-ai-agents - Neurolink - Multi-provider AI agent framework, unifies 12+ LLM providers, workflow orchestration (Core Frameworks)
- Awesome-LLMOps - Neurolink - Universal AI Development Platform with MCP server integration, multi-provider support, and professional CLI. Build, test, and deploy AI applications with multiple AI providers. (Orchestration / Application Framework)
- awesome-llmops - Neurolink - Multi-provider AI agent framework that unifies 12+ LLM providers (OpenAI, Google, Anthropic, AWS, Azure, Groq, etc.) with workflow orchestration. Production-grade platform for building LLM applications with streaming, tool calling, caching, and enterprise features. Battle-tested at 15M+ requests/month. (LLMOps / Observability)
- fucking-awesome-machine-learning - Neurolink - Enterprise-grade LLM integration framework for building production-ready AI applications with built-in hallucination prevention, RAG, and MCP support. (Python / General-Purpose Machine Learning)
- Awesome-RAG - Neurolink
- best-of-mcp-servers - GitHub (84% open · ⏱️ 23.01.2026) (Aggregators)
- llmops - Neurolink (Orchestration / Application Frameworks)
README
# NeuroLink
The Enterprise AI SDK for Production Applications
12 Providers | 58+ MCP Tools | HITL Security | Redis Persistence
[npm package](https://www.npmjs.com/package/@juspay/neurolink) · [CI](https://github.com/juspay/neurolink/actions/workflows/ci.yml) · [Coverage](https://coveralls.io/github/juspay/neurolink?branch=main) · [License: MIT](https://opensource.org/licenses/MIT) · [TypeScript](https://www.typescriptlang.org/) · [GitHub Stars](https://github.com/juspay/neurolink/stargazers) · [Discord](https://discord.gg/neurolink)
Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.
## What is NeuroLink?
**NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.**
Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.
**Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK, whichever fits your workflow.
**Where we're headed:** We're building for the future of AI: edge-first execution and continuous streaming architectures that make AI practically free and universally available. **[Read our vision →](docs/about/vision.md)**
**[Get Started in <5 Minutes →](docs/getting-started/quick-start.md)**
---
## What's New (Q1 2026)
| Feature | Version | Description | Guide |
| ---------------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------ |
| **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | [Conversation Memory Guide](docs/conversation-memory.md) |
| **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | [Video Generation Guide](docs/features/video-generation.md) |
| **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
| **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |
- **Title Generation Events** - Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
- **Custom Title Prompts** - Customize conversation title generation with the `NEUROLINK_TITLE_PROMPT` environment variable. Use the `${userMessage}` placeholder for dynamic prompts. → [Conversation Memory Guide](docs/conversation-memory.md#customizing-the-title-prompt)
- **Video Generation** - Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions and portrait/landscape aspect ratios. → [Video Generation Guide](docs/features/video-generation.md)
- **Image Generation** - Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → [Image Generation Guide](docs/IMAGE-GENERATION-STREAMING.md)
- **HTTP/Streamable HTTP Transport for MCP** - Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → [HTTP Transport Guide](docs/MCP-HTTP-TRANSPORT.md)
- **Gemini 3 Preview Support** - Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking capabilities
- **Structured Output with Zod Schemas** - Type-safe JSON generation with automatic validation using `schema` + `output.format: "json"` in `generate()`. → [Structured Output Guide](docs/features/structured-output.md)
- **CSV File Support** - Attach CSV files to prompts for AI-powered data analysis with auto-detection. → [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)
- **PDF File Support** - Process PDF documents with native visual analysis on Vertex AI, Anthropic, Bedrock, and AI Studio. → [PDF Guide](docs/features/pdf-support.md)
- **LiteLLM Integration** - Access 100+ AI models from all major providers through a unified interface. → [Setup Guide](docs/LITELLM-INTEGRATION.md)
- **SageMaker Integration** - Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/SAGEMAKER-INTEGRATION.md)
- **OpenRouter Integration** - Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → [Setup Guide](docs/getting-started/providers/openrouter.md)
- **Human-in-the-loop workflows** - Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)
- **Guardrails middleware** - Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)
- **Context summarization** - Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/CONTEXT-SUMMARIZATION.md)
- **Redis conversation export** - Export full session history as JSON for analytics and debugging. → [History Guide](docs/features/conversation-history.md)
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});
```
---
## Previous Updates (Q4 2025)
- **Image Generation** - Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → [Guide](docs/image-generation-streaming.md)
- **Gemini 3 Preview Support** - Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking
- **Structured Output with Zod Schemas** - Type-safe JSON generation with automatic validation. → [Guide](docs/features/structured-output.md)
- **CSV & PDF File Support** - Attach CSV/PDF files to prompts with auto-detection. → [CSV](docs/features/multimodal-chat.md#csv-file-support) | [PDF](docs/features/pdf-support.md)
- **LiteLLM & SageMaker** - Access 100+ models via LiteLLM, deploy custom models on SageMaker. → [LiteLLM](docs/litellm-integration.md) | [SageMaker](docs/sagemaker-integration.md)
- **OpenRouter Integration** - Access 300+ models through a single unified API. → [Guide](docs/getting-started/providers/openrouter.md)
- **HITL & Guardrails** - Human-in-the-loop approval workflows and content filtering middleware. → [HITL](docs/features/hitl.md) | [Guardrails](docs/features/guardrails.md)
- **Redis & Context Management** - Session export, conversation history, and automatic summarization. → [History](docs/features/conversation-history.md)
## Enterprise Security: Human-in-the-Loop (HITL)
NeuroLink includes a **production-ready HITL system** for regulated industries and high-stakes AI operations:
| Capability | Description | Use Case |
| --------------------------- | --------------------------------------------------------- | ------------------------------------------ |
| **Tool Approval Workflows** | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| **Output Validation** | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| **Confidence Thresholds** | Automatically trigger human review below confidence level | Critical business decisions |
| **Complete Audit Trail** | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});
```
**[Enterprise HITL Guide](docs/features/enterprise-hitl.md)** | **[Quick Start](docs/features/hitl.md)**
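The confidence-threshold routing described in the table above can be sketched in plain TypeScript. This is a hypothetical standalone helper that illustrates the decision logic a HITL gate applies, not NeuroLink's internal implementation; all names here (`routeAction`, `ProposedAction`) are illustrative:

```typescript
// Sketch of HITL routing: sensitive tools always need a human,
// everything else escalates only below a confidence threshold.
type Decision = "auto-approve" | "human-review";

interface ProposedAction {
  tool: string;
  confidence: number; // model's confidence in [0, 1]
}

function routeAction(
  action: ProposedAction,
  opts: { requireApproval: string[]; confidenceThreshold: number },
): Decision {
  // Tools on the requireApproval list always go to a human,
  // regardless of confidence.
  if (opts.requireApproval.includes(action.tool)) return "human-review";
  // Everything else is escalated only below the confidence threshold.
  return action.confidence >= opts.confidenceThreshold
    ? "auto-approve"
    : "human-review";
}

const opts = { requireApproval: ["writeFile", "sendEmail"], confidenceThreshold: 0.85 };
console.log(routeAction({ tool: "sendEmail", confidence: 0.99 }, opts)); // "human-review"
console.log(routeAction({ tool: "getCurrentTime", confidence: 0.9 }, opts)); // "auto-approve"
```

Note that the approval list takes precedence over confidence: a financial tool should never auto-approve just because the model is sure of itself.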
## Get Started in Two Steps
```bash
# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup
# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
```
Need a persistent workspace? Launch loop mode with `npx @juspay/neurolink loop` - [Learn more →](docs/features/cli-loop-sessions.md)
## Complete Feature Set
NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.
### AI Provider Integration
**13 providers unified under one API** - Switch providers with a single parameter change.
| Provider              | Models                             | Free Tier       | Tool Support | Status        | Documentation                                                           |
| --------------------- | ---------------------------------- | --------------- | ------------ | ------------- | ----------------------------------------------------------------------- |
| **OpenAI**            | GPT-4o, GPT-4o-mini, o1            | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai)            |
| **Anthropic**         | Claude 3.5/3.7 Sonnet, Opus        | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#anthropic)         |
| **Google AI Studio**  | Gemini 2.5 Flash/Pro               | ✅ Free Tier    | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#google-ai)         |
| **AWS Bedrock**       | Claude, Titan, Llama, Nova         | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#bedrock)           |
| **Google Vertex**     | Gemini 3/2.5 (gemini-3-\*-preview) | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#vertex)            |
| **Azure OpenAI**      | GPT-4, GPT-4o, o1                  | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#azure)             |
| **LiteLLM**           | 100+ models unified                | Varies          | ✅ Full      | ✅ Production | [Setup Guide](docs/litellm-integration.md)                              |
| **AWS SageMaker**     | Custom deployed models             | ❌              | ✅ Full      | ✅ Production | [Setup Guide](docs/sagemaker-integration.md)                            |
| **Mistral AI**        | Mistral Large, Small               | ✅ Free Tier    | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#mistral)           |
| **Hugging Face**      | 100,000+ models                    | ✅ Free         | ⚠️ Partial   | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#huggingface)       |
| **Ollama**            | Local models (Llama, Mistral)      | ✅ Free (Local) | ⚠️ Partial   | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#ollama)            |
| **OpenAI Compatible** | Any OpenAI-compatible endpoint     | Varies          | ✅ Full      | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai-compatible) |
**[Provider Comparison Guide](docs/reference/provider-comparison.md)** - Detailed feature matrix and selection criteria
**[Provider Feature Compatibility](docs/reference/provider-feature-compatibility.md)** - Test-based compatibility reference for all 19 features across 13 providers
---
### Built-in Tools & MCP Integration
**6 Core Tools** (work across all providers, zero configuration):
| Tool                 | Purpose                  | Auto-Available          | Documentation                                             |
| -------------------- | ------------------------ | ----------------------- | --------------------------------------------------------- |
| `getCurrentTime`     | Real-time clock access   | ✅                      | [Tool Reference](docs/sdk/custom-tools.md#getCurrentTime) |
| `readFile`           | File system reading      | ✅                      | [Tool Reference](docs/sdk/custom-tools.md#readFile)       |
| `writeFile`          | File system writing      | ✅                      | [Tool Reference](docs/sdk/custom-tools.md#writeFile)      |
| `listDirectory`      | Directory listing        | ✅                      | [Tool Reference](docs/sdk/custom-tools.md#listDirectory)  |
| `calculateMath`      | Mathematical operations  | ✅                      | [Tool Reference](docs/sdk/custom-tools.md#calculateMath)  |
| `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | [Tool Reference](docs/sdk/custom-tools.md#websearch)      |
**58+ External MCP Servers** supported (GitHub, PostgreSQL, Google Drive, Slack, and more):
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});
```
**MCP Transport Options:**
| Transport | Use Case | Key Features |
| ----------- | -------------- | ----------------------------------------------- |
| `stdio` | Local servers | Command execution, environment variables |
| `http` | Remote servers | URL-based, auth headers, retries, rate limiting |
| `sse` | Event streams | Server-Sent Events, real-time updates |
| `websocket` | Bi-directional | Full-duplex communication |
**[MCP Integration Guide](docs/advanced/mcp-integration.md)** - Set up external servers
**[HTTP Transport Guide](docs/mcp-http-transport.md)** - Remote MCP server configuration
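The HTTP transport advertises automatic retry with exponential backoff. The pattern can be sketched in plain TypeScript; this is a hypothetical standalone helper (`withRetry` is not an SDK export), shown only to illustrate what "retry with exponential backoff" means here:

```typescript
// Retry a failing async call, doubling the delay between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 250 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Delays grow as baseDelayMs * 2^attempt: 250ms, 500ms, 1000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Example: a call that fails twice, then succeeds on the third attempt.
let calls = 0;
const result = await withRetry(
  async () => {
    calls++;
    if (calls < 3) throw new Error("transient network error");
    return "ok";
  },
  { retries: 3, baseDelayMs: 1 },
);
console.log(result, calls); // "ok" 3
```

In practice you would also cap the maximum delay and add jitter so many clients retrying at once do not stampede the server.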
---
### Developer Experience Features
**SDK-First Design** with TypeScript, IntelliSense, and type safety:
| Feature | Description | Documentation |
| --------------------------- | ------------------------------------------------------------- | --------------------------------------------------------- |
| **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
| **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
| **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
| **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
| **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
| **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
| **Middleware System** | Request/response hooks | [Middleware Guide](docs/custom-middleware-guide.md) |
| **Framework Integration** | Next.js, SvelteKit, Express | [Framework Guides](docs/sdk/framework-integration.md) |
| **Extended Thinking** | Native thinking/reasoning mode for Gemini 3 and Claude models | [Thinking Guide](docs/features/thinking-configuration.md) |
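Streaming responses, listed above, deliver tokens incrementally so UIs can render as text arrives. The consumption pattern is an async iterable; the stream shape below is illustrative (see the Streaming Guide for the SDK's actual stream API):

```typescript
// A fake token stream standing in for a provider's streaming response.
async function* fakeTokenStream(text: string): AsyncGenerator<string> {
  for (const token of text.split(" ")) {
    yield token + " ";
  }
}

// Consume the stream chunk by chunk, as a UI would.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    out += chunk; // in a real app: render the chunk as it arrives
  }
  return out.trimEnd();
}

const text = await collect(fakeTokenStream("streaming keeps UIs responsive"));
console.log(text); // "streaming keeps UIs responsive"
```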
---
### Enterprise & Production Features
**Production-ready capabilities for regulated industries:**
| Feature | Description | Use Case | Documentation |
| --------------------------- | ---------------------------------- | ------------------------- | ----------------------------------------------------------- |
| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
| **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
| **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
| **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
| **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
| **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
| **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |
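The multi-provider failover row above boils down to trying providers in priority order and returning the first success. A minimal sketch, assuming nothing about NeuroLink's routing engine (the `withFailover` helper and `Generate` type are hypothetical):

```typescript
// Try each provider in order; fall through to the next on error.
type Generate = (prompt: string) => Promise<string>;

async function withFailover(
  providers: Array<{ name: string; generate: Generate }>,
  prompt: string,
): Promise<{ provider: string; output: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return { provider: p.name, output: await p.generate(prompt) };
    } catch (err) {
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}

// Example: primary provider is rate limited, secondary answers.
const result = await withFailover(
  [
    { name: "openai", generate: async () => { throw new Error("rate limited"); } },
    { name: "vertex", generate: async (p) => `vertex says: ${p}` },
  ],
  "hello",
);
console.log(result.provider); // "vertex"
```

A production router would add per-provider health tracking and circuit breaking so a failing provider is skipped quickly instead of retried on every request.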
**Security & Compliance:**
- ✅ SOC2 Type II compliant deployments
- ✅ ISO 27001 certified infrastructure compatible
- ✅ GDPR-compliant data handling (EU providers available)
- ✅ HIPAA compatible (with proper configuration)
- ✅ Hardened OS verified (SELinux, AppArmor)
- ✅ Zero credential logging
- ✅ Encrypted configuration storage
**[Enterprise Deployment Guide](docs/advanced/enterprise.md)** - Complete production checklist
---
## Enterprise Persistence: Redis Memory
Production-ready distributed conversation state for multi-instance deployments:
### Capabilities
| Feature | Description | Benefit |
| ---------------------- | -------------------------------------------- | --------------------------- |
| **Distributed Memory** | Share conversation context across instances | Horizontal scaling |
| **Session Export** | Export full history as JSON | Analytics, debugging, audit |
| **Auto-Detection** | Automatic Redis discovery from environment | Zero-config in containers |
| **Graceful Failover** | Falls back to in-memory if Redis unavailable | High availability |
| **TTL Management** | Configurable session expiration | Memory management |
### Quick Setup
```typescript
import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);
```
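The graceful-failover row in the table (fall back to in-memory when Redis is unavailable) can be sketched as follows. The `MemoryStore` interface and `createStore` factory are hypothetical, simplified stand-ins, not the SDK's conversation-memory types:

```typescript
// A minimal store interface and an in-memory fallback implementation.
interface MemoryStore {
  get(sessionId: string): Promise<string[] | undefined>;
  append(sessionId: string, message: string): Promise<void>;
}

class InMemoryStore implements MemoryStore {
  private data = new Map<string, string[]>();
  async get(id: string) {
    return this.data.get(id);
  }
  async append(id: string, msg: string) {
    const list = this.data.get(id) ?? [];
    list.push(msg);
    this.data.set(id, list);
  }
}

// Prefer Redis; if the connection fails, degrade to in-memory state.
async function createStore(connectRedis: () => Promise<MemoryStore>): Promise<MemoryStore> {
  try {
    return await connectRedis();
  } catch {
    console.warn("Redis unavailable, falling back to in-memory store");
    return new InMemoryStore();
  }
}

// Simulate an unreachable Redis: the app keeps working, just without
// cross-instance persistence.
const store = await createStore(async () => {
  throw new Error("ECONNREFUSED");
});
await store.append("session-1", "hello");
console.log(await store.get("session-1")); // [ "hello" ]
```

The trade-off: after fallback, conversation state is local to one instance, so a load balancer should pin sessions until Redis recovers.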
### Docker Quick Start
```bash
# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine
# Configure NeuroLink
export REDIS_URL=redis://localhost:6379
# Start your application
node your-app.js
```
**[Redis Setup Guide](docs/getting-started/redis-quickstart.md)** | **[Production Configuration](docs/guides/redis-configuration.md)** | **[Migration Patterns](docs/guides/redis-migration.md)**
---
### Professional CLI
**15+ commands** for every workflow:
| Command | Purpose | Example | Documentation |
| ---------- | ---------------------------------- | -------------------------- | ----------------------------------------- |
| `setup` | Interactive provider configuration | `neurolink setup` | [Setup Guide](docs/cli/index.md) |
| `generate` | Text generation | `neurolink gen "Hello"` | [Generate](docs/cli/commands.md#generate) |
| `stream` | Streaming generation | `neurolink stream "Story"` | [Stream](docs/cli/commands.md#stream) |
| `status` | Provider health check | `neurolink status` | [Status](docs/cli/commands.md#status) |
| `loop` | Interactive session | `neurolink loop` | [Loop](docs/cli/commands.md#loop) |
| `mcp` | MCP server management | `neurolink mcp discover` | [MCP CLI](docs/cli/commands.md#mcp) |
| `models` | Model listing | `neurolink models` | [Models](docs/cli/commands.md#models) |
| `eval` | Model evaluation | `neurolink eval` | [Eval](docs/cli/commands.md#eval) |
**[Complete CLI Reference](docs/cli/commands.md)** - All commands and options
---
### GitHub Action
Run AI-powered workflows directly in GitHub Actions with support for all 13 providers and automatic PR/issue commenting.
```yaml
- uses: juspay/neurolink@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for security issues and code quality"
    post_comment: true
```
| Feature | Description |
| ---------------------- | ----------------------------------------------- |
| **Multi-Provider** | 13 providers with unified interface |
| **PR/Issue Comments** | Auto-post AI responses with intelligent updates |
| **Multimodal Support** | Attach images, PDFs, CSVs to prompts |
| **Cost Tracking** | Built-in analytics and quality evaluation |
| **Extended Thinking** | Deep reasoning with thinking tokens |
**[GitHub Action Guide](docs/guides/github-action.md)** - Complete setup and examples
---
## Smart Model Selection
NeuroLink features intelligent model selection and cost optimization:
### Cost Optimization Features
- **Automatic Cost Optimization**: Selects the cheapest model for simple tasks
- **LiteLLM Model Routing**: Access 100+ models with automatic load balancing
- **Capability-Based Selection**: Find models with specific features (vision, function calling)
- **Intelligent Fallback**: Seamless switching when providers fail
```bash
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost
# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"
# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
```
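The `--optimize-cost` idea (pick the cheapest model that can still do the job) can be sketched in plain TypeScript. Prices, model names, and capability tags below are made up for illustration; this is not NeuroLink's actual pricing table or selection algorithm:

```typescript
// Pick the cheapest model that has every capability the task needs.
interface ModelInfo {
  model: string;
  costPer1kTokens: number; // USD, illustrative figures only
  capabilities: string[];
}

function pickCheapest(models: ModelInfo[], required: string[] = []): ModelInfo {
  // Keep only models that satisfy all required capabilities.
  const eligible = models.filter((m) =>
    required.every((cap) => m.capabilities.includes(cap)),
  );
  if (eligible.length === 0) throw new Error("no model meets requirements");
  // Then take the minimum by cost.
  return eligible.reduce((a, b) => (b.costPer1kTokens < a.costPer1kTokens ? b : a));
}

const catalog: ModelInfo[] = [
  { model: "gpt-4o", costPer1kTokens: 0.005, capabilities: ["text", "vision", "tools"] },
  { model: "gpt-4o-mini", costPer1kTokens: 0.00015, capabilities: ["text", "tools"] },
  { model: "claude-3-5-sonnet", costPer1kTokens: 0.003, capabilities: ["text", "vision", "tools"] },
];

console.log(pickCheapest(catalog).model); // "gpt-4o-mini"
console.log(pickCheapest(catalog, ["vision"]).model); // "claude-3-5-sonnet"
```

Capability filtering runs before cost ranking, which is why a vision task skips the cheapest text-only model.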
## Revolutionary Interactive CLI
NeuroLink's CLI goes beyond simple commands - it's a **full AI development environment**:
### Why Interactive Mode Changes Everything
| Feature | Traditional CLI | NeuroLink Interactive |
| ------------- | ----------------- | ------------------------------ |
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | `/set` persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |
### Live Demo: Development Session
```bash
$ npx @juspay/neurolink loop --enable-conversation-memory
neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)
neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview
neurolink > Analyze my project architecture and suggest improvements
✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]
neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]
neurolink > /mcp discover
✓ Discovered 58 MCP tools:
  GitHub: create_issue, list_repos, create_pr...
  PostgreSQL: query, insert, update...
  [full list]
neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)
neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json
neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json
```
### Session Commands Reference
| Command | Purpose |
| -------------------- | ---------------------------------------------------- |
| `/set <key> <value>` | Persist configuration (provider, model, temperature) |
| `/mcp discover` | List all available MCP tools |
| `/export json` | Export conversation to JSON |
| `/history` | View conversation history |
| `/clear` | Clear context while keeping settings |
**[Interactive CLI Guide](docs/features/interactive-cli.md)** | **[CLI Reference](docs/cli/commands.md)**
Prefer to skip the wizard and configure manually? See [`docs/getting-started/provider-setup.md`](docs/getting-started/provider-setup.md).
## CLI & SDK Essentials
`neurolink` CLI mirrors the SDK so teams can script experiments and codify them later.
```bash
# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai
# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
--provider azure --model gpt-4o-mini
# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
--enable-analytics --enable-evaluation --format json
```
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);
```
### Gemini 3 with Extended Thinking
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);
```
Full command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).
## Platform Capabilities at a Glance
| Capability | Highlights |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------ |
| **Provider unification** | 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| **Multimodal pipeline** | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| **Quality & governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| **Memory & context** | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| **CLI tooling** | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| **Enterprise ops** | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| **Tool ecosystem** | MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search. |
## Documentation Map
| Area | When to Use | Link |
| --------------- | ----------------------------------------------------- | ---------------------------------------------------------------- |
| Getting started | Install, configure, run first prompt | [`docs/getting-started/index.md`](docs/getting-started/index.md) |
| Feature guides | Understand new functionality front-to-back | [`docs/features/index.md`](docs/features/index.md) |
| CLI reference | Command syntax, flags, loop sessions | [`docs/cli/index.md`](docs/cli/index.md) |
| SDK reference | Classes, methods, options | [`docs/sdk/index.md`](docs/sdk/index.md) |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | [`docs/litellm-integration.md`](docs/litellm-integration.md) |
| Advanced | Middleware, architecture, streaming patterns | [`docs/advanced/index.md`](docs/advanced/index.md) |
| Cookbook | Practical recipes for common patterns | [`docs/cookbook/index.md`](docs/cookbook/index.md) |
| Guides | Migration, Redis, troubleshooting, provider selection | [`docs/guides/index.md`](docs/guides/index.md) |
| Operations | Configuration, troubleshooting, provider matrix | [`docs/reference/index.md`](docs/reference/index.md) |
### New in 2026: Enhanced Documentation
**Enterprise Features:**
- [Enterprise HITL Guide](docs/features/enterprise-hitl.md) - Production-ready approval workflows
- [Interactive CLI Guide](docs/features/interactive-cli.md) - AI development environment
- [MCP Tools Showcase](docs/features/mcp-tools-showcase.md) - 58+ external tools & 6 built-in tools
**Provider Intelligence:**
- [Provider Capabilities Audit](docs/reference/provider-capabilities-audit.md) - Technical capabilities matrix
- [Provider Selection Guide](docs/guides/provider-selection.md) - Interactive decision wizard
- [Provider Comparison](docs/reference/provider-comparison.md) - Feature & cost comparison
**Middleware System:**
- [Middleware Architecture](docs/advanced/middleware-architecture.md) - Complete lifecycle & patterns
- [Built-in Middleware](docs/advanced/builtin-middleware.md) - Analytics, Guardrails, Evaluation
- [Custom Middleware Guide](docs/custom-middleware-guide.md) - Build your own
**Redis & Persistence:**
- [Redis Quick Start](docs/getting-started/redis-quickstart.md) - 5-minute setup
- [Redis Configuration](docs/guides/redis-configuration.md) - Production-ready setup
- [Redis Migration](docs/guides/redis-migration.md) - Migration patterns
**Migration Guides:**
- [From LangChain](docs/guides/migration/from-langchain.md) - Complete migration guide
- [From Vercel AI SDK](docs/guides/migration/from-vercel-ai-sdk.md) - Next.js focused
**Developer Experience:**
- [Cookbook](docs/cookbook/index.md) - 10 practical recipes
- [Troubleshooting Guide](docs/guides/troubleshooting.md) - Common issues & solutions
## Integrations
- **LiteLLM 100+ model hub** - Unified access to third-party models via LiteLLM routing. → [`docs/litellm-integration.md`](docs/litellm-integration.md)
- **Amazon SageMaker** - Deploy and call custom endpoints directly from the NeuroLink CLI/SDK. → [`docs/sagemaker-integration.md`](docs/sagemaker-integration.md)
- **Mem0 conversational memory** - Persistent semantic memory with vector store support. → [`docs/mem0-integration.md`](docs/mem0-integration.md)
- **Enterprise proxy & security** - Configure outbound policies and compliance posture. → [`docs/enterprise-proxy-setup.md`](docs/enterprise-proxy-setup.md)
- **Configuration automation** - Manage environments, regions, and credentials safely. → [`docs/configuration-management.md`](docs/configuration-management.md)
- **MCP tool ecosystem** - Auto-discover Model Context Protocol tools and extend workflows. → [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)
- **Remote MCP via HTTP** - Connect to HTTP-based MCP servers with authentication, retries, and rate limiting. → [`docs/mcp-http-transport.md`](docs/mcp-http-transport.md)
## Contributing & Support
- Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)
- Development workflow, testing, and pull request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)
- Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md).
---
NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.