{"id":47005105,"url":"https://github.com/riktar/slang","last_synced_at":"2026-03-17T01:01:05.111Z","repository":{"id":343522679,"uuid":"1178082290","full_name":"riktar/slang","owner":"riktar","description":"A declarative meta-language for orchestrating multi-agent workflows. Readable by humans. Executable by LLMs. Portable across models.","archived":false,"fork":false,"pushed_at":"2026-03-13T10:11:42.000Z","size":1084,"stargazers_count":10,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2026-03-14T08:05:17.900Z","etag":null,"topics":["agentic-ai","agentic-ai-development","agentic-engineering","agentic-workflow","llm-agent","meta-language"],"latest_commit_sha":null,"homepage":"https://riktar.github.io/slang/","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/riktar.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-10T17:07:07.000Z","updated_at":"2026-03-13T10:16:16.000Z","dependencies_parsed_at":null,"dependency_job_id":"de48f89a-5366-4042-8192-7d6ee1c8f69c","html_url":"https://github.com/riktar/slang","commit_stats":null,"previous_names":["riktar/slang"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/riktar/slang","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/riktar%2Fslang","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/riktar%2Fslang/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/riktar%2Fslang/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/riktar%2Fslang/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/riktar","download_url":"https://codeload.github.com/riktar/slang/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/riktar%2Fslang/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30556384,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-15T23:30:23.986Z","status":"ssl_error","status_checked_at":"2026-03-15T23:28:43.564Z","response_time":61,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agentic-ai","agentic-ai-development","agentic-engineering","agentic-workflow","llm-agent","meta-language"],"created_at":"2026-03-11T20:09:30.113Z","updated_at":"2026-03-16T00:01:08.349Z","avatar_url":"https://github.com/riktar.png","language":"TypeScript","readme":"\u003ch1 align=\"center\"\u003e🗣️ SLANG\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003eThe SQL of AI agents.\u003c/strong\u003e\u003cbr/\u003e\n  A declarative meta-language for orchestrating multi-agent workflows.\u003cbr/\u003e\n  Readable by humans. Executable by LLMs. Portable across models.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#quick-start\"\u003eQuick Start\u003c/a\u003e •\n  \u003ca href=\"#zero-setup\"\u003eZero Setup\u003c/a\u003e •\n  \u003ca href=\"#examples\"\u003eExamples\u003c/a\u003e •\n  \u003ca href=\"#cli\"\u003eCLI\u003c/a\u003e •\n  \u003ca href=\"#api\"\u003eAPI\u003c/a\u003e •\n  \u003ca href=\"#playground\"\u003ePlayground\u003c/a\u003e •\n  \u003ca href=\"#mcp-server\"\u003eMCP Server\u003c/a\u003e •\n  \u003ca href=\"SPEC.md\"\u003eSpec\u003c/a\u003e •\n  \u003ca href=\"GRAMMAR.md\"\u003eGrammar\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\n## Quick Start\n\n### 1. Install\n\n```bash\nnpm install -g @riktar/slang\n```\n\n### 2. Create a project\n\n```bash\nslang init my-project\ncd my-project\n```\n\nThis generates `hello.slang`, `research.slang`, `tools.js`, and `.env.example`.\n\n### 3. Configure (optional)\n\n```bash\ncp .env.example .env\n# Edit .env with your API key — SLANG loads it automatically\n```\n\n### 4. Run\n\n```bash\nslang run hello.slang                                    # echo adapter (no API key needed)\nslang run hello.slang --adapter openrouter               # uses OPENROUTER_API_KEY from .env\nslang run research.slang --adapter openai --tools tools.js\n```\n\n### 5. Explore the playground\n\n```bash\nslang playground\n# Open http://localhost:5174 — edit, visualize, and run flows in the browser\n```\n\n---\n\n## SLANG is not a framework.\n\n\u003e SLANG is the acronymous for \u003cstrong\u003eSuper Language for Agent Negotiation \u0026 Governance\u003c/strong\u003e\n\nFrameworks like LangChain, CrewAI, and AutoGen are SDKs — Python/TypeScript libraries with classes, decorators, and configuration files. 
SLANG is none of those things.

**SLANG is a language.** Like SQL is a language for querying data, SLANG is a language for orchestrating agents.

| SQL | SLANG |
|-----|-------|
| Didn't replace C/Java for business logic | Doesn't replace TypeScript/Python for complex pipelines |
| Created a new category: declarative queries | Creates a new category: declarative agent orchestration |
| Anyone reads it, anyone understands it | Anyone reads a `.slang` file, anyone understands the workflow |
| Portable: same SQL runs on Postgres, MySQL, SQLite | Portable: same `.slang` runs on GPT, Claude, Llama, Gemini — via OpenRouter, 300+ models with one API key |
| LLMs generate it natively (text-to-SQL) | LLMs generate it natively (text-to-SLANG) |
| Not Turing-complete — and that's the point | Not general-purpose — and that's the point |

---

## Three primitives. That's it.

```
stake   →  produce content and send it to an agent
await   →  block until another agent sends you data
commit  →  accept the result and stop
```

Every multi-agent workflow — pipelines, DAGs, loops, reviews, escalations — is a combination of these three operations. Nothing else to learn. An LLM picks it up in 30 seconds. Your PM reads it without documentation.

Compare: CrewAI has 50+ classes. LangGraph needs decorators, typed state, and YAML config. SLANG has three words.

---

## Zero Setup

No install. No API key. No runtime.

1. Copy the [system prompt](ZERO_SETUP_PROMPT.md)
2. Paste it into ChatGPT, Claude, Gemini — any LLM
3. Paste a `.slang` flow
4. It runs.

The LLM **is** the runtime. No `pip install`, no `npm install`, no configuration. This is something no SDK can offer — because an SDK requires an SDK.

---

## Same flow, any model.

```
flow "hybrid-analysis" {
  agent Researcher {
    model: "gpt-4o"              -- routed to OpenAI
    tools: [web_search]
    stake gather(topic: "quantum computing") -> @Analyst
  }
  agent Analyst {
    model: "claude-sonnet"       -- routed to Anthropic
    await data <- @Researcher
    stake analyze(data) -> @out
    commit
  }
  converge when: all_committed
}
```

The same `.slang` file runs on GPT-4o, Claude, Llama via Ollama, or **300+ models via [OpenRouter](https://openrouter.ai)** with a single API key. With the router adapter, **different agents use different providers in the same execution**. No vendor lock-in. Switch models by changing one line.

---

## Human-readable by design.

Read this flow out loud:

> *"The Researcher stakes gather on the competitors and sends it to the Analyst. The Analyst awaits the data, analyzes it, and sends it to the Critic. The Critic challenges the analysis and sends feedback back. If the confidence is high enough, commit. Otherwise, escalate to a Human."*

No diagrams. No comments. No onboarding. **A `.slang` file is its own documentation.**

---

## Who is SLANG for?

### PMs, analysts, researchers — no code needed

Orchestrate AI agents by describing what you want. Paste a flow into ChatGPT and it runs. Like Zapier democratized integrations, SLANG democratizes multi-agent AI.

### Developers prototyping fast

Prototype a multi-agent workflow in 10 lines, run it in 60 seconds. Then decide if you need a full SDK. SLANG is the napkin sketch that actually executes.

### Platform teams building agent products

A portable format for agent workflows. The Dockerfile of AI orchestration.
Share `.slang` files across teams, import flows like packages, run them on any backend.

---

## Examples

### Minimal — Hello World

```
flow "hello" {
  agent Greeter {
    stake greet("world") -> @out
    commit
  }
  converge when: all_committed
}
```

### Writer/Reviewer loop with conditionals

```
flow "article" {
  agent Writer {
    role: "Technical writer specializing in clear, concise articles"
    model: "gpt-4o"
    retry: 2

    stake write(topic: "Why multi-agent systems need a standard language") -> @Reviewer
    await feedback <- @Reviewer

    when feedback.approved {
      commit feedback
    }
    when feedback.rejected {
      stake revise(feedback) -> @Reviewer
    }
  }

  agent Reviewer {
    role: "Senior editor focused on clarity, accuracy, and completeness"
    model: "claude-sonnet"

    await draft <- @Writer
    stake review(draft, criteria: ["clarity", "accuracy", "completeness"]) -> @Writer
      output: { approved: "boolean", score: "number", notes: "string" }
  }

  converge when: committed_count >= 1
  budget: rounds(3)
}
```

Features shown: `role:`, `model:`, `retry:`, `when` blocks, `output:` schema, `converge`, `budget`.

### Competitive research with escalation and tools

```
flow "competitive-research" {
  agent Researcher {
    role: "Expert web researcher focused on primary sources and data"
    model: "openai/gpt-4o"
    tools: [web_search]
    retry: 3

    stake gather(competitors: ["OpenAI", "Anthropic", "Google DeepMind"],
                 focus: "AI agent frameworks") -> @Analyst
  }

  agent Analyst {
    role: "Strategic analyst specializing in competitive positioning"
    model: "anthropic/claude-sonnet-4-20250514"
    await data <- @Researcher
    stake analyze(data, framework: "SWOT") -> @Critic
      output: { strengths: "string", weaknesses: "string", score: "number" }
    await verdict <- @Critic

    commit verdict if verdict.confidence > 0.7
    escalate @Human reason: "Analysis confidence too low, need human review" if verdict.confidence <= 0.7
  }

  agent Critic {
    role: "Adversarial reviewer who challenges assumptions"
    model: "google/gemini-2.5-pro"
    await analysis <- @Analyst
    stake challenge(analysis, mode: "steelmanning") -> @Analyst
  }

  converge when: committed_count >= 1
  budget: tokens(40000), rounds(4)
}
```

Features shown: `model:` with OpenRouter model IDs (3 different providers in the same flow), `tools:`, `retry:`, `output:`, `escalate @Human`, `if` conditions, `tokens` + `rounds` budget.
### Broadcast and multi-source aggregation

```
flow "parallel-report" {
  agent Coordinator {
    role: "Project coordinator who distributes tasks and compiles results"
    stake assign(sections: ["market", "technology", "finance"]) -> @all
    await results <- *
    stake compile(results) -> @out
    commit
  }

  agent MarketAnalyst {
    role: "Market research specialist"
    await task <- @Coordinator
    stake research(task, focus: "market trends and sizing") -> @Coordinator
  }

  agent TechAnalyst {
    role: "Technology trend analyst"
    await task <- @Coordinator
    stake research(task, focus: "technology landscape and innovation") -> @Coordinator
  }

  agent FinanceAnalyst {
    role: "Financial analyst specializing in projections"
    await task <- @Coordinator
    stake research(task, focus: "financial projections and unit economics") -> @Coordinator
  }

  converge when: all_committed
  budget: rounds(3)
}
```

Features shown: `@all` broadcast, `*` wildcard source, 4 parallel agents, coordinator pattern.

### Code review with tools and structured output

```
flow "code-review" {
  agent Developer {
    role: "Senior software engineer"
    tools: [code_exec]
    retry: 2

    stake implement(spec: "REST API endpoint for user registration",
                    language: "TypeScript") -> @Reviewer
      output: { code: "string", tests: "string", language: "string" }
    await feedback <- @Reviewer

    when feedback.approved {
      commit feedback
    }
    when feedback.rejected {
      stake revise(feedback.notes, original: feedback) -> @Reviewer
        output: { code: "string", tests: "string", language: "string" }
    }
  }

  agent Reviewer {
    role: "Staff engineer focused on security, performance, and best practices"
    tools: [code_exec]

    await code <- @Developer
    stake review(code, checks: ["security", "performance", "error handling"]) -> @Developer
      output: { approved: "boolean", score: "number", notes: "string" }
  }

  converge when: committed_count >= 1
  budget: rounds(4)
}
```

Features shown: `tools: [code_exec]`, `output:` on multiple stakes, `when` blocks, review loop pattern.

### Composition — importing flows

```
flow "full-report" {
  import "research" as research_flow
  import "article" as article_flow

  agent Orchestrator {
    stake run(research_flow, topic: "AI agents market 2026") -> @Compiler
    stake run(article_flow, topic: "Executive summary") -> @Compiler
  }

  agent Compiler {
    await results <- @Orchestrator (count: 2)
    stake compile(results, format: "executive briefing") -> @out
    commit
  }

  converge when: all_committed
  budget: rounds(5)
}
```

Features shown: `import ... as`, flow composition, `count:` on await, orchestration pattern.

---

## CLI

```bash
slang init [dir]             # Scaffold a new SLANG project
slang run <file.slang>       # Execute a flow
slang parse <file.slang>     # Dump AST (syntax validation)
slang check <file.slang>     # Dependency analysis + deadlock detection
slang prompt                 # Print the zero-setup system prompt
slang playground             # Launch the web playground
```

### Options

| Flag | Description |
|------|-------------|
| `--adapter` | `openai` \| `anthropic` \| `openrouter` \| `echo` (CLI only; MCP default is `sampling`) |
| `--api-key` | LLM API key (not required with `sampling`) |
| `--model` | Model name (e.g. `gpt-4o`, `claude-sonnet-4-20250514`, `openai/gpt-4o`) |
| `--base-url` | Custom endpoint (Ollama, local models — OpenAI adapter only) |
| `--tools` | JS/TS file exporting tool handlers (see [Functional Tools](#functional-tools)) |
| `--port` | Playground server port (default: `5174`) |

### Environment Variables

The CLI loads a `.env` file from the current directory automatically.
No extra setup — just create the file.

```env
SLANG_ADAPTER=openrouter
OPENROUTER_API_KEY=sk-or-...
SLANG_MODEL=openai/gpt-4o
```

| Variable | Description |
|----------|-------------|
| `SLANG_ADAPTER` | `sampling` (default in MCP) \| `openai` \| `anthropic` \| `openrouter` \| `echo` |
| `SLANG_API_KEY` | API key (falls back to `OPENAI_API_KEY` / `ANTHROPIC_API_KEY` / `OPENROUTER_API_KEY`). Not needed with `sampling`. |
| `SLANG_MODEL` | Default model override |
| `SLANG_BASE_URL` | Custom base URL for OpenAI-compatible endpoints |

Real environment variables take precedence over `.env` values. `slang init` generates a `.env.example` template.

---

## Playground

SLANG ships a built-in web playground for writing, parsing, and running flows interactively in the browser.

```bash
slang playground              # Start on default port 5174
slang playground --port 3000  # Custom port
```

Features:
- **Editor** — write SLANG with real-time parsing and inline error display
- **Dependency graph** — SVG visualization with color-coded nodes (green = ready, amber = blocked, red = deadlocked)
- **AST viewer** — inspect the parsed syntax tree as JSON
- **Run panel** — execute flows with the echo adapter and see streaming events live
- **Examples** — dropdown with built-in sample flows (hello, review, research, broadcast, deadlock)
- **Error recovery** — uses `parseWithRecovery()` to show all errors at once, not just the first

The playground runs entirely in the browser (no API key needed) using the echo adapter.

---

## API

SLANG is also a TypeScript/JavaScript library:

```typescript
import { parse, runFlow, createOpenAIAdapter } from '@riktar/slang'

const source = `
  flow "hello" {
    agent Greeter {
      stake greet("world") -> @out
      commit
    }
    converge when: all_committed
  }
`

// Parse to AST
const ast = parse(source)

// Execute with an LLM
const state = await runFlow(source, {
  adapter: createOpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }),
  onEvent: (event) => console.log(event),
})

console.log(state.status)   // "converged"
console.log(state.outputs)  // ["Hello, world! ..."]
```
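
The same call can be smoke-tested offline before a real provider is wired up, using the echo adapter (the same backend the playground and `--adapter echo` use). A minimal sketch, reusing `source` from the example above and assuming `createEchoAdapter()` needs no options:

```typescript
import { runFlow, createEchoAdapter } from '@riktar/slang'

// Dry-run the flow with the echo adapter: no API key, no network calls.
// Assumption: createEchoAdapter() takes no required options.
const dryRun = await runFlow(source, {
  adapter: createEchoAdapter(),
  onEvent: (event) => console.log(event),
})

console.log(dryRun.status)  // expected to reach "converged" for a well-formed flow
```
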
...\"]\n```\n\n### Adapters\n\n```typescript\nimport {\n  createOpenAIAdapter,       // OpenAI / Ollama / any OpenAI-compatible\n  createAnthropicAdapter,    // Anthropic\n  createOpenRouterAdapter,   // OpenRouter (300+ models, one API key)\n  createSamplingAdapter,     // MCP host delegation (no API key)\n  createEchoAdapter,         // Testing\n  createRouterAdapter,       // Multi-provider routing\n} from '@riktar/slang'\n\n// OpenRouter — access any model with a single key\nconst openrouter = createOpenRouterAdapter({\n  apiKey: process.env.OPENROUTER_API_KEY,\n  defaultModel: 'openai/gpt-4o',\n})\n\n// Router — different agents, different backends\nconst router = createRouterAdapter({\n  routes: [\n    { pattern: 'claude-*',  adapter: anthropicAdapter },\n    { pattern: 'gpt-*',     adapter: openaiAdapter },\n    { pattern: 'local/*',   adapter: ollamaAdapter },\n  ],\n  fallback: openrouter,  // fallback to OpenRouter\n})\n```\n\n### Functional Tools\n\nMake agent `tools:` declarations real — from the **CLI** or via **API**.\n\n#### CLI: `--tools` flag\n\nCreate a JS/TS file that default-exports an object of tool handlers:\n\n```javascript\n// tools.js\nexport default {\n  async web_search(args) {\n    const res = await fetch(`https://api.search.com?q=${encodeURIComponent(args.query)}`);\n    return await res.text();\n  },\n  async code_exec(args) {\n    // run in a sandbox...\n    return JSON.stringify({ status: \"success\", output: \"...\" });\n  },\n};\n```\n\nThen pass it to `slang run`:\n\n```bash\nslang run research.slang --adapter openrouter --tools tools.js\n```\n\nThe CLI loads the file, logs the available tools, and passes them to the runtime. A ready-to-use example is in [`examples/tools.js`](examples/tools.js).\n\n#### API: `tools` option\n\n```typescript\nconst state = await runFlow(source, {\n  adapter,\n  tools: {\n    web_search: async (args) =\u003e {\n      return await fetchSearchResults(args.query as string)\n    },\n    code_exec: async (args) =\u003e {\n      return runSandbox(args.code as string)\n    },\n  },\n})\n```\n\nOnly tools listed in the agent's `tools: [...]` declaration **and** provided in runtime options (or the `--tools` file) are available. The LLM invokes them via `TOOL_CALL: name(args)` in its response; the runtime executes the handler, feeds the result back, and the LLM continues.\n\n### Checkpoint \u0026 Resume\n\nPersist state after each round. 
Resume after crash:

```typescript
import { readFile, writeFile } from 'node:fs/promises'
import { runFlow, serializeFlowState, deserializeFlowState } from '@riktar/slang'

// Run with checkpointing
const state = await runFlow(source, {
  adapter,
  checkpoint: async (snapshot) => {
    await writeFile('checkpoint.json', serializeFlowState(snapshot))
  },
})

// Resume later
const saved = deserializeFlowState(await readFile('checkpoint.json', 'utf8'))
const resumed = await runFlow(source, { adapter, resumeFrom: saved })
```

### Static Analysis

```typescript
import { parse, resolveDeps, detectDeadlocks, analyzeFlow } from '@riktar/slang'

const program = parse(source)
const flow = program.flows[0]
const graph = resolveDeps(flow)
const deadlocks = detectDeadlocks(graph)
const diagnostics = analyzeFlow(flow)
// diagnostics: missing converge, unknown recipients, uncommitted agents, etc.
```

### Error Handling

SLANG provides structured errors with error codes, human-friendly messages, and source context:

```typescript
import { parseWithRecovery, SlangError, SlangErrorCode, formatErrorMessage } from '@riktar/slang'

// Error recovery — collect all errors instead of failing on the first
const { program, errors } = parseWithRecovery(source)

for (const err of errors) {
  console.log(err.code)     // "P201"
  console.log(err.line)     // 3
  console.log(err.column)   // 5
  console.log(err.message)  // 'P201: Expected `{` but got `agent` (at 3:5)\n   3 | agent Writer\n       ^'
  console.log(err.toJSON()) // { code, message, line, column }
}

// Error codes follow a convention:
// L1xx — Lexer errors (bad characters, unterminated strings)
// P2xx — Parser errors (unexpected tokens, missing brackets)
// R3xx — Resolver errors (unknown agents, deadlocks)
// E4xx — Runtime errors (no flow, retries exhausted, budget exceeded)

// Format a message from a code
const msg = formatErrorMessage(SlangErrorCode.E406, { max: 3, agent: 'Writer', message: 'timeout' })
```

All runtime errors (`RuntimeError`) include line/column from the AST, so stack traces point to the exact `.slang` source location.
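
Putting the pieces above together, a flow can be validated end to end before any tokens are spent. A minimal pre-flight sketch, reusing `source` and `adapter` from the earlier examples; the exact shape of the value returned by `detectDeadlocks` is an assumption:

```typescript
import { parseWithRecovery, resolveDeps, detectDeadlocks, runFlow } from '@riktar/slang'

// 1. Parse with recovery so every syntax error is reported at once.
const { program, errors } = parseWithRecovery(source)
if (errors.length > 0) {
  for (const err of errors) console.error(err.message)
  throw new Error('Flow has syntax errors; not running.')
}

// 2. Static checks: build the dependency graph and look for deadlocks.
//    Assumption: detectDeadlocks returns a list that is empty when the flow is safe.
const deadlocks = detectDeadlocks(resolveDeps(program.flows[0]))
if (deadlocks.length > 0) {
  throw new Error('Deadlock detected; aborting before any LLM call.')
}

// 3. Only now execute against a real adapter.
const state = await runFlow(source, { adapter })
```
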
---

## MCP Server

SLANG ships a built-in [Model Context Protocol](https://modelcontextprotocol.io/) server. No API key needed — it delegates LLM calls back to the host via MCP sampling.

```bash
# Add to Claude Code
claude mcp add slang -- npx --package @riktar/slang slang-mcp
```

### Available Tools

| Tool | Description |
|------|-------------|
| `run_flow` | Execute a SLANG flow and return final state |
| `parse_flow` | Parse source to AST JSON |
| `check_flow` | Dependency graph + deadlock detection + diagnostics |
| `get_zero_setup_prompt` | Get the zero-setup system prompt |

### Claude Desktop Config

```json
{
  "mcpServers": {
    "slang": {
      "command": "npx",
      "args": ["--package", "@riktar/slang", "slang-mcp"]
    }
  }
}
```

---

## Why not just use an SDK?

| | SDK (LangChain, CrewAI, etc.) | SLANG |
|---|---|---|
| Time to first workflow | Hours (install, configure, learn API) | 60 seconds (paste and run) |
| Who can read it | Developers only | Anyone — including LLMs |
| Portability | Locked to one language/provider | Same file runs anywhere |
| Composability | Import code | Import workflows (`import "research" as r`) |
| The LLM can generate it | No (framework boilerplate is opaque) | Yes (text-to-SLANG, like text-to-SQL) |
| Runtime required | Always | Optional (zero-setup mode) |
| Documentation | Separate from code | The flow **is** the documentation |

SLANG doesn't replace SDKs any more than SQL replaced Java. It creates a new category: **declarative agent orchestration**. Use SLANG to describe *what* agents should do. Use an SDK when you need fine-grained control over *how*.

---

## Architecture

```
Source (.slang) → Lexer → Parser → AST → Resolver → DepGraph → Runtime → FlowState
                                    ↓
                              Error Recovery → ParseResult { program, errors[] }
```

| Component | Description |
|-----------|-------------|
| **Lexer** | Hand-written tokenizer with line/column tracking |
| **Parser** | Recursive-descent parser producing a fully typed AST; error recovery mode via `parseWithRecovery()` |
| **Error System** | Centralized error codes (L/P/R/E), human-friendly messages, source context with caret pointer |
| **Resolver** | Dependency graphs, deadlock detection, static analysis |
| **Runtime** | Async round-based scheduler with mailbox, parallel dispatch, checkpoint, tool execution |
| **Adapters** | Pluggable LLM backends (MCP Sampling, OpenAI, Anthropic, OpenRouter, Router, Echo) |
| **Playground** | React + Vite web app with editor, dependency graph visualization, AST viewer, and echo runner |

## CLI vs Zero-Setup: feature comparison

SLANG runs in two modes. Not all features are available in both.

| Feature | Zero-Setup (paste in LLM) | CLI / API / MCP |
|---------|:---:|:---:|
| Parse & execute flows | ✅ | ✅ |
| `stake`, `await`, `commit`, `escalate` | ✅ | ✅ |
| `role:` agent metadata | ✅ | ✅ |
| `when` / `if` conditionals | ✅ | ✅ |
| `converge` / `budget` | ✅ | ✅ |
| `@out`, `@all`, `@Human` | ✅ | ✅ |
| `import` composition | ✅ simulated | ✅ |
| `model:` multi-provider routing | ❌ single LLM | ✅ |
| `tools:` functional tool execution | ❌ simulated | ✅ (API or CLI `--tools`) |
| `retry:` with exponential backoff | ❌ | ✅ |
| `output:` structured output contracts | ✅ best-effort | ✅ enforced |
| Parallel agent execution | ❌ sequential | ✅ `Promise.all` |
| Checkpoint & resume | ❌ | ✅ |
| Static analysis & deadlock detection | ❌ | ✅ |
| Error codes & recovery mode | ❌ | ✅ |
| Web playground | ❌ | ✅ (`slang playground`) |
| Project scaffolding | ❌ | ✅ (`slang init`) |
| `.env` file support | ❌ | ✅ |
| OpenRouter / multi-provider | ❌ single LLM | ✅ |

**Zero-setup** is perfect for prototyping, demos, and non-developers.
Move to the **CLI/API** when you need real tools, multi-model routing, parallel execution, or production reliability.

## Project Structure

```
src/
├── index.ts          # Public API exports
├── lexer.ts          # Tokenizer
├── parser.ts         # Recursive-descent parser (+ error recovery)
├── ast.ts            # AST type definitions
├── errors.ts         # Error codes, messages, and SlangError base class
├── resolver.ts       # Dependency graph & deadlock detection
├── runtime.ts        # Async execution engine
├── adapter.ts        # LLM adapters (MCP Sampling, OpenAI, Anthropic, OpenRouter, Echo, Router)
├── cli.ts            # CLI binary (init, run, parse, check, prompt, playground)
└── mcp.ts            # MCP server binary
playground/
├── src/              # React + Vite web playground (editor, graph, AST, runner)
├── vite.config.ts    # Vite config with @slang alias
└── package.json      # Playground dependencies
examples/
├── hello.slang       # Minimal hello world
├── article.slang     # Writer/Reviewer loop with conditionals
├── research.slang    # Competitive research with escalation
├── broadcast.slang   # Parallel broadcast and aggregation
├── code-review.slang # Code review with tools and structured output
├── composition.slang # Flow composition with import
└── tools.js          # Example tool handlers for CLI --tools flag
```

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.