{"id":47234114,"url":"https://github.com/devlikebear/tars","last_synced_at":"2026-04-26T01:02:07.214Z","repository":{"id":343956667,"uuid":"1157132142","full_name":"devlikebear/tars","owner":"devlikebear","description":"Self-hosted AI agent runtime — chat, parallel sub-agents, 3-tier model routing, background watchdog, cron, and multi-channel I/O in a single Go binary","archived":false,"fork":false,"pushed_at":"2026-04-12T09:22:26.000Z","size":21548,"stargazers_count":1,"open_issues_count":4,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-12T10:16:27.064Z","etag":null,"topics":["agent","agent-runtime","ai","cli","golang","llm","mcp","model-routing","sub-agents"],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/devlikebear.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-02-13T13:20:26.000Z","updated_at":"2026-04-12T09:22:34.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/devlikebear/tars","commit_stats":null,"previous_names":["devlikebear/tars"],"tags_count":56,"template":false,"template_full_name":null,"purl":"pkg:github/devlikebear/tars","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devlikebear%2Ftars","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devlikebear%2Ftars/tags","releases_url":"https://repos.ecos
yste.ms/api/v1/hosts/GitHub/repositories/devlikebear%2Ftars/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devlikebear%2Ftars/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/devlikebear","download_url":"https://codeload.github.com/devlikebear/tars/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/devlikebear%2Ftars/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32282187,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-25T18:29:39.964Z","status":"ssl_error","status_checked_at":"2026-04-25T18:29:32.149Z","response_time":59,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","agent-runtime","ai","cli","golang","llm","mcp","model-routing","sub-agents"],"created_at":"2026-03-13T21:37:21.752Z","updated_at":"2026-04-26T01:02:07.199Z","avatar_url":"https://github.com/devlikebear.png","language":"Go","readme":"# 
TARS\n\n[![CI](https://github.com/devlikebear/tars/actions/workflows/ci.yml/badge.svg)](https://github.com/devlikebear/tars/actions/workflows/ci.yml)\n[![codecov](https://codecov.io/gh/devlikebear/tars/graph/badge.svg)](https://codecov.io/gh/devlikebear/tars)\n[![Go](https://img.shields.io/github/go-mod/go-version/devlikebear/tars)](go.mod)\n[![Release](https://img.shields.io/github/v/release/devlikebear/tars)](https://github.com/devlikebear/tars/releases)\n\n**TARS is a self-hosted AI agent runtime.**\n\nA single Go binary that runs on your machine and gives you: an interactive chat with durable memory, parallel sub-agents with model tier routing, background watchdog and nightly maintenance, scheduled jobs, and multi-channel I/O (console, Telegram, webhooks) — all configurable via YAML and extensible via skills, plugins, and MCP servers.\n\n## Comparison\n\n| | OpenClaw | Hermes Agent | TARS |\n|---|---|---|---|\n| **Language** | TypeScript | Python | Go (single binary) |\n| **Sub-agents** | ACP + subagent runtimes, push-based completion, Docker sandbox | ThreadPoolExecutor (max 3), ephemeral prompt, credential override | Gateway executor with per-task model tier, allowlist policy, depth control |\n| **Model routing** | Per-agent model override | Per-child provider/model override, MoA (4 frontier models) | 3-tier named bundles (heavy/standard/light) with role→tier config mapping |\n| **Memory** | Session transcripts | Honcho/Holographic plugin hooks | Durable KB + semantic search + experience extraction + nightly compilation |\n| **Background** | None | None | Pulse watchdog (1-min) + Reflection nightly batch |\n| **Scheduling** | None | None | Session-bound cron jobs with audit logs |\n| **Channels** | CLI | CLI + Gateway API | Console + Telegram + webhooks |\n| **Context mgmt** | Per-session | ContextCompressor (50% threshold, protect-last-N) | Structured compaction with identifier preservation + light-tier LLM summary |\n| **Extensibility** | Built-in tools | 
Toolsets (terminal, file, web, delegation) | Skills + Plugins + MCP servers + Skill Hub registry |\n\n## Key Features\n\n### Chat + Memory\n\nThe primary interface. Browser-based console at `http://127.0.0.1:43180/console`.\n\n- Multi-session chat with full LLM tool-calling loops\n- Durable memory: `MEMORY.md`, experiences, daily logs, semantic embeddings\n- Obsidian-style knowledge base: wiki notes with graph metadata and KB CRUD tools\n- Structured transcript compaction preserving identifiers and recent context\n- System prompt customization via `USER.md`, `IDENTITY.md`, `AGENTS.md`, `TOOLS.md`\n\n### Sub-Agent Orchestration\n\nSpawn read-only agents for research, planning, and specialized tasks:\n\n```yaml\n# workspace/agents/explorer/AGENT.md\n---\nname: explorer\ntier: light\ntools_allow: [read_file, list_dir, glob, memory_search]\n---\n```\n\nUse `subagents_run` when tasks are independent and can fan out in parallel:\n\n```json\n{\"tasks\": [\n  {\"prompt\": \"find all API endpoints\", \"tier\": \"light\"},\n  {\"prompt\": \"design the migration plan\", \"tier\": \"heavy\"}\n]}\n```\n\nUse `subagents_orchestrate` when later tasks depend on earlier subagent results. It executes staged `parallel` and `sequential` steps and supports placeholders such as `{{task.backend.summary}}`.\n\nUse `subagents_plan` before `subagents_orchestrate` when the main agent needs the heavy-tier planner model to decide which tasks should run in parallel versus sequence. 
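As a rough illustration only (the `steps` / `mode` / `id` field names below are assumptions for this sketch, not taken from the tool schema), a staged flow mixing parallel and sequential work might look like:\n\n```json\n{\"steps\": [\n  {\"mode\": \"parallel\", \"tasks\": [\n    {\"id\": \"backend\", \"prompt\": \"summarize the backend API surface\", \"tier\": \"light\"}\n  ]},\n  {\"mode\": \"sequential\", \"tasks\": [\n    {\"id\": \"plan\", \"prompt\": \"draft a migration plan using {{task.backend.summary}}\", \"tier\": \"heavy\"}\n  ]}\n]}\n```\n\n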
The planner returns a validated staged flow that can be executed directly.\n\nTier resolution priority: task `tier` \u003e agent YAML `tier` \u003e config default.\n\n### 3-Tier Model Routing\n\nRoute workloads to different models for cost and quality optimization:\n\n| Tier | Purpose | Example |\n|------|---------|---------|\n| **heavy** | Planning, complex reasoning, architecture | claude-opus-4-6, gpt-5.4 |\n| **standard** | General chat, agent loops, tool calling | claude-sonnet-4-6, gpt-5.4 |\n| **light** | Summarization, classification, pulse, reflection | claude-haiku-4-5, gpt-4o-mini |\n\n```yaml\n# tars.config.yaml\nllm:\n  providers:\n    default:\n      kind: anthropic\n      auth_mode: api-key\n      api_key: ${ANTHROPIC_API_KEY}\n  tiers:\n    heavy:\n      provider: default\n      model: claude-opus-4-6\n    standard:\n      provider: default\n      model: claude-sonnet-4-6\n    light:\n      provider: default\n      model: claude-haiku-4-5\n  default_tier: standard\n  role_defaults:\n    pulse_decider: light\n    gateway_planner: heavy\n```\n\nEach system role (chat, pulse, reflection, compaction, gateway agents) maps to a tier. Background surfaces default to `light`, keeping costs low. `llm_role_gateway_planner` is now exercised by `subagents_plan`, and TARS logs the resolved `role`, `tier`, `provider`, `model`, and `source` for chat and gateway LLM calls so tier selection is traceable in runtime logs.\n\n### Background Surfaces\n\nTwo isolated surfaces run independently from user chat:\n\n- **Pulse** — 1-minute watchdog scanning cron failures, stuck runs, disk pressure, Telegram delivery health, and reflection status. LLM classifier picks `ignore` / `notify` / `autofix`. 
Autofixes are whitelisted in config.\n- **Reflection** — Nightly batch (default 02:00–05:00) running memory cleanup (experience extraction + knowledge-base compilation) and empty-session pruning.\n\nBoth use the `light` tier by default and have no access to user-facing tools (enforced at compile time via `RegistryScope`).\n\n### Scheduling\n\nNative cron with session binding:\n\n- Cron expressions and one-shot `@at` schedules\n- Session-bound jobs inherit the session's tool policy, work dirs, and prompt override\n- Audit logs: `artifacts/\u003csession_id\u003e/cronjob-log.jsonl`\n- Console Cron tab for per-session job management\n\n### Channels\n\nMulti-channel I/O beyond the web console:\n\n- **Telegram** — Bidirectional messaging with pairing-based access control\n- **Webhooks** — Inbound HTTP triggers for external integrations\n- **Local** — Direct API calls for scripts and automation\n\n### Extensibility\n\nTARS favors **on-demand extension** over always-resident tool registrations. Domain-specific capabilities are shipped as skills (plus optional companion CLIs) from the [Skill Hub](https://github.com/devlikebear/tars-skills) rather than compiled into the TARS binary — this keeps the chat system prompt small no matter how many capabilities a user installs.\n\n- **[Skill Hub](https://github.com/devlikebear/tars-skills)** — Public registry of skills, plugins, and MCP servers. Install with `tars skill install \u003cname\u003e`, `tars plugin install \u003cname\u003e`, `tars mcp install \u003cname\u003e`. The hub is the first place to look before writing a new capability, and the only place to publish one.\n- **Skills** — Markdown instruction files (YAML frontmatter + body) with optional companion scripts. A skill's frontmatter can set `recommended_tools: [bash]` and instruct the LLM to invoke a co-installed CLI (Python/TypeScript/shell); this keeps the CLI's interface out of the system prompt until the skill itself is picked. 
See `daily-briefing` in the hub for the canonical pattern.\n- **Plugins** — Bundle skills + MCP servers with manifest metadata and runtime gating.\n- **MCP** — Local stdio and remote HTTP/WebSocket servers with bearer or OAuth auth. Use for third-party integrations that cannot be expressed as a CLI the bash tool can call.\n- **Browser** — Playwright-based automation for web interaction (shipped as a hub plugin).\n\n**When to build a hub skill vs. a core feature**: if the capability is domain-specific (one site's logs, one vendor's API, one workflow), it belongs in `tars-skills` as a skill + CLI. Builtin tools inside this repo are reserved for universal surfaces (file ops, memory, gateway, channels) that every session uses.\n\n## Install\n\n**Homebrew:**\n\n```bash\nbrew tap devlikebear/tap\nbrew install devlikebear/tap/tars\n```\n\n**Curl:**\n\n```bash\ncurl -fsSL https://raw.githubusercontent.com/devlikebear/tars/main/install.sh | sh\n```\n\n## Quick Start\n\n```bash\n# Initialize workspace and config\ntars init\n\n# Set your provider credentials\nexport ANTHROPIC_API_KEY=\"your-key\"\n# Or: export OPENAI_API_KEY=\"your-key\"\n# Then edit ~/.tars/config/config.yaml under llm.providers / llm.tiers if needed\n\n# Validate setup\ntars doctor --fix\n\n# Start the server\ntars serve\n\n# Open the web console\ntars\n```\n\nOpen `http://127.0.0.1:43180/console` and start chatting.\n\n## Console Pages\n\n| Page | Path | Purpose |\n|------|------|---------|\n| Chat | `/console` | Interactive agent chat with tool calling |\n| Memory | `/console/memory` | Edit durable memory, test semantic search, browse KB |\n| System Prompt | `/console/sysprompt` | Edit USER.md, IDENTITY.md, AGENTS.md, TOOLS.md |\n| Ops | `/console/ops` | System health and cleanup operations |\n| Pulse | `/console/pulse` | Watchdog status and run-now trigger |\n| Reflection | `/console/reflection` | Nightly batch status and run-now trigger |\n| Extensions | `/console/extensions` | Skills, plugins, MCP 
servers |\n| Config | `/console/config` | Workspace configuration |\n\n## Requirements\n\n- Go 1.25.6+ (for building from source)\n- LLM provider credentials (Anthropic, OpenAI, Gemini, or Claude Code CLI)\n- Optional: Gemini API key for semantic memory embeddings\n- Optional: Node.js for Playwright browser automation\n\n## Build\n\n```bash\nmake build-bins\nbin/tars version\n```\n\nFor development with hot-reload:\n\n```bash\nmake dev-console    # Vite (5173) + Go API (43180), open http://127.0.0.1:43180/console\n```\n\n## Documentation\n\n- [Getting Started](GETTING_STARTED.md)\n- [Plugin and MCP Packaging Guide](docs/plugins.md)\n- [Contributing](CONTRIBUTING.md)\n- [Changelog](CHANGELOG.md)\n\n## Status\n\nPre-1.0.0 — Module path: `github.com/devlikebear/tars`\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevlikebear%2Ftars","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdevlikebear%2Ftars","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdevlikebear%2Ftars/lists"}