{"id":46793531,"url":"https://github.com/vaayne/anna","last_synced_at":"2026-04-29T09:09:31.799Z","repository":{"id":342325774,"uuid":"1173444191","full_name":"vaayne/anna","owner":"vaayne","description":"AI assistant that never forgets. LCM memory, multi-channel, built-in scheduler.","archived":false,"fork":false,"pushed_at":"2026-04-15T04:18:51.000Z","size":19453,"stargazers_count":10,"open_issues_count":3,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-15T05:25:36.584Z","etag":null,"topics":["ai-assistant","anthropic","cli","context-management","feishu-bot","golang","llm","openai","personal-assistant","qqbot","self-hosted","sqlite","telegram-bot"],"latest_commit_sha":null,"homepage":"https://anna.vaayne.com","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vaayne.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-03-05T11:22:39.000Z","updated_at":"2026-04-15T04:18:55.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/vaayne/anna","commit_stats":null,"previous_names":["vaayne/anna"],"tags_count":19,"template":false,"template_full_name":null,"purl":"pkg:github/vaayne/anna","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vaayne%2Fanna","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vaayne%2Fanna/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vaa
yne%2Fanna/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vaayne%2Fanna/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vaayne","download_url":"https://codeload.github.com/vaayne/anna/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vaayne%2Fanna/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32035276,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-20T00:18:06.643Z","status":"online","status_checked_at":"2026-04-20T02:00:06.527Z","response_time":94,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-assistant","anthropic","cli","context-management","feishu-bot","golang","llm","openai","personal-assistant","qqbot","self-hosted","sqlite","telegram-bot"],"created_at":"2026-03-10T03:05:08.656Z","updated_at":"2026-04-29T09:09:31.791Z","avatar_url":"https://github.com/vaayne.png","language":"Go","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"avatar.png\" width=\"200\" alt=\"anna\" /\u003e\n\u003c/p\u003e\n\n# anna\n\n**Your AI assistant that never forgets.**\n\nAnna is a self-hosted AI assistant that runs on your machine and talks to you through your terminal, Telegram, QQ, Feishu, or WeChat. 
She keeps every conversation in a local SQLite database, compresses old context automatically so the LLM never hits its limit, and can recover the original detail whenever she needs it.\n\nShe supports multiple agents running simultaneously, each with its own personality, model, and provider. Multiple users are handled automatically -- each person gets isolated per-agent memory that persists across sessions.\n\nShe also schedules tasks, monitors files, and sends you notifications across channels without waiting for you to ask.\n\n## Why anna\n\nMost AI assistants lose your context. You hit the token limit, the old messages get truncated, and the assistant forgets what you were working on. Start a new chat, re-explain everything, repeat.\n\nAnna solves this with LCM (Lossless Context Management). As conversations grow, older messages get compressed into summaries organized in a DAG. Summaries get condensed into higher-level summaries. But the originals stay in the database. The agent has tools to search its history and drill back into any summary to pull up the full text. You can talk to Anna for weeks and she'll still know what you said on day one.\n\nBeyond memory, there are a few other things worth calling out.\n\nAnna meets you where you are. Terminal, Telegram, QQ, Feishu, WeChat, all sharing the same session pool and memory. Chat from your laptop in the morning, pick it up on Telegram from your phone in the evening.\n\nShe does things on her own. Tell her \"remind me every morning at 9am to check my email\" and she will. Built-in scheduler, heartbeat file monitoring, push notifications across whatever channels you have connected.\n\nRun multiple agents at once. A coding assistant, a writing partner, a daily planner -- each with its own model, provider, system prompt, and isolated workspace. Switch between them with `/agent` in Telegram or `--agent` on the CLI.\n\nMultiple users out of the box. 
Users are auto-created from platform identity (Telegram user ID, QQ ID, etc). Each user gets per-agent memory stored in the database, so Anna remembers different things about different people.\n\nAnd the whole thing is a Go CLI with a SQLite database. Your machine, your API keys, nothing leaves your network.\n\nExtensibility uses a unified subprocess plugin model: all built-in tools and channels are plugins that can be replaced or extended without recompiling.\n\n## How it works\n\n```\nUsers (Telegram / QQ / Feishu / WeChat / Terminal)\n |\n |  /agent to switch agents\n v\nanna (single binary, your machine)\n |\n |- Agents (multiple, each with own model/provider/personality)\n |   |- Workspace (~/.anna/workspaces/{agent-id}/.agents/skills/)\n |   |- 3-layer system prompt (SYSTEM.md -\u003e SOUL.md -\u003e user memory)\n |   '- LCM Memory (DAG-based context compression)\n |\n |- Admin Panel (web UI for all configuration)\n |- Scheduler (jobs, reminders, heartbeat)\n |- Skills (extensible via skills.sh)\n '- Notifications (pushes results back to you)\n |\n v\nLLM Provider (Anthropic / OpenAI / any compatible API)\n```\n\n## Memory: how LCM works\n\nThe memory system stores every message in SQLite and organizes summaries into a directed acyclic graph. When the conversation gets long, older messages are grouped and summarized into leaf nodes. Groups of leaf nodes get condensed into higher-level nodes. This happens automatically.\n\nThe agent carries a unified `memory` tool with four actions:\n- `grep` -- search messages and summaries by keyword\n- `describe` -- inspect a summary node's metadata and lineage\n- `expand` -- drill into a summary to retrieve the source content\n- `user_memory_update` -- update persistent per-user notes across sessions (write-only, injected into system prompt automatically)\n\nWhen the context window fills up, Anna isn't working with truncated history. She's working with compressed summaries and can pull up specifics on demand. 
A conversation can be a thousand messages long and she'll still find what she needs.\n\n## Multi-agent and multi-user\n\nAnna supports running multiple agents simultaneously. Each agent has:\n\n- Its own model and provider configuration\n- An isolated workspace at `~/.anna/workspaces/{agent-id}/.agents/skills/`\n- A system prompt defined in the DB (`settings_agents.system_prompt`), overridable by placing a `SOUL.md` in the workspace\n- A 3-layer system prompt: basic system prompt (overridable by `SYSTEM.md`), then agent soul (overridable by `SOUL.md`), then per-user memory from the database\n\nUsers are auto-created from platform identity. Each user gets per-agent memory stored in the `ctx_agent_memory` table, which is injected into the system prompt and updated via the `user_memory_update` action on the `memory` tool. Anna remembers different things about different people, per agent.\n\nIn Telegram, use `/agent` to switch between agents. In DMs, your default agent is remembered. In groups, the agent is set per-group. On the CLI, use `anna --agent \u003cname\u003e`.\n\n## Channels\n\nFive channels, all sharing the same memory:\n\n| Channel | Connection | Streaming | Groups |\n|---------|-----------|-----------|--------|\n| Terminal | Local TUI (Bubble Tea) | Token-by-token | n/a |\n| Telegram | Long polling, no public IP | Draft API | Mention / always / disabled |\n| QQ | WebSocket | Native Stream API | Mention support |\n| Feishu | WebSocket, no public IP | Edit-in-place | Mention support |\n| WeChat | Long polling (iLink Bot) | Non-streaming | DM only |\n\nYou can run multiple bot instances for the same platform. Leave a channel unbound to let users switch agents with `/agent`, or bind a channel instance to a dedicated agent so that bot always routes to that agent.\n\nEvery channel supports `/new`, `/compact`, `/abort`, `/model`, `/agent`, `/whoami`, model switching, access control, and image input. 
Channel messages are processed one-at-a-time per session, so later messages wait for the current turn to finish or be aborted.\n\nLark workspace automation is no longer built in as `feishu_*` tools. Instead, anna now models `mise`, `tap-web`, `gh`, and `lark-cli` as plugin-managed CLI integrations. `tap-web` and `lark` ship as generated builtin system skills, while `gh` and `lark-cli` also own their OAuth config and injected runtime env. Their binaries resolve directly from Anna-managed `PATH` entries rooted at `$ANNA_HOME/bin`. Use the builtin `lark` skill with `lark-cli` for calendar, docs, tasks, sheets, drive, and other workspace actions.\n\n## Scheduler\n\nYou don't write cron expressions by hand. You just tell Anna what you need.\n\n\"Check the weather in Beijing every morning at 8am\" creates a recurring job. \"Remind me at 2:30 PM to call the dentist\" creates a one-shot timer that cleans up after it fires. Jobs persist across restarts.\n\nThere's also a heartbeat mode. Anna polls a markdown file on an interval, uses a cheap fast model to decide if anything needs attention, and only spins up the main model when there's real work. Results get pushed to whatever channels you have connected.\n\n## Identity\n\nAnna's identity system is DB-backed. No more markdown files to manage by hand.\n\n- **Agent soul**: stored in `settings_agents.system_prompt`, overridable by placing a `SOUL.md` in the agent's workspace (`~/.anna/workspaces/{agent-id}/`)\n- **System prompt**: base instructions overridable by `SYSTEM.md` in the workspace\n- **User memory**: per-user per-agent notes stored in the `ctx_agent_memory` table, injected into the system prompt automatically\n\nThe 3-layer system prompt builds up as: base system prompt, then agent soul, then user memory. 
Anna updates user memory over time as she learns your name, timezone, and preferences.\n\n## Providers and models\n\nWorks with Anthropic, OpenAI, and any OpenAI-compatible API (Perplexity, Together.ai, local models via Ollama, etc). Provider configuration is managed through the admin panel.\n\nEnvironment variables `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` still work as fallbacks.\n\nThree model tiers:\n\n- `model_strong` for hard problems\n- `model` for everyday use (the default)\n- `model_fast` for cheap checks and gate decisions\n\nThe heartbeat system uses the fast model to decide \"skip or run\" and only calls the default model when there's actual work. Keeps costs down without you having to think about it.\n\n## Skills\n\nAnna connects to the [skills.sh](https://skills.sh) ecosystem:\n\n```bash\nanna skills search \"web scraping\"\nanna skills install owner/repo@skill-name\nanna skills list\nanna skills remove skill-name\n```\n\nSearch, install, and manage skills from the CLI or mid-conversation. Each agent has its own skills directory at `~/.anna/workspaces/{agent-id}/.agents/skills/`.\n\n## Security and Sandboxing\n\nAnna uses Docker for local agent code execution on all platforms (Linux, macOS, Windows). Docker is required; Anna fails closed when the Docker daemon is unavailable. The `bash`, `read`, `write`, and `edit` tools run inside a Docker container that isolates each session:\n\n- Each agent session gets its own ephemeral container\n- File modifications don't affect the underlying source workspace\n- Path traversal outside the sandbox is blocked\n- Network access is disabled by default\n\nPer-agent network policy can be configured through the admin panel:\n\n| Mode | Description |\n|------|-------------|\n| `disabled` | No outbound network (default) |\n| `allow_all` | Unrestricted outbound access |\n\nRunner startup fails closed when Docker is unavailable. 
Remote MCP servers are a separate trust boundary: local MCP stdio transport is runtime-mediated via `Session.StartProcess`, while remote MCP HTTP/SSE transport is not currently covered by the local sandbox boundary.\n\nSee [Architecture](/docs/core/architecture) for the session interface, execution-time mediation details, and the explicit MCP transport exception.\n\n## Quick start\n\n### Install\n\n```bash\ngo install github.com/vaayne/anna@latest\n```\n\nOr grab a binary from [Releases](https://github.com/vaayne/anna/releases), or self-update with `anna upgrade`.\n\n### Set up\n\n```bash\nanna --open\n```\n\nThis opens a web admin panel in your browser where you can configure everything: providers, API keys, agents, channels (Telegram, QQ, Feishu, WeChat), users, scheduled jobs, and settings. All configuration is stored in `~/.anna/anna.db`. There are no YAML config files.\n\n### Use\n\n```bash\nanna                         # Start daemon (bots + scheduler)\nanna --port 8080             # Start daemon with admin panel\nanna --host 0.0.0.0 --port 8080  # Bind admin panel to all interfaces\n```\n\n`anna` (bare command) starts all your configured channels and the scheduler. Add `--port` to expose the admin panel alongside the daemon for runtime configuration. `HOST` and `PORT` environment variables are also supported.\n\n## CLI reference\n\n```bash\nanna --open                        # Open web admin panel to configure anna\nanna                               # Start daemon (bots + scheduler)\nanna --port \u003cport\u003e                 # Start daemon with admin panel\nanna --host \u003chost\u003e --port \u003cport\u003e   # Bind admin panel to a specific host/interface\nanna models list           # List available models\nanna models set \u003cp/m\u003e      # Switch model (e.g. 
openai/gpt-4o)\nanna models search \u003cq\u003e     # Search models\nanna skills search \u003cq\u003e     # Search skills.sh\nanna skills install \u003cs\u003e    # Install a skill\nanna plugin list           # List all plugins with status\nanna plugin add \u003cpath\u003e     # Install a plugin\nanna plugin remove \u003cname\u003e  # Remove an installed plugin\nanna version               # Print version\nanna upgrade               # Self-update to latest release\n```\n\n## Documentation\n\n| Document | Description |\n|----------|------------|\n| [Configuration](docs/content/docs/getting-started/configuration.md) | Full config reference, admin panel, defaults |\n| [Deployment](docs/content/docs/getting-started/deployment.md) | Binary install, Docker, systemd, compose |\n| [Architecture](docs/content/docs/core/architecture.md) | System design, packages, providers, tools |\n| [Models](docs/content/docs/core/models.md) | Tiers, CLI commands, provider setup |\n| [Memory System](docs/content/docs/core/memory-system.md) | LCM deep dive, DAG structure, retrieval tools |\n| [Session Compaction](docs/content/docs/core/session-compaction.md) | How context compression works |\n| [Telegram](docs/content/docs/channels/telegram.md) | Bot setup, streaming, groups, access control |\n| [QQ Bot](docs/content/docs/channels/qq.md) | Bot setup, webhook, streaming |\n| [Feishu Bot](docs/content/docs/channels/feishu.md) | Bot setup, WebSocket, streaming |\n| [WeChat Bot](docs/content/docs/channels/weixin.md) | iLink Bot setup, QR login, DM |\n| [Scheduler System](docs/content/docs/features/scheduler-system.md) | Scheduler system, heartbeat, persistence |\n| [Plugin System](docs/content/docs/features/plugin-system.md) | Unified subprocess plugin model for tools and channels |\n| [Notification System](docs/content/docs/features/notification-system.md) | Dispatcher, backends, routing |\n\n## Development\n\n```bash\nmise run build             # Build binary -\u003e bin/anna (runs pre-build 
deps sync)\nmise run deps:sync         # Sync embedded third-party tools + generated system skills\nmise run test              # Run tests\nmise run format            # golangci-lint run --fix\nmise run release:check     # Validate GoReleaser config\nmise run release:snapshot  # Build a host-only snapshot artifact\n```\n\nIf you bypass `mise`, run `go run ./cmd/builddeps sync --skills --tools` before `go build` so embedded binaries and generated system skills are up to date.\n\n## License\n\nMIT\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvaayne%2Fanna","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvaayne%2Fanna","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvaayne%2Fanna/lists"}