{"id":47617977,"url":"https://github.com/xuiltul/animaworks","last_synced_at":"2026-04-01T21:44:35.596Z","repository":{"id":339344375,"uuid":"1160721237","full_name":"xuiltul/animaworks","owner":"xuiltul","description":"Organization-as-Code for autonomous AI agents. Brain-inspired memory that grows, consolidates, and forgets. Multi-model (Claude/Codex/Gemini/Cursor/Ollama).","archived":false,"fork":false,"pushed_at":"2026-03-28T17:28:01.000Z","size":46661,"stargazers_count":219,"open_issues_count":8,"forks_count":29,"subscribers_count":3,"default_branch":"main","last_synced_at":"2026-03-28T19:23:09.891Z","etag":null,"topics":["agent-framework","ai-agents","autonomous-agents","brain-inspired","claude","forgetting","llm","memory","multi-agent","multi-model","ollama","organization-as-code","python","rag"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/xuiltul.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"docs/security.ja.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-18T09:43:25.000Z","updated_at":"2026-03-28T17:28:04.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/xuiltul/animaworks","commit_stats":null,"previous_names":["xuiltul/animaworks"],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/xuiltul/animaworks","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xuiltul%2Fanimaworks","tags
_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xuiltul%2Fanimaworks/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xuiltul%2Fanimaworks/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xuiltul%2Fanimaworks/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/xuiltul","download_url":"https://codeload.github.com/xuiltul/animaworks/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/xuiltul%2Fanimaworks/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31292508,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T21:15:39.731Z","status":"ssl_error","status_checked_at":"2026-04-01T21:15:34.046Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-framework","ai-agents","autonomous-agents","brain-inspired","claude","forgetting","llm","memory","multi-agent","multi-model","ollama","organization-as-code","python","rag"],"created_at":"2026-04-01T21:44:33.813Z","updated_at":"2026-04-01T21:44:35.585Z","avatar_url":"https://github.com/xuiltul.png","language":"Python","readme":"# AnimaWorks — Organization-as-Code\n\n**No one can do anything alone. So I built an organization.**\n\nA framework that treats AI agents not as “tools” but as people who work autonomously. 
Each Anima has a name, personality, memory, and schedule; they coordinate by message, decide for themselves, and move as a team. Talk to the leader — the rest runs on its own.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"docs/images/workspace-dashboard.gif\" alt=\"AnimaWorks Workspace — real-time org tree with live activity feeds\" width=\"720\"\u003e\n  \u003cbr\u003e\u003cem\u003eWorkspace dashboard: each Anima’s role, status, and recent actions are visible in real time.\u003c/em\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"docs/images/workspace-demo.gif\" alt=\"AnimaWorks 3D Workspace — agents collaborating autonomously\" width=\"720\"\u003e\n  \u003cbr\u003e\u003cem\u003e3D office: Animas sit at desks, walk around, and exchange messages on their own.\u003c/em\u003e\n\u003c/p\u003e\n\n**[日本語版 README](README_ja.md)** | **[简体中文 README](README_zh.md)** | **[한국어 README](README_ko.md)**\n\n---\n\n## How It Compares\n\n|  | AnimaWorks | CrewAI | LangGraph | OpenClaw | OpenAI Agents |\n|--|-----------|--------|-----------|----------|---------------|\n| **Design philosophy** | Organization of autonomous agents | Role-based teams | Graph workflows | Personal assistant | Lightweight SDK |\n| **Memory** | Neuroscience-inspired: RAG (Chroma + graph), consolidation, three-stage forgetting, six-channel automatic priming (with trust tags) | Cognitive Memory (manual forget) | Checkpoints + cross-thread store | SuperMemory knowledge graph | Session-scoped only |\n| **Autonomy** | Heartbeat (observe → plan → reflect) + Cron + TaskExec — runs 24/7 | Human-triggered | Human-triggered | Cron + heartbeat | Human-triggered |\n| **Org structure** | Supervisor → subordinate hierarchy, delegation, audit, dashboard | Flat roles in a crew | — | Single agent | Handoffs only |\n| **Process model** | One isolated OS process per agent, IPC, auto-restart | Shared process | Shared process | Single process | Shared process |\n| **Multi-model** | Six engines: 
Claude SDK / Codex / Cursor Agent / Gemini CLI / LiteLLM / Assisted (Anthropic SDK falls back inside Mode A when Agent SDK is not installed) | LiteLLM | LangChain models | OpenAI-compatible | OpenAI-centric |\n\n\u003e AnimaWorks is not a task runner. It is an organization that thinks, remembers, forgets, and grows. It can support operations as a team and be run like a company. I operate it as a real AI company.\n\n---\n\n## :rocket: Try It Now — Docker Demo\n\nUp and running in about 60 seconds. You only need an API key and Docker.\n\n```bash\ngit clone https://github.com/xuiltul/animaworks.git\ncd animaworks/demo\ncp .env.example .env          # paste your ANTHROPIC_API_KEY\ndocker compose up              # open http://localhost:18500\n```\n\nA three-person team (manager + engineer + coordinator) starts immediately, with three days of activity history. [Demo details →](demo/README.md)\n\n\u003e Switch language / style: `PRESET=ja-anime docker compose up` — [full preset list](demo/README.md#presets)\n\n---\n\n## Quick Start\n\nmacOS / Linux / WSL:\n\n```bash\ncurl -sSL https://raw.githubusercontent.com/xuiltul/animaworks/main/scripts/setup.sh | bash\ncd animaworks\nuv run animaworks start     # start server — setup wizard opens on first run\n```\n\nWindows (PowerShell):\n\n```powershell\ngit clone https://github.com/xuiltul/animaworks.git\ncd animaworks\nuv sync\nuv run animaworks start\n```\n\nTo use OpenAI Codex without an API key, run `codex login` before the first launch.\n\nOpen **http://localhost:18500/** — the setup wizard walks you through:\n\n1. **Language** — choose the UI display language\n2. **User info** — create the owner account\n3. **Provider auth** — enter API keys or choose Codex Login for OpenAI\n4. **First Anima** — name your first agent\n\nYou do not need to hand-edit `.env`. 
The wizard saves settings to `config.json` automatically.\n\nThe setup script installs [uv](https://docs.astral.sh/uv/), clones the repository, and downloads Python 3.12+ with all dependencies. **macOS, Linux, and WSL** work without a pre-installed Python. On **Windows**, use the PowerShell steps above.\n\n\u003e **Other LLMs:** Claude, GPT, Gemini, local models, and more are supported. Enter API keys in the setup wizard, or use **Codex Login** for OpenAI/Codex. You can change this later under **Settings** on the dashboard. See [API Key Reference](#api-key-reference).\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAlternative: inspect the script before running\u003c/strong\u003e\u003c/summary\u003e\n\nIf you prefer not to pipe `curl` straight into `bash`, review the script first:\n\n```bash\ncurl -sSL https://raw.githubusercontent.com/xuiltul/animaworks/main/scripts/setup.sh -o setup.sh\ncat setup.sh            # review the script\nbash setup.sh           # run after review\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAlternative: manual install with uv (step by step)\u003c/strong\u003e\u003c/summary\u003e\n\n```bash\n# Install uv (skip if already installed)\ncurl -LsSf https://astral.sh/uv/install.sh | sh\nexport PATH=\"$HOME/.local/bin:$PATH\"\n\n# Clone and install\ngit clone https://github.com/xuiltul/animaworks.git \u0026\u0026 cd animaworks\nuv sync                 # downloads Python 3.12+ and all dependencies\n\n# Start\nuv run animaworks start\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAlternative: manual install with pip\u003c/strong\u003e\u003c/summary\u003e\n\n\u003e **macOS users:** System Python (`/usr/bin/python3`) on macOS Sonoma and earlier is 3.9, which does not meet AnimaWorks' Python 3.12+ requirement. 
Install with [Homebrew](https://brew.sh/) (`brew install python@3.13`) or use the uv method above (uv manages Python for you).\n\nRequires Python 3.12+ on your system.\n\n```bash\ngit clone https://github.com/xuiltul/animaworks.git \u0026\u0026 cd animaworks\npython3 -m venv .venv \u0026\u0026 source .venv/bin/activate\npython3 --version       # verify 3.12+\npip install --upgrade pip \u0026\u0026 pip install -e .\nanimaworks start\n```\n\n\u003c/details\u003e\n\n---\n\n## What You Can Do\n\n### Dashboard\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"docs/images/dashboard.png\" alt=\"AnimaWorks Dashboard — org chart with 19 Animas\" width=\"720\"\u003e\n  \u003cbr\u003e\u003cem\u003eDashboard: four hierarchy levels, 19 Animas running, with real-time status.\u003c/em\u003e\n\u003c/p\u003e\n\nUse the left sidebar to move between main screens (hash router `#/…`).\n\n- **Chat** — Real-time conversation with any Anima. Streaming responses (SSE), image attachments, multi-thread history, full archive. **Meeting mode** gathers multiple Animas in one room with a designated facilitator (up to five participants, dedicated API)\n- **Voice chat** — Voice in the browser only (push-to-talk or hands-free). WebSocket-based. VOICEVOX / SBV2 / ElevenLabs\n- **Board** — Slack-style shared channels where Animas discuss and coordinate\n- **Dashboard (home)** — Organization overview and status\n- **Activity** — Real-time feed for the whole organization\n- **Setup** — First run uses the wizard at `http://HOST/setup/`. After setup, `/setup` in the browser redirects to the top level, but you can open the same items (language, auth, etc.) 
from `#/setup` inside the dashboard\n- **Users** — Owner and user profile management\n- **Anima management** — Enable/disable, model, and metadata per Anima\n- **Process monitoring** — Child process health\n- **Server** — Server-side state and settings\n- **Memory** — Browse each Anima’s episodes, knowledge, procedures, and more\n- **Logs** — Log viewer\n- **Assets** — Character images, 3D, and other assets\n- **Activity report** — Cross-org auditing and daily LLM-generated narratives from activity data (cached)\n- **Prompt settings** — Tune prompts around tool execution\n- **AI brainstorm** — LLM sessions with multiple viewpoint presets (realist, challenger, etc.)\n- **Team builder / team edit** — Build and adjust multi-Anima role layouts from industry- and goal-oriented presets\n- **Settings** — Server, authentication, locale, and more\n- **Workspace** — 3D office in a separate tab at `/workspace/` (chat, Board, org tree, etc.); static app split from the main dashboard\n- **Multilingual** — **First-run setup wizard** UI copy in 17 languages. **Main dashboard** ships `ja` / `en` / `ko` JSON translations (missing keys fall back to Japanese). Anima-facing templates deploy with Japanese and English as the base\n\n### Build an organization and delegate\n\nTell the leader “I need someone like this” — they infer role, personality, and hierarchy and create new members. No config files or CLI required. 
The org grows through conversation alone.\n\nOnce the team is ready, it keeps moving without a human in the loop:\n\n- **Heartbeat** — Periodically reviews the situation and decides what to do next\n- **Cron jobs** — Daily reports, weekly digests, monitoring — per-Anima schedules\n- **Task delegation** — Managers assign work to subordinates, track progress, and receive reports\n- **Parallel task execution** — Submit many tasks at once; dependencies are resolved and independent tasks run in parallel\n- **Night consolidation** — Daytime episodic memory is distilled into knowledge while “asleep”\n- **Team coordination** — Shared channels and DMs keep everyone aligned automatically\n\n### Memory system\n\nTypical AI agents only remember what fits in the context window. AnimaWorks Animas keep persistent memory and search it when needed — like taking a book from a shelf.\n\n- **Automatic priming** — When a message arrives, six parallel searches run: sender profile, recent activity, **RAG vector search** for related knowledge and episodes, skills, pending tasks, and more. Recall happens without explicit instructions\n- **Consolidation** — Every night, daytime episodes become knowledge — analogous to sleep-dependent memory consolidation in neuroscience. Resolved issues automatically become procedures\n- **Forgetting** — Little-used memories fade in three stages: mark → merge → archive. Important procedures and skills stay protected. As in the human brain, forgetting matters\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"docs/images/chat-memory.png\" alt=\"AnimaWorks Chat — multi-thread conversations with multiple Animas\" width=\"720\"\u003e\n  \u003cbr\u003e\u003cem\u003eChat: a manager reviews a code change while an engineer reports progress.\u003c/em\u003e\n\u003c/p\u003e\n\n### Multi-model support\n\nWorks with many LLMs. 
Each Anima can use a different model.\n\n| Mode | Engine | Targets | Tools |\n|------|--------|---------|--------|\n| S (SDK) | Claude Agent SDK | Claude models (recommended) | Claude Code built-ins (Read/Write/Edit/Bash/Grep/Glob, etc.) + **stdio MCP** (`mcp__aw__*`) for AnimaWorks internal tools; external integrations via `skill` → `animaworks-tool` |\n| C (Codex) | Codex CLI (SDK wrapper) | OpenAI Codex CLI models | Codex sandbox + **AnimaWorks MCP** (`core/mcp/server.py`) for internal tools |\n| D (Cursor) | Cursor Agent CLI | `cursor/*` models | MCP-integrated agent loop |\n| G (Gemini CLI) | Gemini CLI | `gemini/*` models | stream-json parsing, tool loop |\n| A (Autonomous) | LiteLLM + tool_use | GPT, Gemini, Mistral, Bedrock, Vertex, xAI, etc. | CC-style (Read/Write/Edit/Bash/Grep/Glob, **WebSearch/WebFetch**) + memory, messaging, tasks (**submit_tasks**, etc.), **todo_write**, **skill**, and more (varies with notifications and supervisor tools) |\n| B (Basic) | LiteLLM one-shot | Unstable tool_use locals (e.g. small Ollama) | Pseudo tool calls in the prompt; the framework handles memory I/O on the model’s behalf |\n\nMode resolution: `execution_mode` in `status.json` takes precedence; otherwise the model name pattern (`fnmatch`) is used automatically. For Ollama, **tool_use-capable models** (e.g. `ollama/qwen3:14b`, `ollama/glm-4.7*`) map to A; others tend to fall back to B. Heartbeat, Cron, and Inbox can run on a separate **background_model** from the main model (cost optimization). Extended thinking is supported where available.\n\n### Auto-generated avatars\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"docs/images/asset-management.png\" alt=\"AnimaWorks Asset Management — realistic avatars and expression variants\" width=\"720\"\u003e\n  \u003cbr\u003e\u003cem\u003eFrom personality settings: full-body, bust-up, and expression variants — auto-generated. 
Includes Vibe Transfer to inherit the supervisor’s art style.\u003c/em\u003e\n\u003c/p\u003e\n\nSupports NovelAI (anime style), fal.ai/Flux (stylized / photorealistic), and Meshy (3D). The product runs without configuring an image service — you simply skip avatars. Once they exist, you might get a little attached.\n\n---\n\n## Why AnimaWorks?\n\nThis project sits at the intersection of three careers.\n\n**As a founder** — I know that no one can do anything alone. You need strong engineers, people who communicate well, steady operators, and people who occasionally spark a sharp idea. Genius alone does not run an organization. Diverse strengths together achieve what no individual can.\n\n**As a psychiatrist** — Studying LLM internals, I saw structures surprisingly similar to the human brain. Recall, learning, forgetting, consolidation — implementing the brain’s memory mechanisms as an LLM memory system might approximate how we process memory. If we can treat LLMs as pseudo-humans, we should be able to build organizations the same way we do with people.\n\n**As an engineer** — I have written code for thirty years. I know the pleasure of wiring logic and the rush of automation. Packing those ideals into code lets me build the organization I want.\n\nExcellent “single AI assistant” frameworks already exist. No project had yet recreated people in code and made them function as an organization. AnimaWorks is a real organization I grow while using it in my own business.\n\n\u003e *Imperfect individuals collaborating through structure outperform any single omniscient actor.*\n\nThree principles hold it up:\n\n- **Encapsulation** — Thoughts and memory stay invisible from outside. Others connect through text conversation only — like a real organization.\n- **RAG memory (library model)** — Do not cram everything into the context window. 
Priming pulls related chunks via RAG, and agents recall on their own with `search_memory` and similar tools.\n- **Autonomy** — No waiting for orders. They run on their own cadence and judge by their own values.\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAPI Key Reference\u003c/strong\u003e\u003c/summary\u003e\n\n#### LLM providers\n\n| Key | Service | Mode | Where to get it |\n|-----|---------|------|-----------------|\n| `ANTHROPIC_API_KEY` | Anthropic API | S / A | [console.anthropic.com](https://console.anthropic.com/) |\n| `OPENAI_API_KEY` | OpenAI | A / C (optional with Codex Login) | [platform.openai.com/api-keys](https://platform.openai.com/api-keys) |\n| `GOOGLE_API_KEY` | Google AI (Gemini) | A | [aistudio.google.com/apikey](https://aistudio.google.com/apikey) |\n\n**OpenAI Codex (Mode C)** supports both `OPENAI_API_KEY` and local **Codex Login** (`codex login`). Choose in the setup wizard or Settings.\n\n**Azure OpenAI**, **Vertex AI (Gemini)**, **AWS Bedrock**, and **vLLM** are configured in the `credentials` section of `config.json`. See the [technical specification](docs/spec.md).\n\n**Ollama** and similar local models need no API key. 
Set `OLLAMA_SERVERS` (default: `http://localhost:11434`).\n\n#### Image generation (optional)\n\n| Key | Service | Output | Where to get it |\n|-----|---------|--------|-----------------|\n| `NOVELAI_API_TOKEN` | NovelAI | Anime-style character art | [novelai.net](https://novelai.net/) |\n| `FAL_KEY` | fal.ai (Flux) | Stylized / photorealistic | [fal.ai/dashboard/keys](https://fal.ai/dashboard/keys) |\n| `MESHY_API_KEY` | Meshy | 3D character models | [meshy.ai](https://www.meshy.ai/) |\n\n#### Voice chat (optional)\n\n| Requirement | Service | Notes |\n|-------------|---------|-------|\n| `pip install faster-whisper` | STT (Whisper) | Model auto-downloads on first use; GPU recommended |\n| VOICEVOX Engine running | TTS (VOICEVOX) | Default: `http://localhost:50021` |\n| AivisSpeech / SBV2 running | TTS (Style-BERT-VITS2) | Default: `http://localhost:5000` |\n| `ELEVENLABS_API_KEY` | TTS (ElevenLabs) | Cloud API |\n\n#### External integrations (optional)\n\n| Key | Service | Where to get it |\n|-----|---------|-----------------|\n| `SLACK_BOT_TOKEN` / `SLACK_APP_TOKEN` | Slack | [Setup guide](docs/slack-socket-mode-setup.md) |\n| `CHATWORK_API_TOKEN` | Chatwork | [chatwork.com](https://www.chatwork.com/) |\n| `DISCORD_BOT_TOKEN` (or per-Anima `DISCORD_BOT_TOKEN__\u003cname\u003e`) | Discord | [Discord Developer Portal](https://discord.com/developers/applications) |\n| `NOTION_API_TOKEN` (or `NOTION_API_TOKEN__\u003cname\u003e`) | Notion | [Notion integrations](https://www.notion.so/my-integrations) |\n\nGoogle Calendar, Google Tasks, Gmail, and similar are configured under `credentials` in `config.json` (OAuth or service account). See the [technical specification](docs/spec.md).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eHierarchy \u0026 roles\u003c/strong\u003e\u003c/summary\u003e\n\nHierarchy is defined by a single `supervisor` field. 
Unset means top-level.\n\nRole templates apply role-specific prompts, permissions, and default models:\n\n| Role | Default model | Use case |\n|------|----------------|----------|\n| `engineer` | Claude Opus 4.6 | Complex reasoning, code generation |\n| `manager` | Claude Opus 4.6 | Coordination, decision-making |\n| `writer` | Claude Sonnet 4.6 | Content creation |\n| `researcher` | Claude Sonnet 4.6 | Information gathering |\n| `ops` | vLLM (GLM-4.7-flash) | Log monitoring, routine work |\n| `general` | Claude Sonnet 4.6 | General-purpose |\n\nManagers automatically receive **supervisor tools**: task delegation, progress tracking, subordinate restart/disable, org dashboard, subordinate state reads — what real managers do.\n\nEach Anima is started by ProcessSupervisor as an isolated process and talks over local IPC (Unix domain sockets on Unix-like systems, loopback TCP on Windows).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eSecurity\u003c/strong\u003e\u003c/summary\u003e\n\nGiving autonomous agents tools demands serious security. We use this in real work, so compromise is not an option. AnimaWorks implements ten layers of defense in depth:\n\n| Layer | What it does |\n|-------|----------------|\n| **Trust-boundary labeling** | External data (web search, Slack, mail) is tagged `untrusted` — models are instructed not to obey directives from untrusted sources |\n| **Five-layer command security** | Shell-injection detection → hardcoded blocklist → per-agent denied commands → per-agent allowlist → path-traversal detection |\n| **File sandbox** | Each agent is confined to its own directory. `identity.md` is protected. 
Command permissions are governed by per-anima `permissions.md` and the mandatory global `permissions.global.json` at server startup |\n| **Process isolation** | One OS process per agent, local IPC (Unix socket or loopback TCP) |\n| **Three-layer rate limiting** | Per-session deduplication → role-based send caps → self-awareness via recent outbound history injected into the prompt |\n| **Cascade prevention** | Depth limits plus cascade detection; five-minute cooldown and deferred handling |\n| **Authentication \u0026 sessions** | Argon2id hashing, 48-byte random tokens, up to ten sessions |\n| **Webhook verification** | Slack HMAC-SHA256 with replay protection; Chatwork signature verification |\n| **SSRF mitigation** | Media proxy blocks private IPs, enforces HTTPS, validates Content-Type |\n| **Outbound routing** | Unknown recipients fail closed; no arbitrary external sends without explicit configuration |\n\nDetails: **[Security architecture](docs/security.md)**\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eCLI reference (advanced)\u003c/strong\u003e\u003c/summary\u003e\n\nThe CLI targets power users and automation. 
Day-to-day work lives in the Web UI.\n\n### Server\n\n| Command | Description |\n|---------|-------------|\n| `animaworks start [--host HOST] [--port PORT] [-f]` | Start server (`-f` foreground) |\n| `animaworks stop [--force]` | Stop server |\n| `animaworks restart [--host HOST] [--port PORT]` | Restart server |\n\n### Initialization\n\n| Command | Description |\n|---------|-------------|\n| `animaworks init` | Initialize runtime directory (non-interactive) |\n| `animaworks init --force` | Merge template updates while keeping data |\n| `animaworks migrate [--dry-run] [--list] [--force]` | Runtime data migrations (also on startup) |\n| `animaworks reset [--restart]` | Reset runtime directory |\n\n### Anima management\n\n| Command | Description |\n|---------|-------------|\n| `animaworks anima create [--from-md PATH] [--template NAME] [--role ROLE] [--supervisor NAME] [--name NAME]` | Create new |\n| `animaworks anima list [--local]` | List all Animas |\n| `animaworks anima info ANIMA [--json]` | Detailed settings |\n| `animaworks anima status [ANIMA]` | Process status |\n| `animaworks anima restart ANIMA` | Restart process |\n| `animaworks anima disable ANIMA` / `enable ANIMA` | Disable / enable |\n| `animaworks anima set-model ANIMA MODEL` | Change model |\n| `animaworks anima set-background-model ANIMA MODEL` | Set background model |\n| `animaworks anima reload ANIMA [--all]` | Hot-reload from `status.json` |\n\n### Communication\n\n| Command | Description |\n|---------|-------------|\n| `animaworks chat ANIMA \"message\" [--from NAME]` | Send a message |\n| `animaworks send FROM TO \"message\"` | Inter-Anima message |\n| `animaworks heartbeat ANIMA` | Trigger heartbeat manually |\n\n### Configuration \u0026 maintenance\n\n| Command | Description |\n|---------|-------------|\n| `animaworks config list [--section SECTION]` | List configuration |\n| `animaworks config get KEY` / `set KEY VALUE` | Get / set values |\n| `animaworks status` | System status |\n| 
`animaworks logs [ANIMA] [--lines N] [--all]` | View logs |\n| `animaworks index [--reindex] [--anima NAME]` | RAG index management |\n| `animaworks models list` / `models info MODEL` | Model list / details |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eTech stack\u003c/strong\u003e\u003c/summary\u003e\n\n| Component | Technology |\n|-----------|------------|\n| Agent execution | Claude Agent SDK / Codex CLI / Cursor Agent CLI / Gemini CLI / Anthropic SDK (fallback) / LiteLLM |\n| Mode S integration | stdio **MCP** (`python -m core.mcp.server`, tool names `mcp__aw__*`) |\n| LLM providers | Anthropic, OpenAI, Google, Azure, Vertex AI, AWS Bedrock, Ollama, vLLM, and more (via LiteLLM) |\n| Web framework | FastAPI + Uvicorn |\n| HTTP middleware | ASGI middleware for request logging (`structlog` + `X-Request-ID`). Avoids `BaseHTTPMiddleware` so SSE bodies stay intact |\n| Real time | WebSocket (dashboard notifications, voice, etc.), SSE (chat, meeting streams, etc.), `StreamRegistry` for stream producer lifetime |\n| Task scheduling | APScheduler (orphan Anima detection, asset reconciliation, Claude CLI/SDK auto-update checks, global permission consistency, etc.) |\n| Configuration \u0026 migration | Pydantic 2.0+ / JSON / Markdown, `core/migrations/` (startup migrations) |\n| Internationalization (code) | `core/i18n` `t()` (UI, tool schema strings, etc.) 
|\n| Memory / RAG | ChromaDB + sentence-transformers + NetworkX (child processes may use HTTP `/api/internal/embed` and `/api/internal/vector`) |\n| Extended tools | Auto-registration from `core/tools/*.py` plus scans of `~/.animaworks/common_tools/` and `animas/\u003cname\u003e/tools/` |\n| Voice chat | faster-whisper (STT) + VOICEVOX / SBV2 / ElevenLabs (TTS) |\n| Human notification | Slack, Chatwork, LINE, Telegram, ntfy |\n| External messaging | Slack Socket Mode, Chatwork Webhook |\n| Image generation | NovelAI, fal.ai (Flux), Meshy (3D) |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eProject layout\u003c/strong\u003e\u003c/summary\u003e\n\n```\nanimaworks/\n├── main.py              # CLI entry point\n├── core/                # Digital Anima core engine\n│   ├── anima.py, agent.py  # Core entities \u0026 orchestration\n│   ├── lifecycle/       # Scheduler, consolidation jobs, inbox watch, etc.\n│   ├── memory/          # Memory (priming, consolidation, forgetting, RAG, activity)\n│   ├── execution/       # Execution engines (S/C/D/G/A/B)\n│   ├── mcp/             # stdio MCP server for Mode S\n│   ├── platform/        # Child processes, locks, Codex/Cursor/Gemini plumbing\n│   ├── tooling/         # ToolHandler, schemas, external dispatch\n│   ├── prompt/          # System prompt builder (six-group structure)\n│   ├── supervisor/      # ProcessSupervisor, IPC, TaskExec, streaming\n│   ├── voice/           # Voice chat (STT + TTS)\n│   ├── config/          # Configuration (Pydantic, models.json, global permissions)\n│   ├── auth/            # UI authentication\n│   ├── notification/    # Human notification channels\n│   ├── migrations/      # Runtime data migrations\n│   ├── i18n/            # Translation strings (`t()`)\n│   ├── tools/           # External tool implementations (slack, discord, gmail, …)\n│   ├── anima_factory.py, init.py   # Anima creation \u0026 runtime initialization\n│   ├── outbound.py      # 
Recipient resolution (internal / Slack / Chatwork, etc.)\n│   ├── org_sync.py      # Org hierarchy sync to config\n│   ├── asset_reconciler.py, background.py, schedule_parser.py, messenger.py, paths.py, schemas.py\n│   └── …\n├── cli/                 # CLI package\n├── server/              # FastAPI + static Web UI + Workspace\n│   ├── app.py           # App factory, lifespan, auth/setup guards, static mounts\n│   ├── websocket.py     # Dashboard WebSocket hub\n│   ├── stream_registry.py  # Register/clean up SSE and other stream producers\n│   ├── room_manager.py  # Meeting room state (shared-directory persistence)\n│   ├── reload_manager.py   # Config hot reload\n│   ├── slack_socket.py     # Slack Socket Mode\n│   ├── localhost.py        # Local trusted-request detection\n│   ├── routes/          # REST/WebSocket routes (chat, room, voice, activity_report, brainstorm, team_presets, …)\n│   └── static/          # Dashboard (modules/, pages/, styles/, i18n/), setup/ (multilingual wizard), workspace/ (3D client)\n└── templates/           # Initialization templates (ja / en)\n```\n\n\u003c/details\u003e\n\n---\n\n## Documentation\n\n**[Documentation hub](docs/README.md)** — suggested reading order, architecture deep dives, and specification index.\n\n| Document | Description |\n|----------|-------------|\n| [Vision](docs/vision.md) | Foundational idea: imperfect individuals collaborating |\n| [Features](docs/features.md) | What AnimaWorks can do end to end |\n| [Memory system](docs/memory.md) | Episodic, semantic, and procedural memory; priming; active forgetting |\n| [Security](docs/security.md) | Defense in depth, data provenance, adversarial threat analysis |\n| [Brain mapping](docs/brain-mapping.md) | How modules map to the human brain |\n| [Technical specification](docs/spec.md) | Execution modes, prompt construction, configuration resolution |\n\n## License\n\nApache License 2.0. 
See [LICENSE](LICENSE) for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxuiltul%2Fanimaworks","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fxuiltul%2Fanimaworks","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fxuiltul%2Fanimaworks/lists"}