{"id":47633488,"url":"https://github.com/martian-engineering/lossless-claw","last_synced_at":"2026-04-05T17:01:20.632Z","repository":{"id":339654875,"uuid":"1161277811","full_name":"Martian-Engineering/lossless-claw","owner":"Martian-Engineering","description":"Lossless Claw — LCM (Lossless Context Management) plugin for OpenClaw","archived":false,"fork":false,"pushed_at":"2026-04-02T00:28:28.000Z","size":10647,"stargazers_count":3896,"open_issues_count":77,"forks_count":305,"subscribers_count":22,"default_branch":"main","last_synced_at":"2026-04-02T07:58:13.959Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Martian-Engineering.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-02-18T23:36:36.000Z","updated_at":"2026-04-02T07:52:30.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/Martian-Engineering/lossless-claw","commit_stats":null,"previous_names":["martian-engineering/lossless-claw"],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/Martian-Engineering/lossless-claw","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Martian-Engineering%2Flossless-claw","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Martian-Engineering%2Flossless-claw/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/rep
ositories/Martian-Engineering%2Flossless-claw/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Martian-Engineering%2Flossless-claw/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Martian-Engineering","download_url":"https://codeload.github.com/Martian-Engineering/lossless-claw/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Martian-Engineering%2Flossless-claw/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31442924,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-05T15:22:31.103Z","status":"ssl_error","status_checked_at":"2026-04-05T15:22:00.205Z","response_time":75,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-01T23:55:19.084Z","updated_at":"2026-04-05T17:01:20.626Z","avatar_url":"https://github.com/Martian-Engineering.png","language":"TypeScript","readme":"# lossless-claw\n\nLossless Context Management plugin for [OpenClaw](https://github.com/openclaw/openclaw), based on the [LCM paper](https://papers.voltropy.com/LCM) from [Voltropy](https://x.com/Voltropy). 
Replaces OpenClaw's built-in sliding-window compaction with a DAG-based summarization system that preserves every message while keeping active context within model token limits.\n\n## Table of contents\n\n- [What it does](#what-it-does)\n- [Quick start](#quick-start)\n- [Configuration](#configuration)\n- [Documentation](#documentation)\n- [Development](#development)\n- [License](#license)\n\n## What it does\n\nTwo ways to learn: read the overview below, or [check out this super cool animated visualization](https://losslesscontext.ai).\n\nWhen a conversation grows beyond the model's context window, OpenClaw (like all other agents) normally truncates older messages. LCM instead:\n\n1. **Persists every message** in a SQLite database, organized by conversation\n2. **Summarizes chunks** of older messages using your configured LLM\n3. **Condenses summaries** into higher-level nodes as they accumulate, forming a DAG (directed acyclic graph)\n4. **Assembles context** each turn by combining summaries + recent raw messages\n5. **Provides tools** (`lcm_grep`, `lcm_describe`, `lcm_expand`) so agents can search and recall details from compacted history\n\nNothing is lost. Raw messages stay in the database. Summaries link back to their source messages. Agents can drill into any summary to recover the original detail.\n\n**It feels like talking to an agent that never forgets. Because it doesn't. 
In normal operation, you'll never need to think about compaction again.**\n\n## Quick start\n\n### Prerequisites\n\n- OpenClaw with plugin context engine support\n- Node.js 22+\n- An LLM provider configured in OpenClaw (used for summarization)\n\n### Install the plugin\n\nUse OpenClaw's plugin installer (recommended):\n\n```bash\nopenclaw plugins install @martian-engineering/lossless-claw\n```\n\nIf you're running from a local OpenClaw checkout, use:\n\n```bash\npnpm openclaw plugins install @martian-engineering/lossless-claw\n```\n\nFor local plugin development, link your working copy instead of copying files:\n\n```bash\nopenclaw plugins install --link /path/to/lossless-claw\n# or from a local OpenClaw checkout:\n# pnpm openclaw plugins install --link /path/to/lossless-claw\n```\n\nThe install command records the plugin, enables it, and applies compatible slot selection (including `contextEngine` when applicable).\n\n\u003e **Note:** If your OpenClaw config uses `plugins.allow`, make sure both `lossless-claw` and any active plugins you rely on remain allowlisted. In some setups, narrowing the allowlist can prevent plugin-backed integrations from loading, even if `lossless-claw` itself is installed correctly. Restart the gateway after plugin config changes.\n\n### Configure OpenClaw\n\nIn most cases, no manual JSON edits are needed after `openclaw plugins install`.\n\nIf you need to set it manually, ensure the context engine slot points at lossless-claw:\n\n```json\n{\n  \"plugins\": {\n    \"slots\": {\n      \"contextEngine\": \"lossless-claw\"\n    }\n  }\n}\n```\n\nRestart OpenClaw after configuration changes.\n\n## Configuration\n\nLCM is configured through a combination of plugin config and environment variables. 
Environment variables take precedence for backward compatibility.\n\n### Plugin config\n\nAdd a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:\n\n```json\n{\n  \"plugins\": {\n    \"entries\": {\n      \"lossless-claw\": {\n        \"enabled\": true,\n        \"config\": {\n          \"freshTailCount\": 64,\n          \"leafChunkTokens\": 80000,\n          \"newSessionRetainDepth\": 2,\n          \"contextThreshold\": 0.75,\n          \"incrementalMaxDepth\": 1,\n          \"ignoreSessionPatterns\": [\n            \"agent:*:cron:**\"\n          ],\n          \"summaryModel\": \"anthropic/claude-haiku-4-5\",\n          \"expansionModel\": \"anthropic/claude-haiku-4-5\",\n          \"delegationTimeoutMs\": 300000\n        }\n      }\n    }\n  }\n}\n```\n\n`leafChunkTokens` controls how many source tokens can accumulate in a leaf compaction chunk before summarization is triggered. The default is `20000`, but quota-limited summary providers may benefit from a larger value to reduce compaction frequency. `summaryModel` and `summaryProvider` let you pin compaction summarization to a cheaper or faster model than your main OpenClaw session model. `expansionModel` does the same for `lcm_expand_query` sub-agent calls (drilling into summaries to recover detail). `delegationTimeoutMs` controls how long `lcm_expand_query` waits for that delegated sub-agent to finish before returning a timeout error; it defaults to `120000` (120s). When unset, the model settings still fall back to OpenClaw's configured default model/provider. 
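As a quick illustration, that fallback chain can be sketched like this (a hypothetical sketch with illustrative names, not the plugin's actual internals):

```typescript
// Hypothetical sketch of the documented fallback chain for the
// summarization model. Field names are illustrative, not the
// plugin's real types.
interface SummaryModelSources {
  envModel?: string;     // LCM_SUMMARY_MODEL, if set
  configModel?: string;  // plugin config `summaryModel`, if set
  defaultModel?: string; // OpenClaw's configured default model
  callerHint?: string;   // legacy per-call model hint
}

// The first defined source wins: env var, then plugin config,
// then OpenClaw's default, then the legacy caller hint.
function resolveSummaryModel(s: SummaryModelSources): string | undefined {
  return s.envModel ?? s.configModel ?? s.defaultModel ?? s.callerHint;
}
```

The same ordering is spelled out under [Summary model priority](#summary-model-priority).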
See [Expansion model override requirements](#expansion-model-override-requirements) for the required `subagent` trust policy when using `expansionModel`.\n\n### Environment variables\n\n| Variable | Default | Description |\n|----------|---------|-------------|\n| `LCM_ENABLED` | `true` | Enable/disable the plugin |\n| `LCM_DATABASE_PATH` | `~/.openclaw/lcm.db` | Path to the SQLite database |\n| `LCM_IGNORE_SESSION_PATTERNS` | `\"\"` | Comma-separated glob patterns for session keys to exclude from LCM storage |\n| `LCM_STATELESS_SESSION_PATTERNS` | `\"\"` | Comma-separated glob patterns for session keys that may read from LCM but never write to it |\n| `LCM_SKIP_STATELESS_SESSIONS` | `true` | Enable stateless-session write skipping for matching session keys |\n| `LCM_CONTEXT_THRESHOLD` | `0.75` | Fraction of context window that triggers compaction (0.0–1.0) |\n| `LCM_FRESH_TAIL_COUNT` | `64` | Number of recent messages protected from compaction |\n| `LCM_NEW_SESSION_RETAIN_DEPTH` | `2` | Context retained after `/new` (`-1` keeps all context, `2` keeps d2+) |\n| `LCM_LEAF_MIN_FANOUT` | `8` | Minimum raw messages per leaf summary |\n| `LCM_CONDENSED_MIN_FANOUT` | `4` | Minimum summaries per condensed node |\n| `LCM_CONDENSED_MIN_FANOUT_HARD` | `2` | Relaxed fanout for forced compaction sweeps |\n| `LCM_INCREMENTAL_MAX_DEPTH` | `1` | How deep incremental compaction goes (0 = leaf only, 1 = one condensed pass, -1 = unlimited) |\n| `LCM_LEAF_CHUNK_TOKENS` | `20000` | Max source tokens per leaf compaction chunk |\n| `LCM_LEAF_TARGET_TOKENS` | `1200` | Target token count for leaf summaries |\n| `LCM_CONDENSED_TARGET_TOKENS` | `2000` | Target token count for condensed summaries |\n| `LCM_MAX_EXPAND_TOKENS` | `4000` | Token cap for sub-agent expansion queries |\n| `LCM_LARGE_FILE_TOKEN_THRESHOLD` | `25000` | File blocks above this size are intercepted and stored separately |\n| `LCM_LARGE_FILE_SUMMARY_PROVIDER` | `\"\"` | Provider override for large-file summarization |\n| 
`LCM_LARGE_FILE_SUMMARY_MODEL` | `\"\"` | Model override for large-file summarization |\n| `LCM_SUMMARY_MODEL` | `\"\"` | Model override for compaction summarization; falls back to OpenClaw's default model when unset |\n| `LCM_SUMMARY_PROVIDER` | `\"\"` | Provider override for compaction summarization; falls back to `OPENCLAW_PROVIDER` or the provider embedded in the model ref |\n| `LCM_SUMMARY_BASE_URL` | *(from OpenClaw / provider default)* | Base URL override for summarization API calls |\n| `LCM_EXPANSION_MODEL` | *(from OpenClaw)* | Model override for `lcm_expand_query` sub-agent (e.g. `anthropic/claude-haiku-4-5`) |\n| `LCM_EXPANSION_PROVIDER` | *(from OpenClaw)* | Provider override for `lcm_expand_query` sub-agent |\n| `LCM_DELEGATION_TIMEOUT_MS` | `120000` | Max time to wait for delegated `lcm_expand_query` sub-agent completion |\n| `LCM_AUTOCOMPACT_DISABLED` | `false` | Disable automatic compaction after turns |\n| `LCM_PRUNE_HEARTBEAT_OK` | `false` | Retroactively delete `HEARTBEAT_OK` turn cycles from LCM storage |\n\n### Expansion model override requirements\n\nIf you want `lcm_expand_query` to run on a dedicated model via `expansionModel` or `LCM_EXPANSION_MODEL`, OpenClaw must explicitly trust the plugin to request sub-agent model overrides.\n\nAdd a `subagent` policy under `plugins.entries.lossless-claw` and allowlist the canonical `provider/model` target you want the plugin to use:\n\n```json\n{\n  \"models\": {\n    \"openai/gpt-4.1-mini\": {}\n  },\n  \"plugins\": {\n    \"entries\": {\n      \"lossless-claw\": {\n        \"enabled\": true,\n        \"subagent\": {\n          \"allowModelOverride\": true,\n          \"allowedModels\": [\"openai/gpt-4.1-mini\"]\n        },\n        \"config\": {\n          \"expansionModel\": \"openai/gpt-4.1-mini\"\n        }\n      }\n    }\n  }\n}\n```\n\n- `subagent.allowModelOverride` is required for OpenClaw to honor plugin-requested per-run `provider`/`model` overrides.\n- `subagent.allowedModels` is 
optional but recommended. Use `\"*\"` only if you intentionally want to trust any target model.\n- The chosen expansion target must also be available in OpenClaw's normal model catalog. If it is not already configured elsewhere, add it under the top-level `models` map as shown above.\n- If you prefer splitting provider and model, set `config.expansionProvider` and use a bare `config.expansionModel`.\n\nBeyond the expansion settings, these plugin config keys mirror the environment variables documented above:\n\n- `ignoreSessionPatterns`\n- `statelessSessionPatterns`\n- `skipStatelessSessions`\n- `newSessionRetainDepth`\n- `summaryModel`\n- `summaryProvider`\n- `delegationTimeoutMs`\n\nEnvironment variables still win over plugin config when both are set.\n\n### Summary model priority\n\nFor compaction summarization, lossless-claw resolves the model in this order:\n\n1. `LCM_SUMMARY_MODEL` / `LCM_SUMMARY_PROVIDER`\n2. Plugin config `summaryModel` / `summaryProvider`\n3. OpenClaw's default compaction model/provider\n4. Legacy per-call model/provider hints\n\nIf `summaryModel` already includes a provider prefix such as `anthropic/claude-sonnet-4-20250514`, `summaryProvider` is ignored for that choice. Otherwise, the provider falls back to the matching override, then `OPENCLAW_PROVIDER`, then the provider inferred by the caller.\n\n### Recommended starting configuration\n\n```bash\nLCM_FRESH_TAIL_COUNT=64\nLCM_LEAF_CHUNK_TOKENS=20000\nLCM_INCREMENTAL_MAX_DEPTH=1\nLCM_CONTEXT_THRESHOLD=0.75\n```\n\n- **freshTailCount=64** protects the last 64 messages from compaction, giving the model more recent context for continuity.\n- **leafChunkTokens=20000** limits how large each leaf compaction chunk can grow before LCM summarizes it. Increase this when your summary provider is quota-limited and frequent leaf compactions are exhausting that quota.\n- **incrementalMaxDepth=1** runs one condensed pass after each leaf compaction by default. 
Set to `0` for leaf-only behavior, a larger positive integer for a deeper cap, or `-1` for unlimited cascading.\n- **contextThreshold=0.75** triggers compaction when context reaches 75% of the model's window, leaving headroom for the model's response.\n\n### Session reset semantics\n\nLossless-claw distinguishes OpenClaw's two session-reset commands:\n\n- `/new` keeps the active conversation row and all stored summaries, but prunes `context_items` so the next turn rebuilds context from retained summaries instead of the fresh tail.\n- `/reset` archives the active conversation row and creates a new active row for the same stable `sessionKey`, giving the next turn a clean LCM conversation while preserving prior history.\n\n`newSessionRetainDepth` (or `LCM_NEW_SESSION_RETAIN_DEPTH`) controls how much summary structure survives `/new`:\n\n- `-1`: keep all existing context items\n- `0`: keep all summaries, drop only fresh-tail messages\n- `1`: keep d1+ summaries\n- `2`: keep d2+ summaries; recommended default\n- `3+`: keep only deeper, more abstract summaries\n\nLossless-claw currently applies these storage semantics through the `before_reset` hook only. User-facing confirmation text after `/new` or `/reset` must be emitted by OpenClaw's command handlers.\n\n### Session exclusion patterns\n\nUse `ignoreSessionPatterns` or `LCM_IGNORE_SESSION_PATTERNS` to keep low-value sessions completely out of LCM. 
Matching sessions do not create conversations, do not store messages, and do not participate in compaction or delegated expansion grants.\n\nPattern rules:\n\n- `*` matches any characters except `:`\n- `**` matches anything, including `:`\n- Patterns match the full session key\n\nExamples:\n\n- `agent:*:cron:**` excludes cron sessions for any agent, including isolated run sessions like `agent:main:cron:daily-digest:run:run-123`\n- `agent:main:subagent:**` excludes all main-agent subagent sessions\n- `agent:ops:**` excludes every session under the `ops` agent id\n\nEnvironment variable example:\n\n```bash\nLCM_IGNORE_SESSION_PATTERNS=agent:*:cron:**,agent:main:subagent:**\n```\n\nPlugin config example:\n\n```json\n{\n  \"plugins\": {\n    \"entries\": {\n      \"lossless-claw\": {\n        \"config\": {\n          \"ignoreSessionPatterns\": [\n            \"agent:*:cron:**\",\n            \"agent:main:subagent:**\"\n          ]\n        }\n      }\n    }\n  }\n}\n```\n\n### Stateless session patterns\n\nUse `statelessSessionPatterns` or `LCM_STATELESS_SESSION_PATTERNS` for sessions that should still be able to read from existing LCM context, but should never create or mutate LCM state themselves. 
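The same `*`/`**` pattern rules described above apply here; as a rough sketch (illustrative code, not the plugin's actual matcher):

```typescript
// Illustrative sketch of the documented `*` / `**` session-key glob
// semantics. Not the plugin's actual matcher.
function patternToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters (but not `*`, which we translate below).
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const body = escaped
    .replace(/\*\*/g, "\u0000") // placeholder so `*` replacement skips `**`
    .replace(/\*/g, "[^:]*")    // `*` matches any characters except `:`
    .replace(/\u0000/g, ".*");  // `**` matches anything, including `:`
  return new RegExp(`^${body}$`); // patterns match the full session key
}

function matchesSessionPattern(sessionKey: string, pattern: string): boolean {
  return patternToRegExp(pattern).test(sessionKey);
}
```

Under these rules, `agent:*:cron:**` matches `agent:main:cron:daily-digest:run:run-123` but not `agent:main:chat`. Stateless patterns are matched against the full session key in the same way.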
This is useful for delegated or temporary sub-agent sessions that should benefit from retained context without polluting the database.\n\nWhen `skipStatelessSessions` or `LCM_SKIP_STATELESS_SESSIONS` is enabled, matching sessions:\n\n- skip bootstrap imports\n- skip message persistence during ingest and after-turn hooks\n- skip compaction writes and delegated expansion grant writes\n- can still assemble context from already-persisted conversations when a matching conversation exists\n\nPattern rules are the same as `ignoreSessionPatterns`, and matching is done against the full session key.\n\nEnvironment variable example:\n\n```bash\nLCM_STATELESS_SESSION_PATTERNS=agent:*:subagent:**,agent:ops:subagent:**\nLCM_SKIP_STATELESS_SESSIONS=true\n```\n\nPlugin config example:\n\n```json\n{\n  \"plugins\": {\n    \"entries\": {\n      \"lossless-claw\": {\n        \"config\": {\n          \"statelessSessionPatterns\": [\n            \"agent:*:subagent:**\",\n            \"agent:ops:subagent:**\"\n          ],\n          \"skipStatelessSessions\": true\n        }\n      }\n    }\n  }\n}\n```\n\n### OpenClaw session reset settings\n\nLCM preserves history through compaction, but it does **not** change OpenClaw's core session reset policy. 
If sessions are resetting sooner than you want, increase OpenClaw's `session.reset.idleMinutes` or use a channel/type-specific override.\n\n```json\n{\n  \"session\": {\n    \"reset\": {\n      \"mode\": \"idle\",\n      \"idleMinutes\": 10080\n    }\n  }\n}\n```\n\n- `session.reset.mode: \"idle\"` keeps a session alive until the idle window expires.\n- `session.reset.idleMinutes` is the actual reset interval in minutes.\n- OpenClaw does **not** currently enforce a maximum `idleMinutes`; in source it is validated only as a positive integer.\n- If you also use daily reset mode, `idleMinutes` acts as a secondary guard: the session resets at whichever boundary is reached first, the daily reset or the idle window.\n- Legacy `session.idleMinutes` still works, but OpenClaw prefers `session.reset.idleMinutes`.\n\nUseful values:\n\n- `1440` = 1 day\n- `10080` = 7 days\n- `43200` = 30 days\n- `525600` = 365 days\n\nFor most long-lived LCM setups, the 7-day idle window (`10080`) shown above is a good starting point.\n\n## Documentation\n\n- [Configuration guide](docs/configuration.md)\n- [Architecture](docs/architecture.md)\n- [Agent tools](docs/agent-tools.md)\n- [TUI Reference](docs/tui.md)\n- [lcm-tui](tui/README.md)\n- [Optional: enable FTS5 for fast full-text search](docs/fts5.md)\n\n## Development\n\n```bash\n# Run tests\nnpx vitest\n\n# Type check\nnpx tsc --noEmit\n\n# Run a specific test file\nnpx vitest test/engine.test.ts\n```\n\n### Project structure\n\n```\nindex.ts                    # Plugin entry point and registration\nsrc/\n  engine.ts                 # LcmContextEngine — implements ContextEngine interface\n  assembler.ts              # Context assembly (summaries + messages → model context)\n  compaction.ts             # CompactionEngine — leaf passes, condensation, sweeps\n  summarize.ts              # Depth-aware prompt generation and LLM summarization\n  retrieval.ts       
       # RetrievalEngine — grep, describe, expand operations\n  expansion.ts              # DAG expansion logic for lcm_expand_query\n  expansion-auth.ts         # Delegation grants for sub-agent expansion\n  expansion-policy.ts       # Depth/token policy for expansion\n  large-files.ts            # File interception, storage, and exploration summaries\n  integrity.ts              # DAG integrity checks and repair utilities\n  transcript-repair.ts      # Tool-use/result pairing sanitization\n  types.ts                  # Core type definitions (dependency injection contracts)\n  openclaw-bridge.ts        # Bridge utilities\n  db/\n    config.ts               # LcmConfig resolution from env vars\n    connection.ts           # SQLite connection management\n    migration.ts            # Schema migrations\n  store/\n    conversation-store.ts   # Message persistence and retrieval\n    summary-store.ts        # Summary DAG persistence and context item management\n    fts5-sanitize.ts        # FTS5 query sanitization\n  tools/\n    lcm-grep-tool.ts        # lcm_grep tool implementation\n    lcm-describe-tool.ts    # lcm_describe tool implementation\n    lcm-expand-tool.ts      # lcm_expand tool (sub-agent only)\n    lcm-expand-query-tool.ts # lcm_expand_query tool (main agent wrapper)\n    lcm-conversation-scope.ts # Conversation scoping utilities\n    common.ts               # Shared tool utilities\ntest/                       # Vitest test suite\nspecs/                      # Design specifications\nopenclaw.plugin.json        # Plugin manifest with config schema and UI hints\ntui/                        # Interactive terminal UI (Go)\n  main.go                   # Entry point and bubbletea app\n  data.go                   # Data loading and SQLite queries\n  dissolve.go               # Summary dissolution\n  repair.go                 # Corrupted summary repair\n  rewrite.go                # Summary re-summarization\n  transplant.go             # Cross-conversation DAG 
copy\n  prompts/                  # Depth-aware prompt templates\n.goreleaser.yml             # GoReleaser config for TUI binary releases\n```\n\n## License\n\nMIT\n","funding_links":[],"categories":["Trending Repos — 19 March 2026"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmartian-engineering%2Flossless-claw","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmartian-engineering%2Flossless-claw","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmartian-engineering%2Flossless-claw/lists"}