{"id":48871227,"url":"https://github.com/ai-2070/memex","last_synced_at":"2026-04-18T00:00:28.433Z","repository":{"id":350720043,"uuid":"1207583479","full_name":"ai-2070/memex","owner":"ai-2070","description":"MemEX: Multi-session continuity for AI systems. A structured memory substrate that represents knowledge as an evolving belief state, with explicit provenance, scoring, and contradiction-aware updates.","archived":false,"fork":false,"pushed_at":"2026-04-16T03:23:38.000Z","size":181,"stargazers_count":2,"open_issues_count":2,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2026-04-16T23:03:04.650Z","etag":null,"topics":["ai-agents","ai-memory","cognition","context","graph-memory","llms","memory","multi-session","thinking","tokens"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ai-2070.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-11T05:50:02.000Z","updated_at":"2026-04-16T03:23:23.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/ai-2070/memex","commit_stats":null,"previous_names":["ai-2070/memex"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/ai-2070/memex","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ai-2070%2Fmemex","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ai-2070%2Fmemex/tags","re
leases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ai-2070%2Fmemex/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ai-2070%2Fmemex/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ai-2070","download_url":"https://codeload.github.com/ai-2070/memex/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ai-2070%2Fmemex/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31950891,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-17T17:29:20.459Z","status":"ssl_error","status_checked_at":"2026-04-17T17:28:47.801Z","response_time":62,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-agents","ai-memory","cognition","context","graph-memory","llms","memory","multi-session","thinking","tokens"],"created_at":"2026-04-15T22:02:29.709Z","updated_at":"2026-04-18T00:00:28.149Z","avatar_url":"https://github.com/ai-2070.png","language":"TypeScript","readme":"# MemEX — Structured Memory for AI Agents\n\nMulti-session continuity for AI systems.\n\nMemEX stores beliefs, evidence, conflicts, and updates -- not just retrieved text. 
It gives agents a continuous belief state across sessions instead of fragmented chat logs.\n\n## The Problem\n\nEvery chat session starts from scratch. Memory systems try to fix this by appending text and summarizing when it gets long. But that loses:\n\n- **Why** something is believed (provenance)\n- **How much** to trust it (authority, conviction)\n- **What conflicts** with it (contradictions)\n- **Whether** it's still relevant (decay)\n- **Where** it came from (source attribution)\n\nMost systems conflate \"I can retrieve it\" with \"I know it.\" Retrieval is not memory. MemEX separates recall (a tool problem) from belief state (a knowledge problem).\n\n## What MemEX Does\n\nMemEX is a typed, scored, provenance-tracked graph. Each memory item carries:\n\n- A **kind** -- what it is (observation, assertion, assumption, hypothesis, derivation, simulation, policy, trait)\n- A **source_kind** -- how it got here (user-stated, observed, inferred, imported)\n- Three **scores** -- authority (trust), conviction (author confidence), importance (attention priority)\n- **Parents** -- what items it was derived from, forming provenance chains\n- **Edges** -- typed relationships to other items (supports, contradicts, supersedes, alias)\n\nThis means the system can:\n\n- Carry forward beliefs across sessions, not just text\n- Track what was observed vs inferred vs assumed\n- Surface contradictions instead of silently overwriting\n- Explain *why* it believes something (provenance tree)\n- Decay stale context while preserving stable knowledge\n- Recognize that two observations refer to the same entity\n\n## Where MemEX Fits\n\nMemEX is the structured memory layer in a larger stack. 
It doesn't replace your other tools -- it gives them something better to read from and write to.\n\n```\n┌───────────────────────────────────────────────┐\n│                  Agent / App                  │\n│                                               │\n│  ┌───────────┐  ┌───────────┐  ┌───────────┐  │\n│  │   Chat    │  │  Working  │  │ Cognition │  │\n│  │  Window   │  │  Memory   │  │   Layer   │  │\n│  │ (sliding) │  │ (scratch) │  │(thinking) │  │\n│  └─────┬─────┘  └─────┬─────┘  └─────┬─────┘  │\n│        │              │              │        │\n│        └──────────────┼──────────────┘        │\n│                       │                       │\n│               ┌───────▼───────┐               │\n│               │     MemEX     │               │\n│               │(this library) │               │\n│               └───────┬───────┘               │\n│                       │                       │\n│        ┌──────────────┼──────────────┐        │\n│        │              │              │        │\n│  ┌─────▼─────┐  ┌─────▼─────┐  ┌─────▼─────┐  │\n│  │  Vector   │  │   Text    │  │   Event   │  │\n│  │  Search   │  │  Search   │  │   Store   │  │\n│  └───────────┘  └───────────┘  └───────────┘  │\n└───────────────────────────────────────────────┘\n```\n\n### How the pieces connect\n\n**Chat window (sliding context)** -- the current conversation. As messages flow, the agent extracts observations, assertions, and preferences and writes them to MemEX. The chat window is ephemeral; MemEX is where things persist.\n\n**Working memory (scratchpad)** -- short-lived, high-importance items the agent is actively reasoning about. These live in MemEX with `kind: \"hypothesis\"` or `kind: \"assumption\"` and high `importance`. After processing, their importance decays and they settle into long-term memory.\n\n**Vector / text search** -- MemEX stores structured items, not embeddings. 
Search tools subscribe to MemEX lifecycle events and maintain their own indexes. Search indexes are derived from MemEX, not the other way around.\n\n**Cognition layer** -- uses `getScoredItems` and `smartRetrieve` to build its thinking queue. Writes back inferred items, resolved contradictions, and updated scores. The agent prioritizes thinking using authority, conviction, and importance.\n\n**Event store** -- the append-only command log. MemEX emits lifecycle events that get persisted. On restart, `replayFromEnvelopes` rebuilds the graph from the log.\n\nMemEX is the system of record. It does not replace retrieval systems -- it governs them. Vector search and keyword search are recall tools; MemEX is the epistemic coordination layer that decides what matters, what conflicts, and what to include in context. The library itself is pure TypeScript with a single runtime dependency (`uuidv7`). Storage, search, and bus integration belong in the service layer above.\n\n### What changes in agent behavior\n\nWithout MemEX, an agent:\n- Forgets between sessions, or retrieves flat text with no trust signal\n- Can't tell if something was observed, inferred, or assumed\n- Silently overwrites old beliefs with new ones\n- Can't explain why it believes something\n- Treats everything as equally important\n\nWith MemEX, an agent:\n- Carries forward a structured belief state across sessions\n- Knows the difference between an observation and a hypothesis\n- Surfaces contradictions instead of hiding them\n- Can trace any belief back to its evidence chain\n- Prioritizes what to think about based on importance and uncertainty\n- Lets stale context fade while stable knowledge persists\n\n## Install\n\n```bash\nnpm install @ai2070/memex\n```\n\nFor runtime validation with Zod schemas (optional):\n\n```bash\nnpm install zod\n```\n\n## Quick Start\n\n```ts\nimport {\n  createGraphState,\n  createMemoryItem,\n  applyCommand,\n  getItems,\n  getScoredItems,\n  smartRetrieve,\n} from 
\"@ai2070/memex\";\n\n// create an empty graph\nlet state = createGraphState();\n\n// add an observation\nconst obs = createMemoryItem({\n  scope: \"user:laz/general\",\n  kind: \"observation\",\n  content: { key: \"login_count\", value: 42 },\n  author: \"agent:monitor\",\n  source_kind: \"observed\",\n  authority: 0.9,\n  importance: 0.7,\n});\n\nconst result = applyCommand(state, { type: \"memory.create\", item: obs });\nstate = result.state;\n\n// add a hypothesis derived from the observation\nconst hyp = createMemoryItem({\n  scope: \"user:laz/general\",\n  kind: \"hypothesis\",\n  content: { key: \"is_power_user\", value: true },\n  author: \"agent:reasoner\",\n  source_kind: \"agent_inferred\",\n  parents: [obs.id],\n  authority: 0.4,\n  conviction: 0.7,\n  importance: 0.8,\n});\n\nstate = applyCommand(state, { type: \"memory.create\", item: hyp }).state;\n\n// query with filters\nconst recent = getItems(state, {\n  or: [{ kind: \"observation\" }, { kind: \"assertion\" }],\n  range: { authority: { min: 0.5 } },\n  created: { after: Date.now() - 86400000 },\n});\n\n// scored retrieval with time decay\nconst ranked = getScoredItems(\n  state,\n  {\n    authority: 0.5,\n    conviction: 0.3,\n    importance: 0.2,\n    decay: { rate: 0.1, interval: \"day\", type: \"exponential\" },\n  },\n  { pre: { scope: \"user:laz/general\" }, limit: 10 },\n);\n\n// smart retrieval: decay + contradiction surfacing + diversity + budget\nconst context = smartRetrieve(state, {\n  budget: 4096,\n  costFn: (item) =\u003e JSON.stringify(item.content).length,\n  weights: {\n    authority: 0.5,\n    importance: 0.5,\n    decay: { rate: 0.1, interval: \"day\", type: \"exponential\" },\n  },\n  filter: { scope: \"user:laz/general\" },\n  contradictions: \"surface\",\n  diversity: { author_penalty: 0.3 },\n});\n```\n\n## Core Concepts\n\n### Memory Items\n\nNot everything is a \"fact.\" A `MemoryItem` can be an observation, an assertion, an assumption, a hypothesis, a derivation, a 
simulation, a policy, or a trait. The `kind` field says what it *is*; the `source_kind` field says how it *got here*.\n\n### Three Scores\n\n| Score | Question | Range |\n|-------|----------|-------|\n| `authority` | How much should the system trust this? | 0..1 |\n| `conviction` | How sure was the author? | 0..1 |\n| `importance` | How much attention does this need right now? (salience) | 0..1 |\n\nThese are orthogonal. A hypothesis can be high-importance (matters a lot) but low-authority (not yet verified).\n\n### Time Decay\n\nScores decay over time at query time -- the stored values are not mutated. Configure decay per query:\n\n```ts\n{ rate: 0.1, interval: \"day\", type: \"exponential\" }\n```\n\nThree types: **exponential** (smooth curve, never zero), **linear** (straight to zero), **step** (drops at interval boundaries). You can also filter out items that have decayed below a threshold.\n\n### Provenance\n\nItems can declare **parents** -- the items they were derived or inferred from. This creates provenance chains that let the system explain *why* it believes something:\n\n```ts\ngetSupportSet(state, claimId)\n// -\u003e [claim, parent1, parent2, grandparent1] -- everything that justifies this claim\n```\n\nIf a parent is retracted, `getStaleItems` finds orphaned children. `cascadeRetract` removes the entire dependency chain.\n\n### Contradictions\n\nWhen two items conflict, they can be linked with a `CONTRADICTS` edge. At retrieval time:\n\n- `contradictions: \"filter\"` -- keep the higher-scoring side (clean context)\n- `contradictions: \"surface\"` -- keep both, flagged with `contradicted_by` (agent reasoning)\n\nContradictions can be resolved: `resolveContradiction` creates a `SUPERSEDES` edge and lowers the loser's authority.\n\n### Identity\n\nTwo observations of the same entity can be aliased: `markAlias` creates bidirectional `ALIAS` edges. 
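As a standalone sketch (plain TypeScript for illustration, not the library's internals), transitive closure over bidirectional edges is a breadth-first walk on an undirected adjacency map:

```ts
// Sketch: resolving an identity group by transitive closure over
// bidirectional alias edges. Illustrative only.
type AliasEdge = { a: string; b: string };

function aliasGroup(start: string, edges: AliasEdge[]): Set<string> {
  // Build an undirected adjacency map, since alias edges are bidirectional.
  const adj = new Map<string, string[]>();
  for (const { a, b } of edges) {
    adj.set(a, [...(adj.get(a) ?? []), b]);
    adj.set(b, [...(adj.get(b) ?? []), a]);
  }
  // Breadth-first walk collects everything transitively reachable.
  const group = new Set<string>([start]);
  const queue: string[] = [start];
  while (queue.length > 0) {
    const id = queue.shift()!;
    for (const next of adj.get(id) ?? []) {
      if (!group.has(next)) {
        group.add(next);
        queue.push(next);
      }
    }
  }
  return group;
}

// m1~m2 and m2~m3 imply m1, m2, and m3 all refer to the same entity.
const group = aliasGroup('m1', [
  { a: 'm1', b: 'm2' },
  { a: 'm2', b: 'm3' },
]);
```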
`getAliasGroup` returns the full identity group via transitive closure.\n\n### Edges\n\nTyped relationships between items:\n\n| Edge | Meaning |\n|------|---------|\n| `DERIVED_FROM` | Relationship discovered after creation |\n| `CONTRADICTS` | Two items assert conflicting things |\n| `SUPPORTS` | Evidence for another item |\n| `ABOUT` | References another item |\n| `SUPERSEDES` | Replaces another item (conflict resolution) |\n| `ALIAS` | Same entity, different observations |\n\n### Events\n\nThree categories, all under `namespace: \"memory\"`:\n\n- **Commands** (imperative): `memory.create`, `memory.update`, `memory.retract`, `edge.create`, `edge.update`, `edge.retract`\n- **Lifecycle** (past tense): `memory.created`, `memory.updated`, `memory.retracted`, `edge.created`, `edge.updated`, `edge.retracted`\n- **State**: `state.memory`, `state.edge`\n\nCommands go in, lifecycle events come out of the reducer, state events are full snapshots for downstream consumers.\n\n### Immutability\n\n`applyCommand` never mutates input state. It returns a new `GraphState` and an array of lifecycle events. History is in the append-only event log; `GraphState` is always the latest snapshot.\n\n## Design Philosophy\n\nEvery system encodes assumptions about truth, knowledge, and time -- whether it acknowledges them or not. MemEX makes those assumptions explicit.\n\n| Question | Typical system | MemEX |\n|----------|---------------|-------|\n| What is knowledge? | Similar text (vectors) or structured facts (SQL) | Beliefs with provenance, confidence, and conflict |\n| What exists? | Documents, rows | Observations, hypotheses, derivations, policies, traits |\n| Is truth binary? | Yes (stored or not) | No -- graded by authority, conviction, and importance |\n| Does knowledge decay? | No (or manually pruned) | Yes -- query-time decay, configurable per retrieval |\n| What about contradictions? 
| Overwrite or ignore | Represent, carry, and optionally resolve |\n\nMost memory systems compress and resolve -- they produce a single clean narrative. MemEX preserves and represents -- it maintains a field of competing claims that a reasoning layer can interpret. The graph is the pre-answer belief state, not the final answer.\n\nThis is a deliberate architectural choice. MemEX is not a thinking system. It is a substrate that makes thinking systems possible. Storage, search, and cognition belong above the library. MemEX provides the structured epistemic state they operate on.\n\nVector search tells you what is similar. MemEX tells you what you believe.\n\n## Three Graphs\n\nMemEX contains three logical graphs in one package. Use what you need:\n\n| Graph | Purpose | Core type | Namespace |\n|-------|---------|-----------|-----------|\n| **Memory** | Epistemic state -- beliefs, evidence, contradictions | `MemoryItem` | `\"memory\"` |\n| **Intent** | Goals and objectives | `Intent` | `\"intent\"` |\n| **Task** | Units of work tied to intents | `Task` | `\"task\"` |\n\nAll three follow the same pattern: commands → reducer → lifecycle events. They cross-reference by ID:\n\n```ts\n// intent links to memory items that motivated it\nconst intent = createIntent({ label: \"find_kati\", root_memory_ids: [obs.id], ... });\n\n// sub-intent decomposes a parent goal\nconst sub = createIntent({ label: \"check_financials\", parent_id: intent.id, ... });\n\n// task links to its parent intent and memory items it consumes/produces\nconst task = createTask({ intent_id: intent.id, input_memory_ids: [obs.id], ... });\n\n// subtask breaks a task into steps\nconst step = createTask({ intent_id: intent.id, parent_id: task.id, action: \"parse_profile\", ... 
});\n\n// after task completes, memory items link back\ncreateMemoryItem({ ..., intent_id: intent.id, task_id: task.id });\n```\n\n## The Loop\n\nThe three graphs form a continuous cycle:\n\n```\n    ┌─────────────────────────────────────────┐\n    │                                         │\n    ▼                                         │\n Memory ──────► Intent ──────► Task ──────────┘\n (belief)       (direction)    (execution)\n    │               │              │\n    │  something    │  spawns      │  produces\n    │  important    │  actionable  │  new memory\n    │  or uncertain │  steps       │  (results,\n    │  appears      │              │   failures,\n    │               │              │   observations)\n    └───────────────┘              │\n         updates belief            │\n         state with new ◄──────────┘\n         evidence\n```\n\n1. **Memory produces intents** — an important or uncertain item surfaces, triggering a goal\n2. **Intents spawn tasks** — the goal breaks into actionable steps\n3. **Tasks produce new memory** — results, observations, and failures write back as memory items\n4. **Memory updates belief state** — new evidence resolves contradictions, reinforces or decays existing beliefs\n\nMost AI systems mix these together: goals hidden in prompts, tasks implicit in code, memory as text blobs. MemEX separates them:\n\n| Layer | Responsibility |\n|-------|---------------|\n| Memory | What is believed |\n| Intent | What is wanted |\n| Task | What is done |\n\nEach layer has its own types, commands, reducer, and query — but they reference each other by ID and share the same event envelope pattern. 
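A minimal sketch of how those ID cross-references compose (deliberately simplified shapes; the real types carry many more fields):

```ts
// Simplified shapes for the three layers, for illustration only.
type MemoryRef = { id: string; intent_id?: string; task_id?: string };
type IntentRef = { id: string; root_memory_ids: string[] };
type TaskRef = { id: string; intent_id: string; output_memory_ids: string[] };

// Walk a belief back to the task that produced it and the intent behind that task.
function traceBack(
  m: MemoryRef,
  tasks: Map<string, TaskRef>,
  intents: Map<string, IntentRef>,
) {
  const task = m.task_id ? tasks.get(m.task_id) : undefined;
  const intent = task ? intents.get(task.intent_id) : undefined;
  return { task, intent };
}

const intent: IntentRef = { id: 'i1', root_memory_ids: ['m0'] };
const task: TaskRef = { id: 't1', intent_id: 'i1', output_memory_ids: ['m1'] };
const result: MemoryRef = { id: 'm1', intent_id: 'i1', task_id: 't1' };

const trace = traceBack(result, new Map([[task.id, task]]), new Map([[intent.id, intent]]));
```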
The separation is what makes the loop auditable: you can trace any belief back to the task that produced it, the intent that motivated it, and the evidence it was based on.\n\n## Cognitive Transfer\n\nThe three graphs together form a complete cognitive state that can be serialized, transferred, and resumed by another agent.\n\n```\nAgent A → Agent B:\n\n  Memory export  (what I know)\n+ Intent export  (what I want)\n+ Task export    (what I've tried, what worked, what failed)\n= Complete cognitive state transferred\n```\n\nThis isn't just data migration. The receiving agent inherits:\n\n- **Context** — the belief state (observations, hypotheses, contradictions)\n- **Direction** — active goals and their priorities\n- **Progress** — which approaches were tried, which failed, which are still running\n\nThe agent picks up where the other left off. It doesn't re-derive context from scratch. It doesn't retry failed approaches. It continues.\n\n### The vector model\n\nThink of the three graphs as a cognitive vector:\n\n| Component | Role | Analogy |\n|-----------|------|---------|\n| **Memory** | Origin | Starting point in state space — what is known |\n| **Intent** | Magnitude | How much energy is allocated — priority and importance |\n| **Task** | Direction | Which approaches have been tried — path through solution space |\n\nTransferring cognition between agents is transferring this vector. The receiving agent starts from the same origin (memory), pursues the same goals with the same energy (intent), and avoids the same dead ends (task history).\n\nThis is what `exportSlice` / `importSlice` enables at the library level. The transport layer (network, bus, file) is outside the library; MemEX provides the serializable structure.\n\n### What transplant enables\n\n| Pattern | How it works |\n|---------|-------------|\n| **Safe delegation** | Export a slice to a sub-agent. It operates on its own copy. Merge results back append-only -- no risk of corrupting the main graph. 
|\n| **Parallel reasoning** | Fork belief state into multiple slices. Run different reasoning paths independently. Compare outcomes before merging. |\n| **Reproducibility** | Event logs + deterministic slices mean any state can be replayed, audited, or debugged after the fact. |\n| **State mobility** | Memory is not tied to one runtime. Export, serialize, move between agents or machines, rehydrate anywhere. |\n\nMemory is no longer a local resource. It is portable belief.\n\n## Features\n\n**Memory graph:**\n- Full query algebra: `and`, `or`, `not`, `range`, `ids`, `scope_prefix`, `parents` (includes/count), `intent_id`, `task_id`, `meta` (dot-path), `meta_has`, `created` (time range), `decay` (freshness filter)\n- Multi-sort with tiebreakers (authority, conviction, importance, recency)\n- Configurable time decay: exponential, linear, or step -- applied at query time, not stored\n- Scored retrieval with pre/post filters, min_score threshold, and decay\n- Smart retrieval: contradiction-aware packing + diversity penalties + budget limits\n- Budget-aware retrieval (greedy knapsack by score/cost)\n- Provenance trees and minimal support sets (`getSupportTree`, `getSupportSet`)\n- Temporal sort and time-based importance decay\n- Bulk transforms with conditional update/retract (`applyMany`)\n- Conflict detection and resolution (`CONTRADICTS` / `SUPERSEDES`)\n- Staleness detection and cascade retraction\n- Identity resolution (transitive `ALIAS` groups)\n- Serialization (`toJSON` / `fromJSON` / `stringify` / `parse`)\n- Graph stats (counts by kind, author, scope, edge kind)\n- Event envelope wrapping for bus integration\n- Command log replay for state reconstruction\n\n**Intent graph:**\n- Status machine: active ↔ paused → completed / cancelled\n- Sub-intent hierarchies via `parent_id`\n- Query by owner, status, priority, parent, linked memory items\n- Invalid transitions throw typed errors\n\n**Task graph:**\n- Status machine: pending → running → completed / failed, with 
retry support (failed → running)\n- Subtask hierarchies via `parent_id`\n- Links to parent intent, input/output memory items, agent assignment\n- Query by intent, action, status, agent, parent, linked memory items\n\n**Transplant (export / import):**\n- Export a self-contained slice by walking provenance chains, aliases, related intents/tasks\n- Import into another graph instance — default: skip existing ids, append-only\n- Optional shallow compare to detect conflicts, optional re-id to mint new ids on conflict\n- JSON-serializable slices for migration, sub-agent isolation, cloning, and backup\n\n**Validation (optional, requires `zod \u003e= 4`):**\n- Zod schemas for every exported type — `MemoryItemSchema`, `EdgeSchema`, `IntentSchema`, `TaskSchema`, commands, filters, and more\n- Schemas are type-wired to the source interfaces via `z.ZodType\u003cT\u003e` — if a type changes, the schema must be updated or the build fails\n- Available as a separate entry point: `import { MemoryItemSchema } from \"@ai2070/memex/schemas\"`\n\n## Multi-Agent \u0026 Crew Orchestration\n\nMemEX supports multi-agent systems where each agent works on a segment of the graph. No separate memory stores per agent — one graph, segmented by conventions.\n\n### Soft isolation (shared graph, scoped views)\n\nEach agent reads and writes to the shared graph, filtered by `meta.agent_id` and `scope`:\n\n```ts\n// agent:researcher only sees its own observations\nconst myMemories = getItems(state, {\n  meta: { agent_id: \"agent:researcher\" },\n});\n\n// agent:analyst sees everything in a project scope\nconst projectMemories = getItems(state, {\n  scope_prefix: \"project:cyberdeck/\",\n});\n\n// orchestrator sees all agents' work, ranked by importance\nconst ranked = getScoredItems(state,\n  { authority: 0.5, importance: 0.5 },\n  { pre: { scope_prefix: \"project:cyberdeck/\" } },\n);\n```\n\nAgents write with their own `author` and `meta.agent_id`. 
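Conceptually, soft isolation is just predicate filtering over one shared collection; a standalone sketch with a simplified item shape (not the full `MemoryItem` type):

```ts
// Sketch: segmentation as plain filtering on item fields. Illustrative only.
type Item = { id: string; scope: string; author: string; meta: Record<string, string> };

const items: Item[] = [
  { id: 'm1', scope: 'project:cyberdeck/findings', author: 'agent:researcher', meta: { agent_id: 'agent:researcher' } },
  { id: 'm2', scope: 'project:cyberdeck/analysis', author: 'agent:analyst', meta: { agent_id: 'agent:analyst' } },
  { id: 'm3', scope: 'user:laz/general', author: 'agent:researcher', meta: { agent_id: 'agent:researcher' } },
];

// One agent's view: filter by meta.agent_id.
const mine = items.filter((i) => i.meta.agent_id === 'agent:researcher');

// A project-wide view: filter by scope prefix.
const project = items.filter((i) => i.scope.startsWith('project:cyberdeck/'));
```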
The orchestrator can query across all agents, compare their findings, and resolve contradictions.\n\n### Hard isolation (exported slices)\n\nFor risky operations or external sandboxes, export a slice for the sub-agent to work on independently:\n\n```ts\n// give the sub-agent a slice of the graph\nconst slice = exportSlice(memState, intentState, taskState, {\n  memory_ids: relevantIds,\n  include_parents: true,\n  include_related_tasks: true,\n});\n\n// sub-agent works on its own copy...\n// ...then merge results back\nconst { memState: updated, report } = importSlice(\n  memState, intentState, taskState,\n  subAgentSlice,\n);\n// report.created -\u003e what the sub-agent added\n// report.updated -\u003e what was merged into existing items\n// existing items untouched by default (append-only)\n```\n\n### Crew patterns\n\n| Pattern | How |\n|---------|-----|\n| Shared workspace | All agents write to the same scope, filter by `meta.agent_id` to see own work |\n| Pipeline | Agent A's `output_memory_ids` on a task become agent B's `input_memory_ids` |\n| Review | Agent B reads agent A's items, creates `SUPPORTS` / `CONTRADICTS` edges |\n| Delegation | Orchestrator creates an intent, assigns tasks to specific agents via `task.agent_id` |\n| Sandbox | Export slice → sub-agent mutates copy → import results back |\n\n### What the `author` and `meta` fields enable\n\n```ts\n// who wrote this?\nitem.author                     // \"agent:researcher\"\n\n// which agent instance?\nitem.meta.agent_id              // \"agent:researcher-v2\"\n\n// which session?\nitem.meta.session_id            // \"session-abc\"\n\n// which crew run?\nitem.meta.crew_id               // \"crew:investigation-42\"\n\n// which intent spawned this?\nitem.intent_id                  // \"i1\"\n\n// which task produced this?\nitem.task_id                    // \"t1\"\n```\n\nAll of these are queryable via `meta` and `meta_has` filters. 
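A dot-path lookup over `meta` can be sketched as a nested property walk (illustrative helper only; the nested `crew.run_id` key here is hypothetical, not from the examples above):

```ts
// Sketch: resolving a dot-path like 'crew.run_id' against a nested meta object.
function getPath(obj: unknown, path: string): unknown {
  return path.split('.').reduce<unknown>((cur, key) => {
    if (cur !== null && typeof cur === 'object' && key in (cur as object)) {
      return (cur as Record<string, unknown>)[key];
    }
    // Any missing segment short-circuits to undefined.
    return undefined;
  }, obj);
}

const meta = { crew: { run_id: 'crew:investigation-42' }, session_id: 'session-abc' };
```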
The graph is one shared structure; segmentation is just queries.\n\n## Dynamic Resolution\n\nMemEX supports different levels of detail at every stage of the memory lifecycle:\n\n| Stage | Low resolution | High resolution |\n|-------|---------------|----------------|\n| **Retrieval** | High-authority items only, no inferred, fast | Include hypotheses, simulations, full provenance chains |\n| **Thinking** | Direct facts + deterministic derivations | Multi-hop reasoning, contradiction surfacing, support tree traversal |\n| **Insertion** | Store summaries, mark details as low-importance | Store atomic events with full `DERIVED_FROM` chains |\n\nResolution is controlled through the same primitives -- filters, score weights, and decay:\n\n```ts\n// low resolution: only trusted, recent items\ngetItems(state, {\n  range: { authority: { min: 0.7 } },\n  not: { or: [{ kind: \"hypothesis\" }, { kind: \"simulation\" }] },\n  decay: { config: { rate: 0.3, interval: \"day\", type: \"exponential\" }, min: 0.5 },\n});\n\n// high resolution: everything, scored and ranked\nsmartRetrieve(state, {\n  budget: 8192,\n  costFn: (item) =\u003e JSON.stringify(item.content).length,\n  weights: { authority: 0.3, conviction: 0.3, importance: 0.4 },\n  contradictions: \"surface\",\n});\n```\n\nThe agent decides resolution based on the task. A routine action uses low resolution. A decision with consequences uses high resolution. The same graph serves both -- no separate \"fast\" and \"deep\" memory stores.\n\n### Thinking Budget from Scores\n\nThe three scores can drive the thinking budget itself. Items that are important but uncertain deserve more processing. Items that have been processed should have their importance reduced.\n\n```text\nthinking_priority = importance * (1 - authority)\n```\n\nAn item with `importance: 0.9` and `authority: 0.3` gets priority `0.63` -- high attention, uncertain, worth reasoning about. 
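The arithmetic is simple enough to pin down in a standalone sketch of the formula above:

```ts
// thinking_priority = importance * (1 - authority):
// high importance plus low authority means 'matters and is uncertain'.
function thinkingPriority(importance: number, authority: number): number {
  return importance * (1 - authority);
}

// Important and uncertain: high priority (about 0.63).
const uncertain = thinkingPriority(0.9, 0.3);

// Important but already trusted: low priority (about 0.045).
const trusted = thinkingPriority(0.9, 0.95);
```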
An item with `importance: 0.9` and `authority: 0.95` gets priority `0.045` -- important but already trusted, just use it.\n\nAfter the agent processes an item, reduce its importance:\n\n```ts\napplyCommand(state, {\n  type: \"memory.update\",\n  item_id: processedItem.id,\n  partial: { importance: processedItem.importance * 0.3 },\n  author: \"system:thinker\",\n  reason: \"processed\",\n});\n```\n\nThis creates a natural attention cycle: new items arrive with high importance, get processed, importance drops, and they fade into long-term memory unless re-activated. Items that are never processed accumulate and eventually surface through importance-weighted queries.\n\nThe cognition layer above can use `getScoredItems` with importance-heavy weights to build its thinking queue, and `decayImportance` to age out items that were never worth processing.\n\n## Choosing Parameters\n\nThe library provides knobs. Here's how to think about turning them.\n\n### Decay\n\n| Scenario | Recommendation |\n|----------|---------------|\n| Chat context, ephemeral state | Fast decay: `{ rate: 0.3, interval: \"hour\", type: \"linear\" }` |\n| Project knowledge, working memory | Moderate decay: `{ rate: 0.1, interval: \"day\", type: \"exponential\" }` |\n| Policies, traits, identity | No decay — these don't become less true over time |\n| Mixed graph | Use the `decay` filter to exclude stale items, but don't decay items with `kind: \"policy\"` or `kind: \"trait\"` — filter them in separately with `or` |\n\n### Diversity penalties\n\n| Scenario | Recommendation |\n|----------|---------------|\n| Exploration (\"what do we know?\") | High `author_penalty` (0.3-0.5) — spread across sources |\n| Verification (\"is this true?\") | Low or zero `author_penalty` — you *want* correlated evidence |\n| Summarization | Moderate `parent_penalty` (0.2-0.3) — avoid redundant derivations |\n| Debugging / audit | Zero penalties — show everything |\n\n### Score weights\n\n| Scenario | Weights 
|\n|----------|---------|\n| High-trust retrieval | `{ authority: 0.8, importance: 0.2 }` |\n| Attention-driven (what needs processing?) | `{ importance: 0.8, authority: 0.2 }` |\n| Agent self-evaluation | `{ conviction: 0.5, authority: 0.5 }` |\n| Balanced | `{ authority: 0.4, conviction: 0.3, importance: 0.3 }` |\n\n### Contradiction handling\n\n| Scenario | Mode |\n|----------|------|\n| User-facing context (clean, no confusion) | `contradictions: \"filter\"` |\n| Agent reasoning (needs to see disagreement) | `contradictions: \"surface\"` |\n| Audit / debugging | Neither — use `getContradictions()` directly |\n\nThese are starting points, not prescriptions. Calibrate based on your use case.\n\nSee [API.md](./API.md) for the full API reference.\n\n## License\n\nApache 2.0 -- see [LICENSE](./LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-2070%2Fmemex","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fai-2070%2Fmemex","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-2070%2Fmemex/lists"}