{"id":48675655,"url":"https://github.com/Adaimade/R-Mem","last_synced_at":"2026-04-26T10:00:30.554Z","repository":{"id":349891071,"uuid":"1204380987","full_name":"Adaimade/R-Mem","owner":"Adaimade","description":"Lightweight Rust alternative to mem0. 3.2MB binary, SQLite        vector + graph memory, Ollama support.","archived":false,"fork":false,"pushed_at":"2026-04-08T02:57:37.000Z","size":60,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-08T03:08:43.005Z","etag":null,"topics":["ai-agent","llm","mem0","memory","ollama","rust","sqlite","vector-database"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Adaimade.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-08T00:44:24.000Z","updated_at":"2026-04-08T02:57:40.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/Adaimade/R-Mem","commit_stats":null,"previous_names":["adaimade/r-mem"],"tags_count":null,"template":false,"template_full_name":null,"purl":"pkg:github/Adaimade/R-Mem","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Adaimade%2FR-Mem","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Adaimade%2FR-Mem/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Adaimade%2FR-Mem/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Adaimade%2FR-Mem/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Adaimade","download_url":"https://codeload.github.com/Adaimade/R-Mem/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Adaimade%2FR-Mem/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32292958,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-26T09:34:17.070Z","status":"ssl_error","status_checked_at":"2026-04-26T09:34:00.993Z","response_time":129,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-agent","llm","mem0","memory","ollama","rust","sqlite","vector-database"],"created_at":"2026-04-10T15:00:19.934Z","updated_at":"2026-04-26T10:00:30.543Z","avatar_url":"https://github.com/Adaimade.png","language":"Rust","funding_links":[],"categories":["Applications"],"sub_categories":["Desktop"],"readme":"\u003cdiv align=\"center\"\u003e\n\n# 
# R-Mem

### Long-term memory for AI agents — in Rust

**A lightweight study of [mem0](https://github.com/mem0ai/mem0)'s memory architecture.**<br>
**Single binary. SQLite-backed. No Python.**

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Rust](https://img.shields.io/badge/Rust-1.75+-orange.svg)](https://www.rust-lang.org/)
[![Crates.io](https://img.shields.io/crates/v/rustmem.svg)](https://crates.io/crates/rustmem)
[![Built with Claude Code](https://img.shields.io/badge/Built%20with-Claude%20Code-blueviolet)](https://claude.ai)
[![Awesome SQLite](https://img.shields.io/badge/Awesome-SQLite-green.svg)](https://github.com/planetopendata/awesome-sqlite)

**3.6 MB binary** · **2,826 lines of Rust** · **< 10 MB RAM** · **SQLite only** · **MCP ready** · **LongMemEval 48.2%**

[Quick Start](#-quick-start) · [Integration](#-integration-guide) · [How It Works](#-how-it-works) · [Usage](#-usage) · [MCP](#-mcp-server) · [Performance](#-performance) · [Architecture](#-architecture) · [Roadmap](#-roadmap)

🌐 [繁體中文](docs/README.zh-TW.md) · [简体中文](docs/README.zh-CN.md) · [日本語](docs/README.ja.md) · [한국어](docs/README.ko.md)

</div>

> [!NOTE]
> This project reimplements [mem0](https://github.com/mem0ai/mem0)'s elegant memory architecture in Rust as a learning exercise. Full credit to the mem0 team for the original design. This is not a replacement — it's a study of their approach using a different language. Discussions, ideas, and contributions are welcome!

---

## Why R-Mem?

mem0 is a well-designed memory system with a rich plugin ecosystem. R-Mem asks a narrower question: *what if we rewrite just the core memory logic in Rust, backed entirely by SQLite?*

The result is the same three-tier architecture — **vector memory**, **graph memory**, **history** — plus a **tiered archive** system, in **2,826 lines of Rust**. No external services. One binary. The trade-off is clear: far fewer integrations, but near-zero operational overhead.

R-Mem was born out of [RustClaw](https://github.com/Adaimade/RustClaw) — our minimalist Rust AI agent framework. RustClaw needed a memory layer that matched its philosophy: single binary, zero external services.
So we studied mem0's architecture and rebuilt it in Rust.

<table>
<tr><td></td><td><strong>R-Mem</strong></td><td><strong>mem0</strong></td></tr>
<tr><td>📦 Binary</td><td>3.6 MB static</td><td>Python + pip (rich ecosystem)</td></tr>
<tr><td>💾 Idle RSS</td><td>&lt; 10 MB</td><td>200 MB+ (more features loaded)</td></tr>
<tr><td>📝 Code</td><td>2,826 lines</td><td>~91,500 lines (26+ store drivers)</td></tr>
<tr><td>🔍 Vector</td><td>SQLite + FTS5</td><td>Qdrant, Chroma, Pinecone, …</td></tr>
<tr><td>🕸️ Graph</td><td>SQLite only</td><td>Neo4j / Memgraph</td></tr>
<tr><td>🤖 LLM</td><td>OpenAI, Anthropic, Ollama</td><td>OpenAI, Anthropic, and more</td></tr>
<tr><td>🗄️ Archive</td><td>Tiered memory with fallback</td><td>—</td></tr>
</table>

> mem0's numbers reflect its richer ecosystem — more stores, more integrations, more flexibility. R-Mem intentionally trades that for a minimal footprint.

### What R-Mem adds beyond mem0

| Feature | R-Mem | mem0 |
|---|---|---|
| **Tiered Archive** | Deleted/updated memories preserved + fallback search | Gone when deleted |
| **FTS5 Pre-filter** | Two-stage search: keyword → vector (19x faster) | Vector-only |
| **MCP Server** | Built-in, `rustmem mcp` for Claude Code / Cursor | Not available |
| **Zero-dependency deploy** | Single binary, SQLite, no Docker | Python + pip + vector DB + graph DB |
| **Anthropic native** | Direct Claude API support | Via OpenAI-compatible proxy |
| **Configurable pipeline** | `[memory]` section: thresholds, limits, all tunable | Hardcoded defaults |
| **Memory categories** | Auto-classified: preference, personal, plan, professional, health | Unstructured |

---

## 🔍 How It Works

```
Input text
│
├─ 📦 Vector Memory ──────────────────────────────────
│    │
│    ├─ LLM extracts facts
│    │    → ["Name is Alice", "Works at Google"]
│    │
│    ├─ Embedding → cosine similarity search
│    │    (FTS5 pre-filter + vector ranking)
│    │
│    ├─ Integer ID mapping
│    │    (prevents LLM UUID hallucination)
│    │
│    ├─ LLM decides per fact:
│    │    ├─ ADD       new information
│    │    ├─ UPDATE    more specific → old version archived
│    │    ├─ DELETE    contradiction → old version archived
│    │    └─ NONE      duplicate — skip
│    │
│    └─ Execute actions + write history
│
├─ 🕸️ Graph Memory ──────────────────────────────────
│    │
│    ├─ LLM extracts entities + relations
│    ├─ Conflict detection (soft-delete old, add new)
│    └─ Multi-value vs single-value handling
│
└─ 🗄️ Archive ───────────────────────────────────────
     │
     ├─ Deleted/superseded memories preserved with embeddings
     ├─ Fallback search when active results are weak
     └─ Auto-compaction when archive exceeds threshold
```
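To make the decision step concrete, here is a minimal sketch of the ADD / UPDATE / DELETE / NONE dispatch from the diagram above. All types and names are illustrative stand-ins for this sketch, not R-Mem's actual internals:

```rust
/// The LLM's verdict for one extracted fact. Existing memories are
/// referenced by small integer IDs (not UUIDs), which keeps the model
/// from hallucinating identifiers.
enum MemoryAction {
    Add { text: String },
    Update { id: u32, new_text: String }, // more specific; old version archived
    Delete { id: u32 },                   // contradiction; old version archived
    None,                                 // duplicate, skip
}

/// Apply one action against a toy in-memory store. R-Mem performs the
/// same logic against SQLite and writes a history row for every change.
fn apply(action: MemoryAction, store: &mut Vec<(u32, String)>, archive: &mut Vec<String>) {
    match action {
        MemoryAction::Add { text } => {
            let id = store.last().map_or(0, |(i, _)| i + 1);
            store.push((id, text));
        }
        MemoryAction::Update { id, new_text } => {
            if let Some(entry) = store.iter_mut().find(|(i, _)| *i == id) {
                archive.push(entry.1.clone()); // superseded version is preserved
                entry.1 = new_text;
            }
        }
        MemoryAction::Delete { id } => {
            if let Some(pos) = store.iter().position(|(i, _)| *i == id) {
                archive.push(store.remove(pos).1); // archived, not destroyed
            }
        }
        MemoryAction::None => {} // nothing new to learn
    }
}
```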
---

## 🚀 Quick Start

### Prerequisites

| Requirement | Install |
|---|---|
| Rust 1.75+ | `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \| sh` |
| LLM backend | [Ollama](https://ollama.com), [OpenAI](https://platform.openai.com), or [Anthropic](https://console.anthropic.com) |

### Install

```bash
cargo install rustmem
```

Or build from source:

```bash
git clone https://github.com/Adaimade/R-Mem.git && cd R-Mem
cargo build --release
# → target/release/rustmem (3.6 MB)
```

### Configure

Create `rustmem.toml` in the project root:

<table>
<tr>
<td><strong>Ollama (local)</strong></td>
<td><strong>OpenAI</strong></td>
<td><strong>Anthropic</strong></td>
</tr>
<tr>
<td>

```toml
[llm]
provider = "openai"
base_url = "http://127.0.0.1:11434"
model = "qwen2.5:32b"

[embedding]
provider = "openai"
base_url = "http://127.0.0.1:11434"
model = "nomic-embed-text"
```

</td>
<td>

```toml
[llm]
provider = "openai"
api_key = "sk-..."
model = "gpt-4o"

[embedding]
provider = "openai"
api_key = "sk-..."
model = "text-embedding-3-small"
```

</td>
<td>

```toml
[llm]
provider = "anthropic"
api_key = "sk-ant-..."
model = "claude-sonnet-4-6"

[embedding]
provider = "openai"
api_key = "sk-..."
model = "text-embedding-3-small"
```

</td>
</tr>
</table>

> **Note:** Anthropic does not provide embedding models, so `[embedding]` uses OpenAI or Ollama even when `[llm]` uses Anthropic.

> **Security:** R-Mem binds to `127.0.0.1` by default (localhost only). Never put API keys in code — use `rustmem.toml` (gitignored) or environment variables (`RUSTMEM__LLM__API_KEY`).

---

## 🔗 Integration Guide

### ⚠️ Building a MemoryManager is not enough

The most common integration mistake: you initialize `MemoryManager` but never call `add()` or `search()` in your conversation loop. The memory system exists but is never used — nothing the user says gets remembered.

### The correct conversation loop

Every turn must include two memory operations:

1. **Before the LLM call — RECALL** (search relevant memories)
2. **After the LLM call — LEARN** (extract and store new facts)

```
loop {
    user_message = receive()

    // 1. RECALL — before calling the LLM
    memories = rmem.search(user_id, user_message, limit=10)
    context = format_as_context(memories)

    // 2. Call LLM with memory context
    response = llm.chat(system_prompt + context + user_message)

    // 3. LEARN — after responding
    rmem.add(user_id, user_message)

    send(response)
}
```

### Memory context format

Format `search()` results as context the LLM can understand:

```
[Memory]
Known facts about this user:
- User's name is Alice
- User prefers dark mode
- User is working on a Rust project
```

Place this in the system prompt or before the user message so the LLM can reference it.
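Putting the loop and the context format together, here is one turn as compilable Rust against the REST API described under [Usage](#-usage). This is a minimal sketch assuming the `reqwest` crate (with the `blocking` and `json` features) and `serde_json`; `call_llm` is a placeholder for your own model client, and no particular shape is assumed for the search response (it is passed through as raw JSON):

```rust
use serde_json::{json, Value};

// Placeholder for your own LLM client.
fn call_llm(prompt: &str) -> String {
    format!("(model reply to: {prompt})")
}

// One conversation turn: RECALL -> LLM -> LEARN, over `rustmem server`.
fn handle_turn(
    client: &reqwest::blocking::Client,
    user_id: &str,
    user_message: &str,
) -> Result<String, reqwest::Error> {
    // 1. RECALL: search relevant memories before calling the LLM.
    let memories: Value = client
        .post("http://localhost:8019/memories/search")
        .json(&json!({ "user_id": user_id, "query": user_message, "limit": 10 }))
        .send()?
        .json()?;

    // 2. Call the LLM with the memory context prepended.
    let context = format!("[Memory]\nKnown facts about this user:\n{memories}");
    let response = call_llm(&format!("{context}\n\nUser: {user_message}"));

    // 3. LEARN: store the user's message after responding.
    client
        .post("http://localhost:8019/memories/add")
        .json(&json!({ "user_id": user_id, "text": user_message }))
        .send()?;

    Ok(response)
}
```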
### Multi-scope pattern

If your app serves multiple channels (e.g. Telegram + Discord), use three scope layers:

| Scope | Purpose | Example ID |
|---|---|---|
| local | Single conversation / group | `telegram:group_123` |
| user | Cross-channel personal memory | `user:456` |
| global | Shared across all users | `global:system` |

Merge results at recall time:

```
local_results  = search("telegram:group_123", query)
user_results   = search("user:456", query)
global_results = search("global:system", query)
all = deduplicate(local + user + global)
```

### Common mistakes

- ❌ Initializing `MemoryManager` but never calling `search()` / `add()` in the loop
- ❌ Only LEARN without RECALL (memories stored but never retrieved)
- ❌ Only RECALL without LEARN (reads old memories but never learns new ones)
- ❌ Calling `add()` before the LLM call (the current message gets treated as a known fact)

---

## 📖 Usage

### CLI

```bash
# Add memories
rustmem add -u alice "My name is Alice and I work at Google. I love sushi."

# Semantic search
rustmem search -u alice "What does Alice eat?"

# List all memories for a user
rustmem list -u alice

# Show graph relations
rustmem graph -u alice

# Start REST API server
rustmem server
```

### REST API

Start with `rustmem server`, then:

```bash
# ➕ Add memory
curl -X POST http://localhost:8019/memories/add \
  -H 'Content-Type: application/json' \
  -d '{"user_id": "alice", "text": "I moved to Tokyo last month"}'

# 🔍 Search
curl -X POST http://localhost:8019/memories/search \
  -H 'Content-Type: application/json' \
  -d '{"user_id": "alice", "query": "where does she live", "limit": 5}'

# 📋 List all
curl 'http://localhost:8019/memories?user_id=alice'

# 🏷️ Filter by category (preference, personal, plan, professional, health, misc)
curl 'http://localhost:8019/memories?user_id=alice&category=preference'

# 🗑️ Delete
curl -X DELETE http://localhost:8019/memories/{id}

# 📜 History
curl http://localhost:8019/memories/{id}/history

# 🗄️ View archived memories
curl 'http://localhost:8019/archive?user_id=alice'

# 🕸️ View graph relations
curl 'http://localhost:8019/graph?user_id=alice'
```

### Drop-in for AI Agents

```python
# mem0 (before)
from mem0 import Memory
m = Memory()
m.add("Alice loves sushi", user_id="alice")

# R-Mem (after — just switch to HTTP)
import httpx
httpx.post("http://localhost:8019/memories/add",
    json={"user_id": "alice", "text": "Alice loves sushi"})
```

---

## 🔌 MCP Server

R-Mem works as an MCP server — give Claude Code or Cursor long-term memory with one command:

```bash
# Claude Code
claude mcp add rustmem -- /path/to/rustmem mcp
```

Cursor (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "rustmem": {
      "command": "/path/to/rustmem",
      "args": ["mcp"]
    }
  }
}
```

**7 tools available:** `add_memory`, `search_memory`, `list_memories`, `get_memory`, `delete_memory`, `get_graph`, `reset_memories`

---

## ⚡ Performance

Benchmarked on Apple Silicon with 10,000 memories (768-dim embeddings):

| Operation | Time | Notes |
|---|---|---|
| **Write** | 36 µs/record | 10K records in 360 ms |
| **Brute-force search** | 35.8 ms | Scans all 10K embeddings |
| **FTS5 + vector search** | **1.9 ms** | **19x faster** — pre-filters then re-ranks |
| **Concurrent reads** | 2.4 ms/thread | 10 threads, WAL mode, no blocking |
| **Storage** | 4.2 KB/memory | 10K memories ≈ 42 MB |

Run the benchmark yourself:

```bash
cargo bench --bench store_bench
```
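For intuition about why the two-stage lookup wins, here is what FTS5 pre-filtering plus cosine re-ranking looks like against SQLite. This is an illustrative sketch using the `rusqlite` crate (with its `bundled` feature, which ships FTS5); the schema, table names, candidate limit, and blob layout are assumptions for the example, not R-Mem's actual storage format:

```rust
// Two-stage search: cheap FTS5 keyword pre-filter, then exact cosine
// re-ranking over only the surviving candidates.
use rusqlite::{params, Connection, Result};

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn two_stage_search(
    conn: &Connection,
    keyword_query: &str,     // FTS5 MATCH expression built from the query text
    query_embedding: &[f32], // embedding of the search query
    limit: usize,
) -> Result<Vec<(i64, f32)>> {
    // Stage 1: FTS5 narrows 10K rows to a small candidate set (here: 200).
    let mut stmt = conn.prepare(
        "SELECT m.id, m.embedding FROM memories m
         JOIN memories_fts ON memories_fts.rowid = m.id
         WHERE memories_fts MATCH ?1 LIMIT 200",
    )?;
    let candidates = stmt.query_map(params![keyword_query], |row| {
        let id: i64 = row.get(0)?;
        let blob: Vec<u8> = row.get(1)?; // embedding stored as little-endian f32 bytes
        Ok((id, blob))
    })?;

    // Stage 2: exact cosine scoring, but only over the candidates.
    let mut scored: Vec<(i64, f32)> = Vec::new();
    for row in candidates {
        let (id, blob) = row?;
        let emb: Vec<f32> = blob
            .chunks_exact(4)
            .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
            .collect();
        scored.push((id, cosine(query_embedding, &emb)));
    }
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    scored.truncate(limit);
    Ok(scored)
}
```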
### LongMemEval

[LongMemEval](https://github.com/xiaowu0162/LongMemEval) (ICLR 2025) — 500 questions testing long-term memory across 5 capabilities:

| System | Score | Notes |
|---|---|---|
| agentmemory | 96.2% | RAG (stores raw text) |
| MemLayer | 94.4% | RAG (layered index) |
| Zep | 63.8% | RAG + summary |
| mem0 | ~49% | Fact extraction (gpt-4o) |
| **R-Mem** | **48.2%** | **Fact extraction (gpt-4o-mini)** |

> R-Mem nearly matches mem0 using a 20x cheaper model. The gap vs RAG systems is architectural — R-Mem extracts and deduplicates facts rather than storing raw text, which trades verbatim recall for efficient long-term knowledge management.

---

## 🏗️ Architecture

```
src/
├── main.rs          CLI entry point (clap)
├── config.rs        TOML + env var config
├── server.rs        REST API (axum)
├── mcp.rs           MCP server (rmcp) — 7 tools over stdio
├── memory.rs        Core orchestrator — tiered memory pipeline
├── extract.rs       LLM calls: OpenAI + Anthropic native
├── embedding.rs     OpenAI-compatible embedding client
├── store.rs         SQLite vector store (WAL + FTS5 + archive)
└── graph.rs         SQLite graph store (soft-delete, multi-value)
```

**9 files. 2,826 lines. 3.6 MB binary. Zero external services.**

---

## 🗺️ Roadmap

| Status | Feature | Description |
|---|---|---|
| ✅ | **Published on crates.io** | `cargo install rustmem` — one-line install |
| ✅ | **MCP Server** | `rustmem mcp` — 7 tools over stdio for Claude Code / Cursor |
| ✅ | **Tiered Archive** | Deleted/updated memories preserved + fallback search |
| ✅ | **FTS5 Two-Stage Search** | Keyword pre-filter + vector re-rank — 19x faster |
| ✅ | **Memory Categories** | Auto-classified: preference, personal, plan, professional, health |
| ✅ | **Anthropic Native** | Direct Claude API support (no proxy needed) |
| ✅ | **Agent SDK (lib crate)** | Use `rustmem::{memory, store, graph}` directly in your Rust code |
| ✅ | **LongMemEval Benchmark** | 48.2% with gpt-4o-mini, nearly matching mem0 (~49%) |
| ✅ | **Production Audit** | 11 security/stability fixes, 25 unit tests, cargo bench |
| 🔲 | **Episodic Memory** | Task execution history (tool calls, params, results) |
| 🔲 | **User Preference Model** | Cross-session user style and behavior modeling |
| 🔲 | **Skill Abstraction** | Auto-extract repeated successful patterns into skills |
| 🔲 | **Batch Import** | Load existing mem0 exports |
| 🔲 | **Multi-modal** | Image / audio memory support |
| 🔲 | **Dashboard** | Lightweight web UI for memory inspection |

R-Mem v0.3.0 is feature-complete as a learning project. The core architecture is stable and production-hardened. Community contributions, forks, and explorations are welcome — open an issue or PR.

---

<div align="center">

**MIT License** · v0.3.0

Created by [Ad Huang](https://github.com/Adaimade) with [Claude Code](https://claude.ai)

</div>