{"id":47300604,"url":"https://github.com/CortexReach/memory-lancedb-pro","last_synced_at":"2026-03-31T06:00:49.855Z","repository":{"id":340480273,"uuid":"1165507389","full_name":"CortexReach/memory-lancedb-pro","owner":"CortexReach","description":"Enhanced LanceDB memory plugin for OpenClaw — Hybrid Retrieval (Vector + BM25), Cross-Encoder Rerank, Multi-Scope Isolation, Management CLI","archived":false,"fork":false,"pushed_at":"2026-03-23T19:27:47.000Z","size":2084,"stargazers_count":3420,"open_issues_count":70,"forks_count":551,"subscribers_count":12,"default_branch":"master","last_synced_at":"2026-03-23T21:13:54.473Z","etag":null,"topics":["lancedb","memory","openclaw","openclaw-agent","openclaw-plugin","rag"],"latest_commit_sha":null,"homepage":"https://youtu.be/bhuGrjuCM_g","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CortexReach.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG-v1.1.0.md","contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-24T08:38:15.000Z","updated_at":"2026-03-23T19:27:51.000Z","dependencies_parsed_at":"2026-03-01T12:03:41.935Z","dependency_job_id":null,"html_url":"https://github.com/CortexReach/memory-lancedb-pro","commit_stats":null,"previous_names":["win4r/memory-lancedb-pro","cortexreach/memory-lancedb-pro"],"tags_count":39,"template":false,"template_full_name":null,"purl":"pkg:github/CortexReach/memory-lancedb-pro","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CortexReach%2Fmemory-lancedb-pro"
,"tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CortexReach%2Fmemory-lancedb-pro/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CortexReach%2Fmemory-lancedb-pro/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CortexReach%2Fmemory-lancedb-pro/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CortexReach","download_url":"https://codeload.github.com/CortexReach/memory-lancedb-pro/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CortexReach%2Fmemory-lancedb-pro/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31223291,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-31T04:08:55.938Z","status":"ssl_error","status_checked_at":"2026-03-31T04:08:47.883Z","response_time":111,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["lancedb","memory","openclaw","openclaw-agent","openclaw-plugin","rag"],"created_at":"2026-03-17T01:38:19.998Z","updated_at":"2026-03-31T06:00:49.841Z","avatar_url":"https://github.com/CortexReach.png","language":"TypeScript","readme":"\u003cdiv align=\"center\"\u003e\n\n# 🧠 memory-lancedb-pro · 🦞OpenClaw Plugin\n\n**AI Memory Assistant for [OpenClaw](https://github.com/openclaw/openclaw) Agents**\n\n*Give your AI agent a brain that actually remembers — across 
sessions, across agents, across time.*\n\nA LanceDB-backed OpenClaw memory plugin that stores preferences, decisions, and project context, then auto-recalls them in future sessions.\n\n[![OpenClaw Plugin](https://img.shields.io/badge/OpenClaw-Plugin-blue)](https://github.com/openclaw/openclaw)\n[![OpenClaw 2026.3+](https://img.shields.io/badge/OpenClaw-2026.3%2B-brightgreen)](https://github.com/openclaw/openclaw)\n[![npm version](https://img.shields.io/npm/v/memory-lancedb-pro)](https://www.npmjs.com/package/memory-lancedb-pro)\n[![LanceDB](https://img.shields.io/badge/LanceDB-Vectorstore-orange)](https://lancedb.com)\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)\n\n\u003ch2\u003e⚡ \u003ca href=\"https://github.com/CortexReach/memory-lancedb-pro/releases/tag/v1.1.0-beta.10\"\u003ev1.1.0-beta.10 — OpenClaw 2026.3+ Hook Adaptation\u003c/a\u003e\u003c/h2\u003e\n\n\u003cp\u003e\n  ✅ Fully adapted for OpenClaw 2026.3+ new plugin architecture\u003cbr\u003e\n  🔄 Uses \u003ccode\u003ebefore_prompt_build\u003c/code\u003e hooks (replacing deprecated \u003ccode\u003ebefore_agent_start\u003c/code\u003e)\u003cbr\u003e\n  🩺 Run \u003ccode\u003eopenclaw doctor --fix\u003c/code\u003e after upgrading\n\u003c/p\u003e\n\n[English](README.md) | [简体中文](README_CN.md) | [繁體中文](README_TW.md) | [日本語](README_JA.md) | [한국어](README_KO.md) | [Français](README_FR.md) | [Español](README_ES.md) | [Deutsch](README_DE.md) | [Italiano](README_IT.md) | [Русский](README_RU.md) | [Português (Brasil)](README_PT-BR.md)\n\n\u003c/div\u003e\n\n---\n\n## Why memory-lancedb-pro?\n\nMost AI agents have amnesia. They forget everything the moment you start a new chat.\n\n**memory-lancedb-pro** is a production-grade long-term memory plugin for OpenClaw that turns your agent into an **AI Memory Assistant** — it automatically captures what matters, lets noise naturally fade, and retrieves the right memory at the right time. 
No manual tagging, no configuration headaches.\n\n### Your AI Memory Assistant in Action\n\n**Without memory — every session starts from zero:**\n\n\u003e **You:** \"Use tabs for indentation, always add error handling.\"\n\u003e *(next session)*\n\u003e **You:** \"I already told you — tabs, not spaces!\" 😤\n\u003e *(next session)*\n\u003e **You:** \"...seriously, tabs. And error handling. Again.\"\n\n**With memory-lancedb-pro — your agent learns and remembers:**\n\n\u003e **You:** \"Use tabs for indentation, always add error handling.\"\n\u003e *(next session — agent auto-recalls your preferences)*\n\u003e **Agent:** *(silently applies tabs + error handling)* ✅\n\u003e **You:** \"Why did we pick PostgreSQL over MongoDB last month?\"\n\u003e **Agent:** \"Based on our discussion on Feb 12, the main reasons were...\" ✅\n\nThat's the difference an **AI Memory Assistant** makes — it learns your style, recalls past decisions, and delivers personalized responses without you repeating yourself.\n\n### What else can it do?\n\n| Feature | What you get |\n|---|---|\n| **Auto-Capture** | Your agent learns from every conversation — no manual `memory_store` needed |\n| **Smart Extraction** | LLM-powered 6-category classification: profiles, preferences, entities, events, cases, patterns |\n| **Intelligent Forgetting** | Weibull decay model — important memories stay, noise naturally fades away |\n| **Hybrid Retrieval** | Vector + BM25 full-text search, fused with cross-encoder reranking |\n| **Context Injection** | Relevant memories automatically surface before each reply |\n| **Multi-Scope Isolation** | Per-agent, per-user, per-project memory boundaries |\n| **Any Provider** | OpenAI, Jina, Gemini, Ollama, or any OpenAI-compatible API |\n| **Full Toolkit** | CLI, backup, migration, upgrade, export/import — production-ready |\n\n---\n\n## Quick Start\n\n### Option A: One-Click Install Script (Recommended)\n\nThe community-maintained **[setup 
script](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)** handles install, upgrade, and repair in one command:\n\n```bash\ncurl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh\nbash setup-memory.sh\n```\n\n\u003e See [Ecosystem](#ecosystem) below for the full list of scenarios the script covers and other community tools.\n\n### Option B: Manual Install\n\n**Via OpenClaw CLI (recommended):**\n```bash\nopenclaw plugins install memory-lancedb-pro@beta\n```\n\n**Or via npm:**\n```bash\nnpm i memory-lancedb-pro@beta\n```\n\u003e If using npm, you will also need to add the plugin's install directory as an **absolute** path in `plugins.load.paths` in your `openclaw.json`. This is the most common setup issue.\n\nAdd to your `openclaw.json`:\n\n```json\n{\n  \"plugins\": {\n    \"slots\": { \"memory\": \"memory-lancedb-pro\" },\n    \"entries\": {\n      \"memory-lancedb-pro\": {\n        \"enabled\": true,\n        \"config\": {\n          \"embedding\": {\n            \"provider\": \"openai-compatible\",\n            \"apiKey\": \"${OPENAI_API_KEY}\",\n            \"model\": \"text-embedding-3-small\"\n          },\n          \"autoCapture\": true,\n          \"autoRecall\": true,\n          \"smartExtraction\": true,\n          \"extractMinMessages\": 2,\n          \"extractMaxChars\": 8000,\n          \"sessionMemory\": { \"enabled\": false }\n        }\n      }\n    }\n  }\n}\n```\n\n**Why these defaults?**\n- `autoCapture` + `smartExtraction` → your agent learns from every conversation automatically\n- `autoRecall` → relevant memories are injected before each reply\n- `extractMinMessages: 2` → extraction triggers in normal two-turn chats\n- `sessionMemory.enabled: false` → avoids polluting retrieval with session summaries on day one\n\nValidate \u0026 restart:\n\n```bash\nopenclaw config validate\nopenclaw gateway restart\nopenclaw logs --follow --plain | grep 
\"memory-lancedb-pro\"\n```\n\nYou should see:\n- `memory-lancedb-pro: smart extraction enabled`\n- `memory-lancedb-pro@...: plugin registered`\n\nDone! Your agent now has long-term memory.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eMore installation paths (existing users, upgrades)\u003c/strong\u003e\u003c/summary\u003e\n\n**Already using OpenClaw?**\n\n1. Add the plugin with an **absolute** `plugins.load.paths` entry\n2. Bind the memory slot: `plugins.slots.memory = \"memory-lancedb-pro\"`\n3. Verify: `openclaw plugins info memory-lancedb-pro \u0026\u0026 openclaw memory-pro stats`\n\n**Upgrading from pre-v1.1.0?**\n\n```bash\n# 1) Backup\nopenclaw memory-pro export --scope global --output memories-backup.json\n# 2) Dry run\nopenclaw memory-pro upgrade --dry-run\n# 3) Run upgrade\nopenclaw memory-pro upgrade\n# 4) Verify\nopenclaw memory-pro stats\n```\n\nSee `CHANGELOG-v1.1.0.md` for behavior changes and upgrade rationale.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eTelegram Bot Quick Import (click to expand)\u003c/strong\u003e\u003c/summary\u003e\n\nIf you are using OpenClaw's Telegram integration, the easiest way is to send an import command directly to the main Bot instead of manually editing config.\n\nSend this message:\n\n```text\nHelp me connect this memory plugin with the most user-friendly configuration: https://github.com/CortexReach/memory-lancedb-pro\n\nRequirements:\n1. Set it as the only active memory plugin\n2. Use Jina for embedding\n3. Use Jina for reranker\n4. Use gpt-4o-mini for the smart-extraction LLM\n5. Enable autoCapture, autoRecall, smartExtraction\n6. extractMinMessages=2\n7. sessionMemory.enabled=false\n8. captureAssistant=false\n9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3\n10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62\n11. 
Generate the final openclaw.json config directly, not just an explanation\n```\n\n\u003c/details\u003e\n\n---\n\n## Ecosystem\n\nmemory-lancedb-pro is the core plugin. The community has built tools around it to make setup and daily use even smoother:\n\n### Setup Script — One-Click Install, Upgrade \u0026 Repair\n\n\u003e **[CortexReach/toolbox/memory-lancedb-pro-setup](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup)**\n\nNot just a simple installer — the script intelligently handles a wide range of real-world scenarios:\n\n| Your situation | What the script does |\n|---|---|\n| Never installed | Fresh download → install deps → pick config → write to openclaw.json → restart |\n| Installed via `git clone`, stuck on old commit | Auto `git fetch` + `checkout` to latest → reinstall deps → verify |\n| Config has invalid fields | Auto-detect via schema filter, remove unsupported fields |\n| Installed via `npm` | Skips git update, reminds you to run `npm update` yourself |\n| `openclaw` CLI broken due to invalid config | Fallback: read workspace path directly from `openclaw.json` file |\n| `extensions/` instead of `plugins/` | Auto-detect plugin location from config or filesystem |\n| Already up to date | Run health checks only, no changes |\n\n```bash\nbash setup-memory.sh                    # Install or upgrade\nbash setup-memory.sh --dry-run          # Preview only\nbash setup-memory.sh --beta             # Include pre-release versions\nbash setup-memory.sh --uninstall        # Revert config and remove plugin\n```\n\nBuilt-in provider presets: **Jina / DashScope / SiliconFlow / OpenAI / Ollama**, or bring your own OpenAI-compatible API. 
For full usage (including `--ref`, `--selfcheck-only`, and more), see the [setup script README](https://github.com/CortexReach/toolbox/tree/main/memory-lancedb-pro-setup).\n\n### Claude Code / OpenClaw Skill — AI-Guided Configuration\n\n\u003e **[CortexReach/memory-lancedb-pro-skill](https://github.com/CortexReach/memory-lancedb-pro-skill)**\n\nInstall this skill and your AI agent (Claude Code or OpenClaw) gains deep knowledge of every feature in memory-lancedb-pro. Just say **\"help me enable the best config\"** and get:\n\n- **Guided 7-step configuration workflow** with 4 deployment plans:\n  - Full Power (Jina + OpenAI) / Budget (free SiliconFlow reranker) / Simple (OpenAI only) / Fully Local (Ollama, zero API cost)\n- **All 9 MCP tools** used correctly: `memory_recall`, `memory_store`, `memory_forget`, `memory_update`, `memory_stats`, `memory_list`, `self_improvement_log`, `self_improvement_extract_skill`, `self_improvement_review` *(full toolset requires `enableManagementTools: true` — the default Quick Start config exposes the 4 core tools)*\n- **Common pitfall avoidance**: workspace plugin enablement, `autoRecall` default-false, jiti cache, env vars, scope isolation, and more\n\n**Install for Claude Code:**\n```bash\ngit clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.claude/skills/memory-lancedb-pro\n```\n\n**Install for OpenClaw:**\n```bash\ngit clone https://github.com/CortexReach/memory-lancedb-pro-skill.git ~/.openclaw/workspace/skills/memory-lancedb-pro-skill\n```\n\n---\n\n## Video Tutorial\n\n\u003e Full walkthrough: installation, configuration, and hybrid retrieval internals.\n\n[![YouTube Video](https://img.shields.io/badge/YouTube-Watch%20Now-red?style=for-the-badge\u0026logo=youtube)](https://youtu.be/MtukF1C8epQ)\n**https://youtu.be/MtukF1C8epQ**\n\n[![Bilibili 
Video](https://img.shields.io/badge/Bilibili-Watch%20Now-00A1D6?style=for-the-badge\u0026logo=bilibili\u0026logoColor=white)](https://www.bilibili.com/video/BV1zUf2BGEgn/)\n**https://www.bilibili.com/video/BV1zUf2BGEgn/**\n\n---\n\n## Architecture\n\n```\n┌─────────────────────────────────────────────────────────┐\n│                   index.ts (Entry Point)                │\n│  Plugin Registration · Config Parsing · Lifecycle Hooks │\n└────────┬──────────┬──────────┬──────────┬───────────────┘\n         │          │          │          │\n    ┌────▼───┐ ┌────▼───┐ ┌────▼────┐ ┌───▼─────────┐\n    │ store  │ │embedder│ │retriever│ │   scopes    │\n    │ .ts    │ │ .ts    │ │  .ts    │ │    .ts      │\n    └────────┘ └────────┘ └─────────┘ └─────────────┘\n         │                     │\n    ┌────▼───┐           ┌─────▼──────────┐\n    │migrate │           │noise-filter.ts │\n    │ .ts    │           │adaptive-       │\n    └────────┘           │retrieval.ts    │\n                         └────────────────┘\n    ┌─────────────┐   ┌──────────┐\n    │  tools.ts   │   │  cli.ts  │\n    │ (Agent API) │   │ (CLI)    │\n    └─────────────┘   └──────────┘\n```\n\n\u003e For a deep-dive into the full architecture, see [docs/memory_architecture_analysis.md](docs/memory_architecture_analysis.md).\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eFile Reference (click to expand)\u003c/strong\u003e\u003c/summary\u003e\n\n| File | Purpose |\n| --- | --- |\n| `index.ts` | Plugin entry point. Registers with OpenClaw Plugin API, parses config, mounts lifecycle hooks via `api.on()` and command hooks via `api.registerHook()` |\n| `openclaw.plugin.json` | Plugin metadata + full JSON Schema config declaration |\n| `cli.ts` | CLI commands: `memory-pro list/search/stats/delete/delete-bulk/export/import/reembed/upgrade/migrate` |\n| `src/store.ts` | LanceDB storage layer. 
Table creation / FTS indexing / Vector search / BM25 search / CRUD |\n| `src/embedder.ts` | Embedding abstraction. Compatible with any OpenAI-compatible API provider |\n| `src/retriever.ts` | Hybrid retrieval engine. Vector + BM25 → Hybrid Fusion → Rerank → Lifecycle Decay → Filter |\n| `src/scopes.ts` | Multi-scope access control |\n| `src/tools.ts` | Agent tool definitions: `memory_recall`, `memory_store`, `memory_forget`, `memory_update` + management tools |\n| `src/noise-filter.ts` | Filters out agent refusals, meta-questions, greetings, and low-quality content |\n| `src/adaptive-retrieval.ts` | Determines whether a query needs memory retrieval |\n| `src/migrate.ts` | Migration from built-in `memory-lancedb` to Pro |\n| `src/smart-extractor.ts` | LLM-powered 6-category extraction with L0/L1/L2 layered storage and two-stage dedup |\n| `src/decay-engine.ts` | Weibull stretched-exponential decay model |\n| `src/tier-manager.ts` | Three-tier promotion/demotion: Peripheral ↔ Working ↔ Core |\n\n\u003c/details\u003e\n\n---\n\n## Core Features\n\n### Hybrid Retrieval\n\n```\nQuery → embedQuery() ─┐\n                       ├─→ Hybrid Fusion → Rerank → Lifecycle Decay Boost → Length Norm → Filter\nQuery → BM25 FTS ─────┘\n```\n\n- **Vector Search** — semantic similarity via LanceDB ANN (cosine distance)\n- **BM25 Full-Text Search** — exact keyword matching via LanceDB FTS index\n- **Hybrid Fusion** — vector score as base, BM25 hits receive a weighted boost (not standard RRF — tuned for real-world recall quality)\n- **Configurable Weights** — `vectorWeight`, `bm25Weight`, `minScore`\n\n### Cross-Encoder Reranking\n\n- Built-in adapters for **Jina**, **SiliconFlow**, **Voyage AI**, and **Pinecone**\n- Compatible with any Jina-compatible endpoint (e.g., Hugging Face TEI, DashScope)\n- Hybrid scoring: 60% cross-encoder + 40% original fused score\n- Graceful degradation: falls back to cosine similarity on API failure\n\n### Multi-Stage Scoring Pipeline\n\n| Stage | Effect 
|\n| --- | --- |\n| **Hybrid Fusion** | Combines semantic and exact-match recall |\n| **Cross-Encoder Rerank** | Promotes semantically precise hits |\n| **Lifecycle Decay Boost** | Weibull freshness + access frequency + importance × confidence |\n| **Length Normalization** | Prevents long entries from dominating (anchor: 500 chars) |\n| **Hard Min Score** | Removes irrelevant results (default: 0.35) |\n| **MMR Diversity** | Cosine similarity \u003e 0.85 → demoted |\n\n### Smart Memory Extraction (v1.1.0)\n\n- **LLM-Powered 6-Category Extraction**: profile, preferences, entities, events, cases, patterns\n- **L0/L1/L2 Layered Storage**: L0 (one-sentence index) → L1 (structured summary) → L2 (full narrative)\n- **Two-Stage Dedup**: vector similarity pre-filter (≥0.7) → LLM semantic decision (CREATE/MERGE/SKIP)\n- **Category-Aware Merge**: `profile` always merges, `events`/`cases` are append-only\n\n### Memory Lifecycle Management (v1.1.0)\n\n- **Weibull Decay Engine**: composite score = recency + frequency + intrinsic value\n- **Three-Tier Promotion**: `Peripheral ↔ Working ↔ Core` with configurable thresholds\n- **Access Reinforcement**: frequently recalled memories decay slower (spaced-repetition style)\n- **Importance-Modulated Half-Life**: important memories decay slower\n\n### Multi-Scope Isolation\n\n- Built-in scopes: `global`, `agent:\u003cid\u003e`, `custom:\u003cname\u003e`, `project:\u003cid\u003e`, `user:\u003cid\u003e`\n- Agent-level access control via `scopes.agentAccess`\n- Default: each agent accesses `global` + its own `agent:\u003cid\u003e` scope\n\n### Auto-Capture \u0026 Auto-Recall\n\n- **Auto-Capture** (`agent_end`): extracts preference/fact/decision/entity from conversations, deduplicates, stores up to 3 per turn\n- **Auto-Recall** (`before_prompt_build`): injects `\u003crelevant-memories\u003e` context (up to 3 entries)\n\n\u003e **Note (v1.1.0-beta.9+):** Auto-recall now uses the `before_prompt_build` hook instead of the deprecated 
`before_agent_start`. See [Hook Adaptation](#hook-adaptation-openclaw-20263) below for details.\n\n### Noise Filtering \u0026 Adaptive Retrieval\n\n- Filters low-quality content: agent refusals, meta-questions, greetings\n- Skips retrieval for greetings, slash commands, simple confirmations, emoji\n- Forces retrieval for memory keywords (\"remember\", \"previously\", \"last time\")\n- CJK-aware thresholds (Chinese: 6 chars vs English: 15 chars)\n\n---\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eCompared to Built-in \u003ccode\u003ememory-lancedb\u003c/code\u003e (click to expand)\u003c/strong\u003e\u003c/summary\u003e\n\n| Feature | Built-in `memory-lancedb` | **memory-lancedb-pro** |\n| --- | :---: | :---: |\n| Vector search | Yes | Yes |\n| BM25 full-text search | - | Yes |\n| Hybrid fusion (Vector + BM25) | - | Yes |\n| Cross-encoder rerank (multi-provider) | - | Yes |\n| Recency boost \u0026 time decay | - | Yes |\n| Length normalization | - | Yes |\n| MMR diversity | - | Yes |\n| Multi-scope isolation | - | Yes |\n| Noise filtering | - | Yes |\n| Adaptive retrieval | - | Yes |\n| Management CLI | - | Yes |\n| Session memory | - | Yes |\n| Task-aware embeddings | - | Yes |\n| **LLM Smart Extraction (6-category)** | - | Yes (v1.1.0) |\n| **Weibull Decay + Tier Promotion** | - | Yes (v1.1.0) |\n| Any OpenAI-compatible embedding | Limited | Yes |\n\n\u003c/details\u003e\n\n---\n\n## Configuration\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eFull Configuration Example\u003c/strong\u003e\u003c/summary\u003e\n\n```json\n{\n  \"embedding\": {\n    \"apiKey\": \"${JINA_API_KEY}\",\n    \"model\": \"jina-embeddings-v5-text-small\",\n    \"baseURL\": \"https://api.jina.ai/v1\",\n    \"dimensions\": 1024,\n    \"taskQuery\": \"retrieval.query\",\n    \"taskPassage\": \"retrieval.passage\",\n    \"normalized\": true\n  },\n  \"dbPath\": \"~/.openclaw/memory/lancedb-pro\",\n  \"autoCapture\": true,\n  \"autoRecall\": true,\n  \"retrieval\": 
{\n    \"mode\": \"hybrid\",\n    \"vectorWeight\": 0.7,\n    \"bm25Weight\": 0.3,\n    \"minScore\": 0.3,\n    \"rerank\": \"cross-encoder\",\n    \"rerankApiKey\": \"${JINA_API_KEY}\",\n    \"rerankModel\": \"jina-reranker-v3\",\n    \"rerankEndpoint\": \"https://api.jina.ai/v1/rerank\",\n    \"rerankProvider\": \"jina\",\n    \"candidatePoolSize\": 20,\n    \"recencyHalfLifeDays\": 14,\n    \"recencyWeight\": 0.1,\n    \"filterNoise\": true,\n    \"lengthNormAnchor\": 500,\n    \"hardMinScore\": 0.35,\n    \"timeDecayHalfLifeDays\": 60,\n    \"reinforcementFactor\": 0.5,\n    \"maxHalfLifeMultiplier\": 3\n  },\n  \"enableManagementTools\": false,\n  \"scopes\": {\n    \"default\": \"global\",\n    \"definitions\": {\n      \"global\": { \"description\": \"Shared knowledge\" },\n      \"agent:discord-bot\": { \"description\": \"Discord bot private\" }\n    },\n    \"agentAccess\": {\n      \"discord-bot\": [\"global\", \"agent:discord-bot\"]\n    }\n  },\n  \"sessionMemory\": {\n    \"enabled\": false,\n    \"messageCount\": 15\n  },\n  \"smartExtraction\": true,\n  \"llm\": {\n    \"apiKey\": \"${OPENAI_API_KEY}\",\n    \"model\": \"gpt-4o-mini\",\n    \"baseURL\": \"https://api.openai.com/v1\"\n  },\n  \"extractMinMessages\": 2,\n  \"extractMaxChars\": 8000\n}\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eEmbedding Providers\u003c/strong\u003e\u003c/summary\u003e\n\nWorks with **any OpenAI-compatible embedding API**:\n\n| Provider | Model | Base URL | Dimensions |\n| --- | --- | --- | --- |\n| **Jina** (recommended) | `jina-embeddings-v5-text-small` | `https://api.jina.ai/v1` | 1024 |\n| **OpenAI** | `text-embedding-3-small` | `https://api.openai.com/v1` | 1536 |\n| **Voyage** | `voyage-4-lite` / `voyage-4` | `https://api.voyageai.com/v1` | 1024 / 1024 |\n| **Google Gemini** | `gemini-embedding-001` | `https://generativelanguage.googleapis.com/v1beta/openai/` | 3072 |\n| **Ollama** (local) | `nomic-embed-text` | 
`http://localhost:11434/v1` | provider-specific |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eRerank Providers\u003c/strong\u003e\u003c/summary\u003e\n\nCross-encoder reranking supports multiple providers via `rerankProvider`:\n\n| Provider | `rerankProvider` | Example Model |\n| --- | --- | --- |\n| **Jina** (default) | `jina` | `jina-reranker-v3` |\n| **SiliconFlow** (free tier available) | `siliconflow` | `BAAI/bge-reranker-v2-m3` |\n| **Voyage AI** | `voyage` | `rerank-2.5` |\n| **Pinecone** | `pinecone` | `bge-reranker-v2-m3` |\n\nAny Jina-compatible rerank endpoint also works — set `rerankProvider: \"jina\"` and point `rerankEndpoint` to your service (e.g., Hugging Face TEI, DashScope `qwen3-rerank`).\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eSmart Extraction (LLM) — v1.1.0\u003c/strong\u003e\u003c/summary\u003e\n\nWhen `smartExtraction` is enabled (default: `true`), the plugin uses an LLM to intelligently extract and classify memories instead of regex-based triggers.\n\n| Field | Type | Default | Description |\n|-------|------|---------|-------------|\n| `smartExtraction` | boolean | `true` | Enable/disable LLM-powered 6-category extraction |\n| `llm.auth` | string | `api-key` | `api-key` uses `llm.apiKey` / `embedding.apiKey`; `oauth` uses a plugin-scoped OAuth token file by default |\n| `llm.apiKey` | string | *(falls back to `embedding.apiKey`)* | API key for the LLM provider |\n| `llm.model` | string | `openai/gpt-oss-120b` | LLM model name |\n| `llm.baseURL` | string | *(falls back to `embedding.baseURL`)* | LLM API endpoint |\n| `llm.oauthProvider` | string | `openai-codex` | OAuth provider id used when `llm.auth` is `oauth` |\n| `llm.oauthPath` | string | `~/.openclaw/.memory-lancedb-pro/oauth.json` | OAuth token file used when `llm.auth` is `oauth` |\n| `llm.timeoutMs` | number | `30000` | LLM request timeout in milliseconds |\n| `extractMinMessages` | number | `2` | 
Minimum messages before extraction triggers |\n| `extractMaxChars` | number | `8000` | Maximum characters sent to the LLM |\n\n\nOAuth `llm` config (use existing Codex / ChatGPT login cache for LLM calls):\n```json\n{\n  \"llm\": {\n    \"auth\": \"oauth\",\n    \"oauthProvider\": \"openai-codex\",\n    \"model\": \"gpt-5.4\",\n    \"oauthPath\": \"${HOME}/.openclaw/.memory-lancedb-pro/oauth.json\",\n    \"timeoutMs\": 30000\n  }\n}\n```\n\nNotes for `llm.auth: \"oauth\"`:\n\n- `llm.oauthProvider` is currently `openai-codex`.\n- OAuth tokens default to `~/.openclaw/.memory-lancedb-pro/oauth.json`.\n- You can set `llm.oauthPath` if you want to store that file somewhere else.\n- `auth login` snapshots the previous api-key `llm` config next to the OAuth file, and `auth logout` restores that snapshot when available.\n- Switching from `api-key` to `oauth` does not automatically carry over `llm.baseURL`. Set it manually in OAuth mode only when you intentionally want a custom ChatGPT/Codex-compatible backend.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eLifecycle Configuration (Decay + Tier)\u003c/strong\u003e\u003c/summary\u003e\n\n| Field | Default | Description |\n|-------|---------|-------------|\n| `decay.recencyHalfLifeDays` | `30` | Base half-life for Weibull recency decay |\n| `decay.frequencyWeight` | `0.3` | Weight of access frequency in composite score |\n| `decay.intrinsicWeight` | `0.3` | Weight of `importance × confidence` |\n| `decay.betaCore` | `0.8` | Weibull beta for `core` memories |\n| `decay.betaWorking` | `1.0` | Weibull beta for `working` memories |\n| `decay.betaPeripheral` | `1.3` | Weibull beta for `peripheral` memories |\n| `tier.coreAccessThreshold` | `10` | Min recall count before promoting to `core` |\n| `tier.peripheralAgeDays` | `60` | Age threshold for demoting stale memories |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAccess 
Reinforcement\u003c/strong\u003e\u003c/summary\u003e\n\nFrequently recalled memories decay more slowly (spaced-repetition style).\n\nConfig keys (under `retrieval`):\n- `reinforcementFactor` (0-2, default: `0.5`) — set `0` to disable\n- `maxHalfLifeMultiplier` (1-10, default: `3`) — hard cap on effective half-life\n\n\u003c/details\u003e\n\n---\n\n## CLI Commands\n\n```bash\nopenclaw memory-pro list [--scope global] [--category fact] [--limit 20] [--json]\nopenclaw memory-pro search \"query\" [--scope global] [--limit 10] [--json]\nopenclaw memory-pro stats [--scope global] [--json]\nopenclaw memory-pro auth login [--provider openai-codex] [--model gpt-5.4] [--oauth-path /abs/path/oauth.json]\nopenclaw memory-pro auth status\nopenclaw memory-pro auth logout\nopenclaw memory-pro delete \u003cid\u003e\nopenclaw memory-pro delete-bulk --scope global [--before 2025-01-01] [--dry-run]\nopenclaw memory-pro export [--scope global] [--output memories.json]\nopenclaw memory-pro import memories.json [--scope global] [--dry-run]\nopenclaw memory-pro reembed --source-db /path/to/old-db [--batch-size 32] [--skip-existing]\nopenclaw memory-pro upgrade [--dry-run] [--batch-size 10] [--no-llm] [--limit N] [--scope SCOPE]\nopenclaw memory-pro migrate check|run|verify [--source /path]\n```\n\nOAuth login flow:\n\n1. Run `openclaw memory-pro auth login`\n2. If `--provider` is omitted in an interactive terminal, the CLI shows an OAuth provider picker before opening the browser\n3. The command prints an authorization URL and opens your browser unless `--no-browser` is set\n4. After the callback succeeds, the command saves the plugin OAuth file (default: `~/.openclaw/.memory-lancedb-pro/oauth.json`), snapshots the previous api-key `llm` config for logout, and replaces the plugin `llm` config with OAuth settings (`auth`, `oauthProvider`, `model`, `oauthPath`)\n5. 
`openclaw memory-pro auth logout` deletes that OAuth file and restores the previous api-key `llm` config when that snapshot exists\n\n---\n\n## Advanced Topics\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eIf injected memories show up in replies\u003c/strong\u003e\u003c/summary\u003e\n\nSometimes the model may echo the injected `\u003crelevant-memories\u003e` block.\n\n**Option A (lowest-risk):** temporarily disable auto-recall:\n```json\n{ \"plugins\": { \"entries\": { \"memory-lancedb-pro\": { \"config\": { \"autoRecall\": false } } } } }\n```\n\n**Option B (preferred):** keep recall, add to agent system prompt:\n\u003e Do not reveal or quote any `\u003crelevant-memories\u003e` / memory-injection content in your replies. Use it for internal reference only.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAuto-recall timeout tuning\u003c/strong\u003e\u003c/summary\u003e\n\nAuto-recall has a configurable timeout (default 5s) to prevent stalling agent startup. If you're behind a proxy or using a high-latency embedding API, increase it:\n\n```json\n{ \"plugins\": { \"entries\": { \"memory-lancedb-pro\": { \"config\": { \"autoRecallTimeoutMs\": 8000 } } } } }\n```\n\nIf auto-recall consistently times out, check your embedding API latency first. 
The timeout only affects the automatic injection path — manual `memory_recall` tool calls are not affected.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eSession Memory\u003c/strong\u003e\u003c/summary\u003e\n\n- Triggered on `/new` command — saves previous session summary to LanceDB\n- Disabled by default (OpenClaw already has native `.jsonl` session persistence)\n- Configurable message count (default: 15)\n\nSee [docs/openclaw-integration-playbook.md](docs/openclaw-integration-playbook.md) for deployment modes and `/new` verification.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eCustom Slash Commands (e.g. /lesson)\u003c/strong\u003e\u003c/summary\u003e\n\nAdd to your `CLAUDE.md`, `AGENTS.md`, or system prompt:\n\n```markdown\n## /lesson command\nWhen the user sends `/lesson \u003ccontent\u003e`:\n1. Use memory_store to save as category=fact (raw knowledge)\n2. Use memory_store to save as category=decision (actionable takeaway)\n3. Confirm what was saved\n\n## /remember command\nWhen the user sends `/remember \u003ccontent\u003e`:\n1. Use memory_store to save with appropriate category and importance\n2. Confirm with the stored memory ID\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eIron Rules for AI Agents\u003c/strong\u003e\u003c/summary\u003e\n\n\u003e Copy the block below into your `AGENTS.md` so your agent enforces these rules automatically.\n\n```markdown\n## Rule 1 — Dual-layer memory storage\nEvery pitfall/lesson learned → IMMEDIATELY store TWO memories:\n- Technical layer: Pitfall: [symptom]. Cause: [root cause]. Fix: [solution]. Prevention: [how to avoid]\n  (category: fact, importance \u003e= 0.8)\n- Principle layer: Decision principle ([tag]): [behavioral rule]. Trigger: [when]. Action: [what to do]\n  (category: decision, importance \u003e= 0.85)\n\n## Rule 2 — LanceDB hygiene\nEntries must be short and atomic (\u003c 500 chars). 
No raw conversation summaries or duplicates.\n\n## Rule 3 — Recall before retry\nOn ANY tool failure, ALWAYS memory_recall with relevant keywords BEFORE retrying.\n\n## Rule 4 — Confirm target codebase\nConfirm you are editing memory-lancedb-pro vs built-in memory-lancedb before changes.\n\n## Rule 5 — Clear jiti cache after plugin code changes\nAfter modifying .ts files under plugins/, MUST run rm -rf /tmp/jiti/ BEFORE openclaw gateway restart.\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDatabase Schema\u003c/strong\u003e\u003c/summary\u003e\n\nLanceDB table `memories`:\n\n| Field | Type | Description |\n| --- | --- | --- |\n| `id` | string (UUID) | Primary key |\n| `text` | string | Memory text (FTS indexed) |\n| `vector` | float[] | Embedding vector |\n| `category` | string | Storage category: `preference` / `fact` / `decision` / `entity` / `reflection` / `other` |\n| `scope` | string | Scope identifier (e.g., `global`, `agent:main`) |\n| `importance` | float | Importance score 0-1 |\n| `timestamp` | int64 | Creation timestamp (ms) |\n| `metadata` | string (JSON) | Extended metadata |\n\nCommon `metadata` keys in v1.1.0: `l0_abstract`, `l1_overview`, `l2_content`, `memory_category`, `tier`, `access_count`, `confidence`, `last_accessed_at`\n\n\u003e **Note on categories:** The top-level `category` field uses 6 storage categories. The 6-category semantic labels from Smart Extraction (`profile` / `preferences` / `entities` / `events` / `cases` / `patterns`) are stored in `metadata.memory_category`.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eTroubleshooting\u003c/strong\u003e\u003c/summary\u003e\n\n### \"Cannot mix BigInt and other types\" (LanceDB / Apache Arrow)\n\nOn LanceDB 0.26+, some numeric columns may be returned as `BigInt`. 
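The failure mode is easy to reproduce in isolation — JavaScript refuses mixed `BigInt`/`number` arithmetic (a standalone illustration, not plugin code):

```typescript
// Standalone illustration: mixing BigInt and number throws at runtime.
const timestamp: bigint = 1735689600000n; // e.g. a timestamp column decoded as BigInt

let threw = false;
try {
  // @ts-expect-error — intentionally mixing types to show the runtime failure
  const bad = Date.now() - timestamp; // TypeError: Cannot mix BigInt and other types
  void bad;
} catch {
  threw = true; // the TypeError above lands here
}

// The fix: coerce to number before arithmetic.
const ageMs = Date.now() - Number(timestamp);
```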
Upgrade to **memory-lancedb-pro \u003e= 1.0.14** — this plugin now coerces values using `Number(...)` before arithmetic.\n\n\u003c/details\u003e\n\n---\n\n## Hook Adaptation (OpenClaw 2026.3+)\n\nStarting with v1.1.0-beta.9, the plugin's lifecycle hooks have been updated for compatibility with the refactored OpenClaw plugin system.\n\n### What changed\n\n| Hook | Before | After | Why |\n|------|--------|-------|-----|\n| Auto-recall | `before_agent_start` | `before_prompt_build` (priority 10) | `before_agent_start` is deprecated; `before_prompt_build` is the recommended hook for prompt mutation |\n| Reflection invariants | `before_agent_start` | `before_prompt_build` (priority 12) | Same reason as above |\n| Reflection derived focus | `before_prompt_build` | `before_prompt_build` (priority 15) | Unchanged event, added explicit priority |\n| All other lifecycle hooks | unchanged | unchanged | `agent_end`, `after_tool_call`, `session_end`, `message_received`, `before_message_write` |\n\n### Hook API distinction\n\nOpenClaw exposes two hook registration methods. They write to **different registries**:\n\n| Method | Registry | Dispatch | Use for |\n|--------|----------|----------|---------|\n| `api.on(event, handler, opts)` | `registry.typedHooks` | Dispatched by the lifecycle hook runner | Lifecycle events: `before_prompt_build`, `agent_end`, `after_tool_call`, `session_end`, `message_received`, `before_message_write` |\n| `api.registerHook(event, handler, opts)` | `registry.hooks` | Dispatched by the internal hook system | Command/bootstrap events: `command:new`, `command:reset`, `agent:bootstrap` |\n\nUsing the wrong method causes hooks to register silently without firing. 
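The two-registry pitfall can be shown with a toy stub — only the method names, event names, and registry names come from the table above; the `PluginApi` stub itself is hypothetical:

```typescript
type Handler = (ctx: unknown) => void | Promise<void>;

// Hypothetical stub mirroring the two registries described above.
class PluginApi {
  typedHooks = new Map<string, Handler[]>(); // written by api.on()
  hooks = new Map<string, Handler[]>();      // written by api.registerHook()

  on(event: string, handler: Handler, _opts?: { priority?: number }) {
    const list = this.typedHooks.get(event) ?? [];
    list.push(handler);
    this.typedHooks.set(event, list);
  }

  registerHook(event: string, handler: Handler) {
    const list = this.hooks.get(event) ?? [];
    list.push(handler);
    this.hooks.set(event, list);
  }
}

const api = new PluginApi();
api.on("before_prompt_build", () => {}, { priority: 10 }); // lifecycle → typedHooks
api.registerHook("command:new", () => {});                 // command → hooks

// A lifecycle handler registered via registerHook() would sit in `hooks`,
// which the lifecycle runner never dispatches — registered, but never fired.
```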
This plugin uses `api.on()` for all lifecycle hooks and `api.registerHook()` for command hooks.\n\n### Verifying hooks after install\n\n```bash\nopenclaw plugins info memory-lancedb-pro\n```\n\nYou should see:\n\n```\nLegacy before_agent_start: no\n\nTyped hooks:\n  agent_end\n  before_message_write\n  before_prompt_build (priority 10)\n  message_received\n\nCustom hooks:\n  memory-lancedb-pro-session-memory: command:new\n```\n\nIf `Legacy before_agent_start: yes` appears, you are running an older version of the plugin.\n\n### Migration from older versions\n\nIf you are upgrading from v1.1.0-beta.8 or earlier:\n\n1. Replace the plugin files (copy or `openclaw plugins install`)\n2. Clear the jiti cache: `rm -rf /tmp/jiti/`\n3. Restart the gateway: `openclaw gateway restart`\n4. Verify: `openclaw plugins info memory-lancedb-pro` should show `Legacy before_agent_start: no`\n\nNo config changes or data migration required. All existing memories, scopes, and settings are preserved.\n\n### OpenClaw version requirements\n\n- **Minimum:** OpenClaw 2026.3.22\n- **Recommended:** OpenClaw latest (2026.3.23+)\n\nThis version uses `before_prompt_build` hooks (replacing the deprecated `before_agent_start`), which requires OpenClaw 2026.3.22 or later. Running `openclaw doctor --fix` after upgrading will automatically migrate plugin config (e.g. 
`minimax-portal-auth` → `minimax`, Brave search as a standalone plugin).\n\nTo upgrade OpenClaw:\n\n```bash\nnpm update -g openclaw\nopenclaw --version    # verify \u003e= 2026.3.22\nopenclaw doctor --fix # resolve any stale config after upgrade\n```\n\n---\n\n## Documentation\n\n| Document | Description |\n| --- | --- |\n| [OpenClaw Integration Playbook](docs/openclaw-integration-playbook.md) | Deployment modes, verification, regression matrix |\n| [Memory Architecture Analysis](docs/memory_architecture_analysis.md) | Full architecture deep-dive |\n| [CHANGELOG v1.1.0](docs/CHANGELOG-v1.1.0.md) | v1.1.0 behavior changes and upgrade rationale |\n| [Long-Context Chunking](docs/long-context-chunking.md) | Chunking strategy for long documents |\n\n---\n\n## Beta: Smart Memory v1.1.0\n\n\u003e Status: Beta — available via `npm i memory-lancedb-pro@beta`. Stable users on `latest` are not affected.\n\n| Feature | Description |\n|---------|-------------|\n| **Smart Extraction** | LLM-powered 6-category extraction with L0/L1/L2 metadata. Falls back to regex when disabled. |\n| **Lifecycle Scoring** | Weibull decay integrated into retrieval — high-frequency and high-importance memories rank higher. |\n| **Tier Management** | Three-tier system (Core → Working → Peripheral) with automatic promotion/demotion. 
|\n\nFeedback: [GitHub Issues](https://github.com/CortexReach/memory-lancedb-pro/issues) · Revert: `npm i memory-lancedb-pro@latest`\n\n---\n\n## Dependencies\n\n| Package | Purpose |\n| --- | --- |\n| `@lancedb/lancedb` ≥0.26.2 | Vector database (ANN + FTS) |\n| `openai` ≥6.21.0 | OpenAI-compatible Embedding API client |\n| `@sinclair/typebox` 0.34.48 | JSON Schema type definitions |\n\n---\n\n## Contributors\n\n\u003cp\u003e\n\u003ca href=\"https://github.com/win4r\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/42172631?v=4\" width=\"48\" height=\"48\" alt=\"@win4r\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/kctony\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/1731141?v=4\" width=\"48\" height=\"48\" alt=\"@kctony\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/Akatsuki-Ryu\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/8062209?v=4\" width=\"48\" height=\"48\" alt=\"@Akatsuki-Ryu\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/JasonSuz\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/612256?v=4\" width=\"48\" height=\"48\" alt=\"@JasonSuz\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/Minidoracat\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/11269639?v=4\" width=\"48\" height=\"48\" alt=\"@Minidoracat\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/furedericca-lab\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/263020793?v=4\" width=\"48\" height=\"48\" alt=\"@furedericca-lab\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/joe2643\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/19421931?v=4\" width=\"48\" height=\"48\" alt=\"@joe2643\" /\u003e\u003c/a\u003e\n\u003ca href=\"https://github.com/AliceLJY\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/136287420?v=4\" width=\"48\" height=\"48\" alt=\"@AliceLJY\" /\u003e\u003c/a\u003e\n\u003ca 
href=\"https://github.com/chenjiyong\"\u003e\u003cimg src=\"https://avatars.githubusercontent.com/u/8199522?v=4\" width=\"48\" height=\"48\" alt=\"@chenjiyong\" /\u003e\u003c/a\u003e\n\u003c/p\u003e\n\nFull list: [Contributors](https://github.com/CortexReach/memory-lancedb-pro/graphs/contributors)\n\n## Star History\n\n\u003ca href=\"https://star-history.com/#CortexReach/memory-lancedb-pro\u0026Date\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://api.star-history.com/svg?repos=CortexReach/memory-lancedb-pro\u0026type=Date\u0026theme=dark\u0026transparent=true\" /\u003e\n    \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"https://api.star-history.com/svg?repos=CortexReach/memory-lancedb-pro\u0026type=Date\u0026transparent=true\" /\u003e\n    \u003cimg alt=\"Star History Chart\" src=\"https://api.star-history.com/svg?repos=CortexReach/memory-lancedb-pro\u0026type=Date\u0026transparent=true\" /\u003e\n  \u003c/picture\u003e\n\u003c/a\u003e\n\n## License\n\nMIT\n\n---\n\n## My WeChat QR Code\n\n\u003cimg src=\"https://github.com/win4r/AISuperDomain/assets/42172631/7568cf78-c8ba-4182-aa96-d524d903f2bc\" width=\"214.8\" height=\"291\"\u003e\n","funding_links":[],"categories":["TypeScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCortexReach%2Fmemory-lancedb-pro","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FCortexReach%2Fmemory-lancedb-pro","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCortexReach%2Fmemory-lancedb-pro/lists"}