{"id":37704962,"url":"https://github.com/memfuse/memfuse","last_synced_at":"2026-01-16T13:07:30.088Z","repository":{"id":295523558,"uuid":"990154043","full_name":"memfuse/memfuse","owner":"memfuse","description":"Official Core Services for MemFuse - the lightning-fast open-source memory layer that gives LLMs persistent, queryable memory across conversations and sessions.","archived":false,"fork":false,"pushed_at":"2025-09-24T09:29:05.000Z","size":2458,"stargazers_count":18,"open_issues_count":0,"forks_count":2,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-09-24T11:36:40.794Z","etag":null,"topics":["ai","artificial-intelligence","chatbot","conversation-memory","llm","machine-learning","memory","openai","persistent-memory","python-sdk","rag","vector-database"],"latest_commit_sha":null,"homepage":"https://memfuse.vercel.app/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/memfuse.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-05-25T16:01:39.000Z","updated_at":"2025-08-24T15:49:52.000Z","dependencies_parsed_at":"2025-05-26T01:43:42.928Z","dependency_job_id":"92a90046-f07e-4563-9309-88da1b75522f","html_url":"https://github.com/memfuse/memfuse","commit_stats":null,"previous_names":["memfuse/memfuse"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/memfuse/memfuse","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/memfuse%2Fmemfuse","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/memfuse%2Fmemfuse/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/memfuse%2Fmemfuse/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/memfuse%2Fmemfuse/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/memfuse","download_url":"https://codeload.github.com/memfuse/memfuse/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/memfuse%2Fmemfuse/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28478922,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-16T11:59:17.896Z","status":"ssl_error","status_checked_at":"2026-01-16T11:55:55.838Z","response_time":107,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","artificial-intelligence","chatbot","conversation-memory","llm","machine-learning","memory","openai","persistent-memory","python-sdk","rag","vector-database"],"created_at":"2026-01-16T13:07:28.188Z","updated_at":"2026-01-16T13:07:30.079Z","avatar_url":"https://github.com/memfuse.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"\u003ca id=\"readme-top\"\u003e\u003c/a\u003e\n\n[![GitHub 
license](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/memfuse/memfuse/blob/main/LICENSE)\n\n\u003c!-- PROJECT LOGO --\u003e\n\u003cbr /\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://memfuse.vercel.app/\"\u003e\n    \u003cimg src=\"docs/assets/logo.png\" alt=\"MemFuse Logo\"\n         style=\"max-width: 90%; height: auto; display: block; margin: 0 auto; padding-left: 16px; padding-right: 16px;\"\u003e\n  \u003c/a\u003e\n  \u003cbr /\u003e\n  \u003cbr /\u003e\n\n  \u003cp align=\"center\"\u003e\n    \u003cstrong\u003eMemFuse Core Services\u003c/strong\u003e\n    \u003cbr /\u003e\n    The official core services for MemFuse, the open-source memory layer for LLMs.\n    \u003cbr /\u003e\n    \u003ca href=\"https://memfuse.vercel.app/\"\u003e\u003cstrong\u003eExplore the Docs »\u003c/strong\u003e\u003c/a\u003e\n    \u003cbr /\u003e\n    \u003cbr /\u003e\n    \u003ca href=\"https://memfuse.vercel.app/\"\u003eView Demo\u003c/a\u003e\n    \u0026middot;\n    \u003ca href=\"https://github.com/memfuse/memfuse/issues\"\u003eReport Bug\u003c/a\u003e\n    \u0026middot;\n    \u003ca href=\"https://github.com/memfuse/memfuse/issues\"\u003eRequest Feature\u003c/a\u003e\n  \u003c/p\u003e\n\u003c/div\u003e\n\n\u003c!-- TABLE OF CONTENTS --\u003e\n\u003cdetails\u003e\n  \u003csummary\u003eTable of Contents\u003c/summary\u003e\n  \u003col\u003e\n    \u003cli\u003e\n      \u003ca href=\"#why-memfuse\"\u003eWhy MemFuse?\u003c/a\u003e\n    \u003c/li\u003e\n    \u003cli\u003e\n      \u003ca href=\"#key-features\"\u003eKey Features\u003c/a\u003e\n    \u003c/li\u003e\n    \u003cli\u003e\u003ca href=\"#quick-start\"\u003eQuick Start\u003c/a\u003e\u003c/li\u003e\n    \u003cli\u003e\u003ca href=\"#documentation\"\u003eDocumentation\u003c/a\u003e\u003c/li\u003e\n    \u003cli\u003e\u003ca href=\"#roadmap\"\u003eRoadmap\u003c/a\u003e\u003c/li\u003e\n    \u003cli\u003e\u003ca href=\"#community-support\"\u003eCommunity \u0026 
Support\u003c/a\u003e\u003c/li\u003e\n  \u003c/ol\u003e\n\u003c/details\u003e\n\n## Why MemFuse?\n\nLarge language model applications are stateless by design.\nWhen the context window reaches its limit, previous conversations, user preferences, and critical information simply disappear.\n\n**MemFuse** bridges this gap by providing a persistent, queryable memory layer between your LLM and storage backend, enabling AI agents to:\n\n- **Remember** user preferences and context across sessions\n- **Recall** facts and events from thousands of interactions later\n- **Optimize** token usage by avoiding redundant resending of chat history\n- **Learn** continuously and improve performance over time\n\nThis repository contains the official server core services for seamless integration with the MemFuse Client SDK. For comprehensive information about the MemFuse Client's features, please visit the [MemFuse Client Python SDK](https://github.com/memfuse/memfuse-python).\n\n## ✨ Key Features\n\n| Category                          | What you get                                                                                                                      |\n| --------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |\n| **Lightning Fast**                | Efficient buffering with write aggregation, intelligent prefetching, and query caching for exceptional performance                |\n| **Unified Cognitive Search**      | Seamlessly combines vector, graph, and keyword search with intelligent fusion and re-ranking for superior accuracy and insights   |\n| **Cognitive Memory Architecture** | Human-inspired layered memory system: M0 (raw data/episodic), M1 (structured facts/semantic), and M2 (knowledge graph/conceptual) |\n| **Local-First**                   | Run the server locally or deploy with Docker — no mandatory cloud dependencies or fees                         
                   |\n| **Pluggable Backends**            | Built on TimescaleDB with custom pgai implementation, compatible with Qdrant, Neo4j, Redis, and expanding backend support        |\n| **Multi-Tenant Support**          | Secure isolation between users, agents, and sessions with robust scoping and access controls                                      |\n| **Framework-Friendly**            | Seamless integration with LangChain, AutoGen, Vercel AI SDK, and direct OpenAI/Anthropic/Gemini/Ollama API calls                  |\n| **Production-Ready Testing**      | Comprehensive layered testing framework (smoke, integration, e2e, performance) ensuring reliability at scale                      |\n| **Apache 2.0 Licensed**           | Fully open source — fork, extend, customize, and deploy as you need                                                               |\n\n---\n\n## 🚀 Quick Start\n\n### Installation\n\n\u003e **Note**: This repository contains the **MemFuse Core Server**. If you need to know more about the standalone Python SDK for client applications, please visit the [MemFuse Client Python SDK](https://github.com/memfuse/memfuse-python).\n\n#### Setting Up the MemFuse Server\n\nTo set up the MemFuse server locally:\n\n1.  Clone this repository:\n\n    ```bash\n    git clone https://github.com/memfuse/memfuse.git\n    cd memfuse\n    ```\n\n2.  
Install dependencies and run the server using one of the following methods:\n\n    **Using Poetry (Recommended)**\n\n    ```bash\n    # Ensure the Docker daemon is running first\n    docker --version  # Verify Docker is available\n    \n    poetry install\n    # The TimescaleDB database will start automatically via Docker\n    poetry run python scripts/memfuse_launcher.py\n    ```\n\n    **Using pip**\n\n    ```bash\n    # Ensure the Docker daemon is running first\n    docker --version  # Verify Docker is available\n    \n    pip install -e .\n    # The TimescaleDB database will start automatically via Docker\n    python -m memfuse_core\n    ```\n\n#### Installing the Client SDK\n\nTo use MemFuse in your applications, simply install the Python SDK from PyPI:\n\n```bash\npip install memfuse\n```\n\n### Database Requirements\n\nMemFuse uses **TimescaleDB** as its primary database backend with a custom pgai-like implementation for advanced vector operations and automatic embedding generation.\n\n**Prerequisites:**\n- **Docker daemon must be running** (required for database startup)\n- Docker and Docker Compose installed\n- **TimescaleDB**: Provided automatically via the `timescale/timescaledb-ha:pg17` Docker image\n- **pgvector**: Vector similarity search (included in the TimescaleDB image)\n- **Custom pgai**: Event-driven embedding system (built-in, no external extension needed)\n\n\u003e **⚠️ Important**: Make sure the Docker daemon is running before starting MemFuse. 
The service automatically starts the TimescaleDB container.\n\n\u003e **Note**: MemFuse implements its own pgai-like functionality and does **not** require TimescaleDB's official pgai extension.\n\nFor detailed installation instructions, configuration options, and troubleshooting tips, see the online [Installation Guide](https://memfuse.vercel.app/docs/installation).\n\n### Basic Usage\n\nHere's a comprehensive example demonstrating how to use the MemFuse Python SDK with OpenAI to interact with the MemFuse server:\n\n```python\nfrom memfuse.llm import OpenAI\nfrom memfuse import MemFuse\nimport os\n\nmemfuse_client = MemFuse(\n  # api_key=os.getenv(\"MEMFUSE_API_KEY\")\n  # base_url=os.getenv(\"MEMFUSE_BASE_URL\"),\n)\n\nmemory = memfuse_client.init(\n  user=\"alice\",\n  # agent=\"agent_default\",\n  # session=\u003crandomly-generated-uuid\u003e\n)\n\n# Initialize your LLM client with the memory scope\nllm_client = OpenAI(\n    api_key=os.getenv(\"OPENAI_API_KEY\"),  # Your OpenAI API key\n    memory=memory\n)\n\n# Make a chat completion request\nresponse = llm_client.chat.completions.create(\n    model=\"gpt-4o\", # Or any model supported by your LLM provider\n    messages=[{\"role\": \"user\", \"content\": \"I'm planning a trip to Mars. What is the gravity there?\"}]\n)\n\nprint(f\"Response: {response.choices[0].message.content}\")\n# Example Output: Response: Mars has a gravity of about 3.721 m/s², which is about 38% of Earth's gravity.\n```\n\n### Contextual Follow-up\n\nNow, ask a follow-up question. MemFuse will automatically recall relevant context from the previous conversation:\n\n```python\n# Ask a follow-up question. 
MemFuse automatically recalls relevant context.\nfollowup_response = llm_client.chat.completions.create(\n    model=\"gpt-4o\",\n    messages=[{\"role\": \"user\", \"content\": \"What are some challenges of living on that planet?\"}]\n)\n\nprint(f\"Follow-up: {followup_response.choices[0].message.content}\")\n# Example Output: Follow-up: Some challenges of living on Mars include its thin atmosphere, extreme temperatures, high radiation levels, and the lack of liquid water on the surface.\n```\n\n🔥 **That's it!** Every subsequent call under the same scope automatically stores notable facts to memory and retrieves them when relevant.\n\n### Database Management\n\nMemFuse provides comprehensive database management tools:\n\n```bash\n# Check database status and extensions\npoetry run python scripts/database_manager.py status\n\n# Reset data (keep schema) \npoetry run python scripts/database_manager.py reset\n\n# Validate schema and triggers\npoetry run python scripts/database_manager.py validate\n\n# Complete schema rebuild (⚠️ destroys data)\npoetry run python scripts/database_manager.py recreate\n```\n\nFor detailed database management, see the [Scripts Documentation](scripts/README.md).\n\n---\n\n## 📚 Documentation\n\n- **[Installation Guide](https://memfuse.vercel.app/docs/installation)**: Comprehensive instructions for installing and configuring MemFuse\n- **[Getting Started](https://memfuse.vercel.app/docs/quickstart)**: Step-by-step guide to integrating MemFuse into your projects\n- **[Examples](https://github.com/memfuse/memfuse-python/tree/main/examples)**: Sample implementations for chatbots, autonomous agents, customer support, LangChain integration, and more\n- **[Testing Guide](tests/TEST.md)**: Practical guide for running tests with different configurations\n\n---\n\n## 🛣 Roadmap\n\n### 📦 Phase 1 – MVP (\"Fast \u0026 Transparent Core\")\n\n- [x] **Lightning-fast performance** — Efficient buffering with write aggregation, intelligent prefetching, and query 
caching\n- [x] **Level 0 Memory Layer** — Raw chat history storage and retrieval\n- [x] **Multi-tenant support** — Secure user, agent, and session isolation\n- [x] **Level 1 Memory Layer** — Semantic and episodic memory processing\n- [x] **TimescaleDB Integration** — Production-ready database backend with custom pgai implementation\n- [x] **Enhanced Memory Architecture** — M0 (raw), M1 (episodic), M2 (semantic), M3 (procedural), MSMG (graph) layers\n- [x] **Complete REST API** — Users, Agents, Sessions, Messages, and Knowledge endpoints\n- [x] **Re-ranking plugin** — LLM-powered memory relevance scoring\n- [x] **Python SDK** — Complete client library for Python applications\n- [x] **Benchmarks** — LongMemEval and MSC evaluation frameworks\n\n### 🧭 Phase 2 – Temporal Mastery \u0026 Quality\n\n- [ ] **JavaScript SDK** — Client library for Node.js and browser applications\n- [ ] **Multimodal memory support** — Image, audio, and video memory capabilities\n- [ ] **Level 2 KG memory support** — Knowledge graph-based conceptual memory\n- [ ] **Time-decay policies** — Automatic forgetting of stale information\n\n💡 **Have an idea?** Open an issue or participate in our discussion board!\n\n## 🤝 Community \u0026 Support\n\n- **GitHub Discussions**: Participate in roadmap votes, RFCs, and Q\u0026A sessions\n- **Issues**: Report bugs and request new features\n- **Documentation**: Comprehensive guides and API references\n\nIf MemFuse enhances your projects, please ⭐ **star the repository** — it helps the project grow and reach more developers!\n\n## License\n\nThis MemFuse Server repository is licensed under the Apache 2.0 License. See the [LICENSE](LICENSE) file for complete details.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmemfuse%2Fmemfuse","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmemfuse%2Fmemfuse","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmemfuse%2Fmemfuse/lists"}