{"id":44237091,"url":"https://github.com/agentscope-ai/reme","last_synced_at":"2026-02-19T19:00:44.389Z","repository":{"id":255288087,"uuid":"849121458","full_name":"agentscope-ai/ReMe","owner":"agentscope-ai","description":"ReMe: Memory Management Kit for Agents - Remember Me, Refine Me.","archived":false,"fork":false,"pushed_at":"2026-02-17T04:13:26.000Z","size":35424,"stargazers_count":978,"open_issues_count":25,"forks_count":96,"subscribers_count":9,"default_branch":"main","last_synced_at":"2026-02-17T10:54:09.285Z","etag":null,"topics":["agent","ai-agents","memory","memoryscope","rag","reme"],"latest_commit_sha":null,"homepage":"https://reme.agentscope.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/agentscope-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-08-29T02:40:53.000Z","updated_at":"2026-02-17T03:22:08.000Z","dependencies_parsed_at":"2024-12-14T08:02:47.739Z","dependency_job_id":"08443ba3-ced8-4813-a8f8-9d0649bf0103","html_url":"https://github.com/agentscope-ai/ReMe","commit_stats":null,"previous_names":["modelscope/memoryscope","modelscope/reme","agentscope-ai/reme"],"tags_count":27,"template":false,"template_full_name":null,"purl":"pkg:github/agentscope-ai/ReMe","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/agentscope-ai%2FReMe","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/agentscope-ai%2FReMe/tags","releases_url":"https://
repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/agentscope-ai%2FReMe/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/agentscope-ai%2FReMe/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/agentscope-ai","download_url":"https://codeload.github.com/agentscope-ai/ReMe/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/agentscope-ai%2FReMe/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29627664,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-19T18:02:07.722Z","status":"ssl_error","status_checked_at":"2026-02-19T18:01:46.144Z","response_time":117,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","ai-agents","memory","memoryscope","rag","reme"],"created_at":"2026-02-10T10:08:15.983Z","updated_at":"2026-02-19T19:00:44.376Z","avatar_url":"https://github.com/agentscope-ai.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n \u003cimg src=\"docs/_static/figure/reme_logo.png\" alt=\"ReMe Logo\" width=\"50%\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://pypi.org/project/reme-ai/\"\u003e\u003cimg src=\"https://img.shields.io/badge/python-3.10+-blue\" alt=\"Python Version\"\u003e\u003c/a\u003e\n  \u003ca 
href=\"https://pypi.org/project/reme-ai/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/v/reme-ai.svg?logo=pypi\" alt=\"PyPI Version\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://pepy.tech/project/reme-ai/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/dm/reme-ai\" alt=\"PyPI Downloads\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/agentscope-ai/ReMe\"\u003e\u003cimg src=\"https://img.shields.io/github/commit-activity/m/agentscope-ai/ReMe?style=flat-square\" alt=\"GitHub commit activity\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"./LICENSE\"\u003e\u003cimg src=\"https://img.shields.io/badge/license-Apache--2.0-black\" alt=\"License\"\u003e\u003c/a\u003e\n  \u003ca href=\"./README.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/English-Click-yellow\" alt=\"English\"\u003e\u003c/a\u003e\n  \u003ca href=\"./README_ZH.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/简体中文-点击查看-orange\" alt=\"简体中文\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/agentscope-ai/ReMe\"\u003e\u003cimg src=\"https://img.shields.io/github/stars/agentscope-ai/ReMe?style=social\" alt=\"GitHub Stars\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003eMemory Management Kit for Agents, Remember Me, Refine Me.\u003c/strong\u003e\u003cbr\u003e\n  \u003cem\u003e\u003csub\u003eIf you find it useful, please give us a ⭐ Star.\u003c/sub\u003e\u003c/em\u003e\n\u003c/p\u003e\n\n---\n\nReMe is a **modular memory management kit** that provides AI agents with unified memory capabilities, enabling them to extract, reuse, and share memories across users, tasks, and agents.\nAgent memory can be viewed as:\n\n```text\nAgent Memory = Long-Term Memory + Short-Term Memory\n             = (Personal + Task + Tool) Memory + (Working Memory)\n```\n\n- **Personal Memory**: Understand user preferences and adapt to context\n- **Task Memory**: Learn from experience and perform better on 
similar tasks\n- **Tool Memory**: Optimize tool selection and parameter usage based on historical performance\n- **Working Memory**: Manage short-term context for long-running agents without context overflow\n\n---\n\n## 📰 Latest Updates\n\n- **[2026-02]** 💻 ReMeCli: A terminal-based AI chat assistant with built-in memory management. Automatically compacts long conversations into summaries to free up context space, and persists important information as Markdown files for retrieval in future sessions. Memory design inspired by [OpenClaw](https://github.com/openclaw/openclaw).\n  - [Quick Start](docs/cli/quick_start_en.md)\n  - Type `/horse` to trigger the Year of the Horse Easter egg: fireworks, a galloping horse animation, and a random blessing.\n\u003ctable border=\"0\" cellspacing=\"0\" cellpadding=\"0\" style=\"border: none;\"\u003e\n  \u003ctr style=\"border: none;\"\u003e\n    \u003ctd width=\"10%\" style=\"border: none; vertical-align: middle; text-align: center;\"\u003e\n      \u003cstrong\u003e马\u003cbr\u003e上\u003cbr\u003e有\u003cbr\u003e钱\u003c/strong\u003e\n    \u003c/td\u003e\n    \u003ctd width=\"80%\" style=\"border: none;\"\u003e\n      \u003cvideo src=\"https://github.com/user-attachments/assets/d731ae5c-80eb-498b-a22c-8ab2b9169f87\" autoplay muted loop controls\u003e\u003c/video\u003e\n    \u003c/td\u003e\n    \u003ctd width=\"10%\" style=\"border: none; vertical-align: middle; text-align: center;\"\u003e\n      \u003cstrong\u003e马\u003cbr\u003e到\u003cbr\u003e成\u003cbr\u003e功\u003c/strong\u003e\n    \u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n- **[2025-12]** 📄 Our procedural (task) memory paper has been released on [arXiv](https://arxiv.org/abs/2512.10696)\n- **[2025-11]** 🧠 ReAct agent with working-memory demo ([Intro](docs/work_memory/message_offload.md), [Quick Start](docs/cookbook/working/quick_start.md), [Code](cookbook/working_memory/work_memory_demo.py))\n- **[2025-10]** 🚀 Direct Python import support: use `from 
reme_ai import ReMeApp` without HTTP/MCP service\n- **[2025-10]** 🔧 Tool Memory: data-driven tool selection and parameter optimization ([Guide](docs/tool_memory/tool_memory.md))\n- **[2025-09]** 🎉 Async operations support, integrated into agentscope-runtime\n- **[2025-09]** 🎉 Task memory and personal memory integration\n- **[2025-09]** 🧪 Validated effectiveness on AppWorld, BFCL (v3), and FrozenLake ([Experiments](docs/cookbook))\n- **[2025-08]** 🚀 MCP protocol support ([Quick Start](docs/mcp_quick_start.md))\n- **[2025-06]** 🚀 Multiple vector storage backends (Elasticsearch \u0026 ChromaDB) ([Guide](docs/vector_store_api_guide.md))\n- **[2024-09]** 🧠 Personalized and time-aware memory storage\n\n---\n\n## ✨ Architecture Design\n\n\u003cp align=\"center\"\u003e\n \u003cimg src=\"docs/_static/figure/reme_structure.jpg\" alt=\"ReMe Architecture\" width=\"80%\"\u003e\n\u003c/p\u003e\n\nReMe provides a **modular memory management kit** with pluggable components that can be integrated into any agent framework. 
The system consists of:\n\n#### 🧠 **Task Memory/Experience**\n\nProcedural knowledge reused across agents\n\n- **Success Pattern Recognition**: Identify effective strategies and understand their underlying principles\n- **Failure Analysis Learning**: Learn from mistakes and avoid repeating the same issues\n- **Comparative Patterns**: Comparing differently sampled trajectories yields more valuable memories\n- **Validation Patterns**: Confirm the effectiveness of extracted memories through validation modules\n\nLearn more in the [task memory guide](docs/task_memory/task_memory.md).\n\n#### 👤 **Personal Memory**\n\nContextualized memory for specific users\n\n- **Individual Preferences**: User habits, preferences, and interaction styles\n- **Contextual Adaptation**: Intelligent memory management based on time and context\n- **Progressive Learning**: Gradually build deep understanding through long-term interaction\n- **Time Awareness**: Time sensitivity in both retrieval and integration\n\nLearn more in the [personal memory guide](docs/personal_memory/personal_memory.md).\n\n#### 🔧 **Tool Memory**\n\nData-driven tool selection and usage optimization\n\n- **Historical Performance Tracking**: Success rates, execution times, and token costs from real usage\n- **LLM-as-Judge Evaluation**: Qualitative insights on why tools succeed or fail\n- **Parameter Optimization**: Learn optimal parameter configurations from successful calls\n- **Dynamic Guidelines**: Transform static tool descriptions into living, learned manuals\n\nLearn more in the [tool memory guide](docs/tool_memory/tool_memory.md).\n\n#### 🧠 Working Memory\n\nShort‑term contextual memory for long‑running agents via **message offload \u0026 reload**:\n\n- **Message Offload**: Compact large tool outputs to external files or LLM summaries\n- **Message Reload**: Search (`grep_working_memory`) and read (`read_working_memory`) offloaded content on 
demand\n\n📖 **Concept \u0026 API**:\n- Message offload overview: [Message Offload](docs/work_memory/message_offload.md)\n- Offload / reload operators: [Message Offload Ops](docs/work_memory/message_offload_ops.md), [Message Reload Ops](docs/work_memory/message_reload_ops.md)\n\n💻 **End‑to‑End Demo**:\n- Working memory quick start: [Working Memory Quick Start](docs/cookbook/working/quick_start.md)\n- ReAct agent with working memory: [react_agent_with_working_memory.py](cookbook/working_memory/react_agent_with_working_memory.py)\n- Runnable demo: [work_memory_demo.py](cookbook/working_memory/work_memory_demo.py)\n\n---\n\n## 🛠️ Installation\n\n### Install from PyPI (Recommended)\n\n```bash\npip install reme-ai\n```\n\n### Install from Source\n\n```bash\ngit clone https://github.com/agentscope-ai/ReMe.git\ncd ReMe\npip install .\n```\n\n### Environment Configuration\n\nReMe requires LLM and embedding model configurations. Copy `example.env` to `.env` and configure:\n\n```bash\nFLOW_LLM_API_KEY=sk-xxxx\nFLOW_LLM_BASE_URL=https://xxxx/v1\nFLOW_EMBEDDING_API_KEY=sk-xxxx\nFLOW_EMBEDDING_BASE_URL=https://xxxx/v1\n```\n\n---\n\n## 🚀 Quick Start\n\n### HTTP Service Startup\n\n```bash\nreme \\\n  backend=http \\\n  http.port=8002 \\\n  llm.default.model_name=qwen3-30b-a3b-thinking-2507 \\\n  embedding_model.default.model_name=text-embedding-v4 \\\n  vector_store.default.backend=local\n```\n\n### MCP Server Support\n\n```bash\nreme \\\n  backend=mcp \\\n  mcp.transport=stdio \\\n  llm.default.model_name=qwen3-30b-a3b-thinking-2507 \\\n  embedding_model.default.model_name=text-embedding-v4 \\\n  vector_store.default.backend=local\n```\n\n### Core API Usage\n\n#### Task Memory Management\n\n```python\nimport requests\n\n# Experience Summarizer: Learn from execution trajectories\nresponse = requests.post(\"http://localhost:8002/summary_task_memory\", json={\n    \"workspace_id\": \"task_workspace\",\n    \"trajectories\": [\n        {\"messages\": [{\"role\": \"user\", \"content\": 
\"Help me create a project plan\"}], \"score\": 1.0}\n    ]\n})\n\n# Retriever: Get relevant memories\nresponse = requests.post(\"http://localhost:8002/retrieve_task_memory\", json={\n    \"workspace_id\": \"task_workspace\",\n    \"query\": \"How to efficiently manage project progress?\",\n    \"top_k\": 1\n})\n```\n\n\u003cdetails\u003e\n\u003csummary\u003ePython import version\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom reme_ai import ReMeApp\n\nasync def main():\n    async with ReMeApp(\n        \"llm.default.model_name=qwen3-30b-a3b-thinking-2507\",\n        \"embedding_model.default.model_name=text-embedding-v4\",\n        \"vector_store.default.backend=memory\"\n    ) as app:\n        # Experience Summarizer: Learn from execution trajectories\n        result = await app.async_execute(\n            name=\"summary_task_memory\",\n            workspace_id=\"task_workspace\",\n            trajectories=[\n                {\n                    \"messages\": [\n                        {\"role\": \"user\", \"content\": \"Help me create a project plan\"}\n                    ],\n                    \"score\": 1.0\n                }\n            ]\n        )\n        print(result)\n\n        # Retriever: Get relevant memories\n        result = await app.async_execute(\n            name=\"retrieve_task_memory\",\n            workspace_id=\"task_workspace\",\n            query=\"How to efficiently manage project progress?\",\n            top_k=1\n        )\n        print(result)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003ecurl version\u003c/summary\u003e\n\n```bash\n# Experience Summarizer: Learn from execution trajectories\ncurl -X POST http://localhost:8002/summary_task_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"task_workspace\",\n    \"trajectories\": [\n      {\"messages\": [{\"role\": \"user\", \"content\": \"Help me create a 
project plan\"}], \"score\": 1.0}\n    ]\n  }'\n\n# Retriever: Get relevant memories\ncurl -X POST http://localhost:8002/retrieve_task_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"task_workspace\",\n    \"query\": \"How to efficiently manage project progress?\",\n    \"top_k\": 1\n  }'\n```\n\n\u003c/details\u003e\n\n#### Personal Memory Management\n\n```python\nimport requests\n\n# Memory Integration: Learn from user interactions\nresponse = requests.post(\"http://localhost:8002/summary_personal_memory\", json={\n    \"workspace_id\": \"task_workspace\",\n    \"trajectories\": [\n        {\"messages\":\n            [\n                {\"role\": \"user\", \"content\": \"I like to drink coffee while working in the morning\"},\n                {\"role\": \"assistant\",\n                 \"content\": \"I understand, you prefer to start your workday with coffee to stay energized\"}\n            ]\n        }\n    ]\n})\n\n# Memory Retrieval: Get personal memory fragments\nresponse = requests.post(\"http://localhost:8002/retrieve_personal_memory\", json={\n    \"workspace_id\": \"task_workspace\",\n    \"query\": \"What are the user's work habits?\",\n    \"top_k\": 5\n})\n```\n\n\u003cdetails\u003e\n\u003csummary\u003ePython import version\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom reme_ai import ReMeApp\n\nasync def main():\n    async with ReMeApp(\n        \"llm.default.model_name=qwen3-30b-a3b-thinking-2507\",\n        \"embedding_model.default.model_name=text-embedding-v4\",\n        \"vector_store.default.backend=memory\"\n    ) as app:\n        # Memory Integration: Learn from user interactions\n        result = await app.async_execute(\n            name=\"summary_personal_memory\",\n            workspace_id=\"task_workspace\",\n            trajectories=[\n                {\n                    \"messages\": [\n                        {\"role\": \"user\", \"content\": \"I like to drink coffee while working in the morning\"},\n   
                     {\"role\": \"assistant\",\n                         \"content\": \"I understand, you prefer to start your workday with coffee to stay energized\"}\n                    ]\n                }\n            ]\n        )\n        print(result)\n\n        # Memory Retrieval: Get personal memory fragments\n        result = await app.async_execute(\n            name=\"retrieve_personal_memory\",\n            workspace_id=\"task_workspace\",\n            query=\"What are the user's work habits?\",\n            top_k=5\n        )\n        print(result)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003ecurl version\u003c/summary\u003e\n\n```bash\n# Memory Integration: Learn from user interactions\ncurl -X POST http://localhost:8002/summary_personal_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"task_workspace\",\n    \"trajectories\": [\n      {\"messages\": [\n        {\"role\": \"user\", \"content\": \"I like to drink coffee while working in the morning\"},\n        {\"role\": \"assistant\", \"content\": \"I understand, you prefer to start your workday with coffee to stay energized\"}\n      ]}\n    ]\n  }'\n\n# Memory Retrieval: Get personal memory fragments\ncurl -X POST http://localhost:8002/retrieve_personal_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"task_workspace\",\n    \"query\": \"What are the user'\\''s work habits?\",\n    \"top_k\": 5\n  }'\n```\n\n\u003c/details\u003e\n\n#### Tool Memory Management\n\n```python\nimport requests\n\n# Record tool execution results\nresponse = requests.post(\"http://localhost:8002/add_tool_call_result\", json={\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_call_results\": [\n        {\n            \"create_time\": \"2025-10-21 10:30:00\",\n            \"tool_name\": \"web_search\",\n            \"input\": {\"query\": \"Python asyncio 
tutorial\", \"max_results\": 10},\n            \"output\": \"Found 10 relevant results...\",\n            \"token_cost\": 150,\n            \"success\": True,\n            \"time_cost\": 2.3\n        }\n    ]\n})\n\n# Generate usage guidelines from history\nresponse = requests.post(\"http://localhost:8002/summary_tool_memory\", json={\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_names\": \"web_search\"\n})\n\n# Retrieve tool guidelines before use\nresponse = requests.post(\"http://localhost:8002/retrieve_tool_memory\", json={\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_names\": \"web_search\"\n})\n```\n\n\u003cdetails\u003e\n\u003csummary\u003ePython import version\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom reme_ai import ReMeApp\n\nasync def main():\n    async with ReMeApp(\n        \"llm.default.model_name=qwen3-30b-a3b-thinking-2507\",\n        \"embedding_model.default.model_name=text-embedding-v4\",\n        \"vector_store.default.backend=memory\"\n    ) as app:\n        # Record tool execution results\n        result = await app.async_execute(\n            name=\"add_tool_call_result\",\n            workspace_id=\"tool_workspace\",\n            tool_call_results=[\n                {\n                    \"create_time\": \"2025-10-21 10:30:00\",\n                    \"tool_name\": \"web_search\",\n                    \"input\": {\"query\": \"Python asyncio tutorial\", \"max_results\": 10},\n                    \"output\": \"Found 10 relevant results...\",\n                    \"token_cost\": 150,\n                    \"success\": True,\n                    \"time_cost\": 2.3\n                }\n            ]\n        )\n        print(result)\n\n        # Generate usage guidelines from history\n        result = await app.async_execute(\n            name=\"summary_tool_memory\",\n            workspace_id=\"tool_workspace\",\n            tool_names=\"web_search\"\n        )\n        print(result)\n\n        # Retrieve tool 
guidelines before use\n        result = await app.async_execute(\n            name=\"retrieve_tool_memory\",\n            workspace_id=\"tool_workspace\",\n            tool_names=\"web_search\"\n        )\n        print(result)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003ecurl version\u003c/summary\u003e\n\n```bash\n# Record tool execution results\ncurl -X POST http://localhost:8002/add_tool_call_result \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_call_results\": [\n      {\n        \"create_time\": \"2025-10-21 10:30:00\",\n        \"tool_name\": \"web_search\",\n        \"input\": {\"query\": \"Python asyncio tutorial\", \"max_results\": 10},\n        \"output\": \"Found 10 relevant results...\",\n        \"token_cost\": 150,\n        \"success\": true,\n        \"time_cost\": 2.3\n      }\n    ]\n  }'\n\n# Generate usage guidelines from history\ncurl -X POST http://localhost:8002/summary_tool_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_names\": \"web_search\"\n  }'\n\n# Retrieve tool guidelines before use\ncurl -X POST http://localhost:8002/retrieve_tool_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"workspace_id\": \"tool_workspace\",\n    \"tool_names\": \"web_search\"\n  }'\n```\n\n\u003c/details\u003e\n\n#### Working Memory Management\n\n```python\nimport requests\n\n# Summarize and compact working memory for a long-running conversation\nresponse = requests.post(\"http://localhost:8002/summary_working_memory\", json={\n    \"messages\": [\n        {\n            \"role\": \"system\",\n            \"content\": \"You are a helpful assistant. First use `Grep` to find the line numbers that match the keywords or regular expressions, and then use `ReadFile` to read the code around those locations. 
If no matches are found, never give up; try different parameters, such as searching with only part of the keywords. After `Grep`, use the `ReadFile` command to view content starting from a specified `offset` and `limit`, and do not exceed 100 lines. If the current content is insufficient, you can continue trying different `offset` and `limit` values with the `ReadFile` command.\"\n        },\n        {\n            \"role\": \"user\",\n            \"content\": \"Search the reme project's README content\"\n        },\n        {\n            \"role\": \"assistant\",\n            \"content\": \"\",\n            \"tool_calls\": [\n                {\n                    \"index\": 0,\n                    \"id\": \"call_6596dafa2a6a46f7a217da\",\n                    \"function\": {\n                        \"arguments\": \"{\\\"query\\\": \\\"readme\\\"}\",\n                        \"name\": \"web_search\"\n                    },\n                    \"type\": \"function\"\n                }\n            ]\n        },\n        {\n            \"role\": \"tool\",\n            \"content\": \"ultra large context, over 50000 tokens......\"\n        },\n        {\n            \"role\": \"user\",\n            \"content\": \"Based on the readme, what are task memory's results on appworld? Give the specific numbers.\"\n        }\n    ],\n    \"working_summary_mode\": \"auto\",\n    \"compact_ratio_threshold\": 0.75,\n    \"max_total_tokens\": 20000,\n    \"max_tool_message_tokens\": 2000,\n    \"group_token_threshold\": 4000,\n    \"keep_recent_count\": 2,\n    \"store_dir\": \"test_working_memory\",\n    \"chat_id\": \"demo_chat_id\"\n})\n```\n\n\u003cdetails\u003e\n\u003csummary\u003ePython import version\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom reme_ai import ReMeApp\n\n\nasync def main():\n    async with ReMeApp(\n        \"llm.default.model_name=qwen3-30b-a3b-thinking-2507\",\n        \"embedding_model.default.model_name=text-embedding-v4\",\n        \"vector_store.default.backend=memory\"\n    ) as app:\n        # Summarize 
and compact working memory for a long-running conversation\n        result = await app.async_execute(\n            name=\"summary_working_memory\",\n            messages=[\n                {\n                    \"role\": \"system\",\n                    \"content\": \"You are a helpful assistant. First use `Grep` to find the line numbers that match the keywords or regular expressions, and then use `ReadFile` to read the code around those locations. If no matches are found, never give up; try different parameters, such as searching with only part of the keywords. After `Grep`, use the `ReadFile` command to view content starting from a specified `offset` and `limit`, and do not exceed 100 lines. If the current content is insufficient, you can continue trying different `offset` and `limit` values with the `ReadFile` command.\"\n                },\n                {\n                    \"role\": \"user\",\n                    \"content\": \"Search the reme project's README content\"\n                },\n                {\n                    \"role\": \"assistant\",\n                    \"content\": \"\",\n                    \"tool_calls\": [\n                        {\n                            \"index\": 0,\n                            \"id\": \"call_6596dafa2a6a46f7a217da\",\n                            \"function\": {\n                                \"arguments\": \"{\\\"query\\\": \\\"readme\\\"}\",\n                                \"name\": \"web_search\"\n                            },\n                            \"type\": \"function\"\n                        }\n                    ]\n                },\n                {\n                    \"role\": \"tool\",\n                    \"content\": \"ultra large context, over 50000 tokens......\"\n                },\n                {\n                    \"role\": \"user\",\n                    \"content\": \"Based on the readme, what are task memory's results on appworld? Give the specific numbers.\"\n                }\n            ],\n            
working_summary_mode=\"auto\",\n            compact_ratio_threshold=0.75,\n            max_total_tokens=20000,\n            max_tool_message_tokens=2000,\n            group_token_threshold=4000,\n            keep_recent_count=2,\n            store_dir=\"test_working_memory\",\n            chat_id=\"demo_chat_id\",\n        )\n        print(result)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003ecurl version\u003c/summary\u003e\n\n```bash\ncurl -X POST http://localhost:8002/summary_working_memory \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"messages\": [\n      {\n        \"role\": \"system\",\n        \"content\": \"You are a helpful assistant. First use `Grep` to find the line numbers that match the keywords or regular expressions, and then use `ReadFile` to read the code around those locations. If no matches are found, never give up; try different parameters, such as searching with only part of the keywords. After `Grep`, use the `ReadFile` command to view content starting from a specified `offset` and `limit`, and do not exceed 100 lines. 
If the current content is insufficient, you can continue trying different `offset` and `limit` values with the `ReadFile` command.\"\n      },\n      {\n        \"role\": \"user\",\n        \"content\": \"Search the reme project's README content\"\n      },\n      {\n        \"role\": \"assistant\",\n        \"content\": \"\",\n        \"tool_calls\": [\n          {\n            \"index\": 0,\n            \"id\": \"call_6596dafa2a6a46f7a217da\",\n            \"function\": {\n              \"arguments\": \"{\\\"query\\\": \\\"readme\\\"}\",\n              \"name\": \"web_search\"\n            },\n            \"type\": \"function\"\n          }\n        ]\n      },\n      {\n        \"role\": \"tool\",\n        \"content\": \"ultra large context, over 50000 tokens......\"\n      },\n      {\n        \"role\": \"user\",\n        \"content\": \"Based on the readme, what are task memory's results on appworld? Give the specific numbers.\"\n      }\n    ],\n    \"working_summary_mode\": \"auto\",\n    \"compact_ratio_threshold\": 0.75,\n    \"max_total_tokens\": 20000,\n    \"max_tool_message_tokens\": 2000,\n    \"group_token_threshold\": 4000,\n    \"keep_recent_count\": 2,\n    \"store_dir\": \"test_working_memory\",\n    \"chat_id\": \"demo_chat_id\"\n  }'\n```\n\n\u003c/details\u003e\n\n---\n\n## 📦 Pre-built Memory Library\n\nReMe provides a **memory library** with pre-extracted, production-ready memories that agents can load and use immediately:\n\n### Available Memory Packs\n\n| Memory Pack          | Domain         | Size          | Description                                                                         |\n|----------------------|----------------|---------------|-------------------------------------------------------------------------------------|\n| **`appworld.jsonl`** | Task Execution | ~100 memories | Complex task planning patterns, multi-step workflows, and error recovery strategies |\n| **`bfcl_v3.jsonl`**  | Tool Usage     | ~150 memories | Function calling patterns, parameter optimization, and tool selection 
strategies    |\n\n### Loading Pre-built Memories\n\n```python\nimport requests\n\n# Load pre-built memories\nresponse = requests.post(\"http://localhost:8002/vector_store\", json={\n    \"workspace_id\": \"appworld\",\n    \"action\": \"load\",\n    \"path\": \"./docs/library/\"\n})\n\n# Query relevant memories\nresponse = requests.post(\"http://localhost:8002/retrieve_task_memory\", json={\n    \"workspace_id\": \"appworld\",\n    \"query\": \"How to navigate to settings and update user profile?\",\n    \"top_k\": 1\n})\n```\n\n\u003cdetails\u003e\n\u003csummary\u003ePython import version\u003c/summary\u003e\n\n```python\nimport asyncio\nfrom reme_ai import ReMeApp\n\nasync def main():\n    async with ReMeApp(\n        \"llm.default.model_name=qwen3-30b-a3b-thinking-2507\",\n        \"embedding_model.default.model_name=text-embedding-v4\",\n        \"vector_store.default.backend=memory\"\n    ) as app:\n        # Load pre-built memories\n        result = await app.async_execute(\n            name=\"vector_store\",\n            workspace_id=\"appworld\",\n            action=\"load\",\n            path=\"./docs/library/\"\n        )\n        print(result)\n\n        # Query relevant memories\n        result = await app.async_execute(\n            name=\"retrieve_task_memory\",\n            workspace_id=\"appworld\",\n            query=\"How to navigate to settings and update user profile?\",\n            top_k=1\n        )\n        print(result)\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\n\u003c/details\u003e\n\n## 🧪 Experiments\n\n### 🌍 [Appworld Experiment](docs/cookbook/appworld/quickstart.md)\n\nWe tested ReMe on Appworld using Qwen3-8B (non-thinking mode):\n\n| Method       | Avg@4               | Pass@4              |\n|--------------|---------------------|---------------------|\n| without ReMe | 0.1497              | 0.3285              |\n| with ReMe    | 0.1706 **(+2.09%)** | 0.3631 **(+3.46%)** |\n\nPass@K measures the probability that at least one of 
the K generated samples successfully completes the task (score = 1).\nThe current experiment uses an internal AppWorld environment, which may differ slightly from the official one.\n\nYou can find more details on reproducing the experiment in [quickstart.md](docs/cookbook/appworld/quickstart.md).\n\n### 🔧 [BFCL-V3 Experiment](docs/cookbook/bfcl/quickstart.md)\n\nWe tested ReMe on BFCL-V3 multi-turn-base (randomly split into 50 train / 150 val) using Qwen3-8B (thinking mode):\n\n| Method       | Avg@4               | Pass@4              |\n|--------------|---------------------|---------------------|\n| without ReMe | 0.4033              | 0.5955              |\n| with ReMe    | 0.4450 **(+4.17%)** | 0.6577 **(+6.22%)** |\n\n### 🧊 [FrozenLake Experiment](docs/cookbook/frozenlake/quickstart.md)\n\n|                                             without ReMe                                             |                                              with ReMe                                               |\n|:----------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------:|\n| \u003cp align=\"center\"\u003e\u003cimg src=\"docs/_static/figure/frozenlake_failure.gif\" alt=\"GIF 1\" width=\"30%\"\u003e\u003c/p\u003e | \u003cp align=\"center\"\u003e\u003cimg src=\"docs/_static/figure/frozenlake_success.gif\" alt=\"GIF 2\" width=\"30%\"\u003e\u003c/p\u003e |\n\nWe tested on 100 random FrozenLake maps using Qwen3-8B:\n\n| Method       | Pass Rate        |\n|--------------|------------------|\n| without ReMe | 0.66             |\n| with ReMe    | 0.72 **(+6.0%)** |\n\nYou can find more details on reproducing the experiment in [quickstart.md](docs/cookbook/frozenlake/quickstart.md).\n\n### 🛠️ [Tool Memory Benchmark](docs/tool_memory/tool_bench.md)\n\nWe evaluated Tool Memory effectiveness on a controlled benchmark with three mock search tools, using 
Qwen3-30B-Instruct:\n\n| Scenario               | Avg Score | Improvement |\n|------------------------|-----------|-------------|\n| Train (No Memory)      | 0.650     | -           |\n| Test (No Memory)       | 0.672     | Baseline    |\n| **Test (With Memory)** | **0.772** | **+14.88%** |\n\n**Key Findings:**\n- Tool Memory enables data-driven tool selection based on historical performance\n- Success rates improved by ~15% with learned parameter configurations\n\nYou can find more details in [tool_bench.md](docs/tool_memory/tool_bench.md) and the implementation at [run_reme_tool_bench.py](cookbook/tool_memory/run_reme_tool_bench.py).\n\n## 📚 Resources\n\n### Getting Started\n- **[Quick Start](./cookbook/simple_demo)**: Practical examples for immediate use\n  - [Tool Memory Demo](cookbook/simple_demo/use_tool_memory_demo.py): Complete lifecycle demonstration of tool memory\n  - [Tool Memory Benchmark](cookbook/tool_memory/run_reme_tool_bench.py): Evaluate tool memory effectiveness\n\n### Integration Guides\n- **[Direct Python Import](docs/cookbook/working/quick_start.md)**: Embed ReMe directly into your agent code\n- **[HTTP Service API](docs/vector_store_api_guide.md)**: RESTful API for multi-agent systems\n- **[MCP Protocol](docs/mcp_quick_start.md)**: Integration with Claude Desktop and MCP-compatible clients\n\n### Memory System Configuration\n- **[Personal Memory](docs/personal_memory)**: User preference learning and contextual adaptation\n- **[Task Memory](docs/task_memory)**: Procedural knowledge extraction and reuse\n- **[Tool Memory](docs/tool_memory)**: Data-driven tool selection and optimization\n- **[Working Memory](docs/work_memory/message_offload.md)**: Short-term context management for long-running agents\n\n### Advanced Topics\n- **[Operator Pipelines](reme_ai/config/default.yaml)**: Customize memory processing workflows by modifying operator chains\n- **[Vector Store Backends](docs/vector_store_api_guide.md)**: Configure local, Elasticsearch, 
Qdrant, or ChromaDB storage\n- **[Example Collection](./cookbook)**: Real-world use cases and best practices\n\n---\n\n## ⭐ Support \u0026 Community\n\n- **Star \u0026 Watch**: Stars surface ReMe to more agent builders; watching keeps you updated on new releases.\n- **Share your wins**: Open an issue or discussion with what ReMe unlocked for your agents—we love showcasing community builds.\n- **Need a feature?** File a request and we’ll help shape it together.\n\n---\n\n## 🤝 Contribution\n\nWe believe the best memory systems come from collective wisdom. Contributions welcome 👉[Guide](docs/contribution.md):\n\n### Code Contributions\n\n- **New Operators**: Develop custom memory processing operators (retrieval, summarization, etc.)\n- **Backend Implementations**: Add support for new vector stores or LLM providers\n- **Memory Services**: Extend with new memory types or capabilities\n- **API Enhancements**: Improve existing endpoints or add new ones\n\n### Documentation Improvements\n\n- **Integration Examples**: Show how to integrate ReMe with different agent frameworks\n- **Operator Tutorials**: Document custom operator development\n- **Best Practice Guides**: Share effective memory management patterns\n- **Use Case Studies**: Demonstrate ReMe in real-world applications\n\n\n---\n\n## 📄 Citation\n\n```bibtex\n@software{AgentscopeReMe2025,\n  title = {AgentscopeReMe: Memory Management Kit for Agents},\n  author = {Li Yu and\n            Jiaji Deng and\n            Zouying Cao and\n            Weikang Zhou and\n            Tiancheng Qin and\n            Qingxu Fu and\n            Sen Huang and\n            Xianzhe Xu and\n            Zhaoyang Liu and\n            Boyin Liu},\n  url = {https://reme.agentscope.io},\n  year = {2025}\n}\n\n@misc{AgentscopeReMe2025Paper,\n  title={Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution},\n  author={Zouying Cao and\n          Jiaji Deng and\n          Li Yu and\n          Weikang 
Zhou and\n          Zhaoyang Liu and\n          Bolin Ding and\n          Hai Zhao},\n  year={2025},\n  eprint={2512.10696},\n  archivePrefix={arXiv},\n  primaryClass={cs.AI},\n  url={https://arxiv.org/abs/2512.10696},\n}\n```\n\n---\n\n## ⚖️ License\n\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](./LICENSE) file for details.\n\n---\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=agentscope-ai/ReMe\u0026type=Date)](https://www.star-history.com/#agentscope-ai/ReMe\u0026Date)