{"id":31684529,"url":"https://github.com/simplemindedbot/stm-research","last_synced_at":"2025-10-08T08:11:25.863Z","repository":{"id":318527763,"uuid":"1071675771","full_name":"simplemindedbot/stm-research","owner":"simplemindedbot","description":"Short-Term Memory (STM) MCP Server with temporal decay, reinforcement learning, and Long-Term Memory (LTM) integration. Novel algorithm mimics human memory dynamics.","archived":false,"fork":false,"pushed_at":"2025-10-07T17:54:22.000Z","size":125,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-10-07T19:14:24.979Z","etag":null,"topics":["ai","cognitive-science","ebbinghaus","git-friendly","jsonl","knowledge-graph","llm","mcp","mcp-server","memory","obsidian","python","reinforcement-learning","temporal-decay"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/simplemindedbot.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-10-07T16:59:26.000Z","updated_at":"2025-10-07T17:54:25.000Z","dependencies_parsed_at":"2025-10-07T19:24:39.522Z","dependency_job_id":null,"html_url":"https://github.com/simplemindedbot/stm-research","commit_stats":null,"previous_names":["simplemindedbot/stm-research"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/simplemindedbot/stm-research","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repos
itories/simplemindedbot%2Fstm-research","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/simplemindedbot%2Fstm-research/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/simplemindedbot%2Fstm-research/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/simplemindedbot%2Fstm-research/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/simplemindedbot","download_url":"https://codeload.github.com/simplemindedbot/stm-research/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/simplemindedbot%2Fstm-research/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278909743,"owners_count":26066897,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-08T02:00:06.501Z","response_time":56,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","cognitive-science","ebbinghaus","git-friendly","jsonl","knowledge-graph","llm","mcp","mcp-server","memory","obsidian","python","reinforcement-learning","temporal-decay"],"created_at":"2025-10-08T08:09:36.941Z","updated_at":"2025-10-08T08:11:25.857Z","avatar_url":"https://github.com/simplemindedbot.png","language":"Python","readme":"# STM Research: Short-Term Memory with Temporal Decay\n\nA Model Context Protocol (MCP) server providing **human-like memory dynamics** 
for AI assistants. Memories naturally fade over time unless reinforced through use, mimicking the [Ebbinghaus forgetting curve](https://en.wikipedia.org/wiki/Forgetting_curve).\n\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)\n\n\u003e **📖 New to this project?** Start with the [ELI5 Guide](ELI5.md) for a simple explanation of what this does and how to use it.\n\n## Overview\n\nThis repository contains research, design, and a complete implementation of a short-term memory system that combines:\n\n- **Novel temporal decay algorithm** based on cognitive science\n- **Reinforcement learning** through usage patterns\n- **Two-layer architecture** (STM + LTM) for working and permanent memory\n- **Smart prompting patterns** for natural LLM integration\n- **Git-friendly storage** with human-readable JSONL\n- **Knowledge graph** with entities and relations\n\n## Core Algorithm\n\nThe temporal decay scoring function:\n\n$$\n\\text{score}(t) = (n_{\\text{use}})^\\beta \\cdot e^{-\\lambda \\cdot \\Delta t} \\cdot s\n$$\n\nWhere:\n- $n_{\\text{use}}$ - Use count (number of accesses)\n- $\\beta$ (beta) - Sub-linear use count weighting (default: 0.6)\n- $\\lambda = \\frac{\\ln(2)}{t_{1/2}}$ (lambda) - Decay constant; set via half-life (default: 3-day)\n- $\\Delta t$ - Time since last access (seconds)\n- $s$ - Strength parameter $\\in [0, 2]$ (importance multiplier)\n\nThresholds:\n- $\\tau_{\\text{forget}}$ (default 0.05) — if score \u003c this, forget\n- $\\tau_{\\text{promote}}$ (default 0.65) — if score ≥ this, promote (or if $n_{\\text{use}}\\ge5$ in 14 days)\n\nDecay Models:\n- Power‑Law (default): heavier tail; most human‑like retention\n- Exponential: lighter tail; forgets sooner\n- Two‑Component: fast early forgetting + heavier tail\n\nSee detailed parameter reference, model selection, and worked examples in 
docs/scoring_algorithm.md.\n\n## Tuning Cheat Sheet\n\n- Balanced (default)\n  - Half-life: 3 days (λ ≈ 2.67e-6)\n  - β = 0.6, τ_forget = 0.05, τ_promote = 0.65, use_count≥5 in 14d\n  - Strength: 1.0 (bump to 1.3–2.0 for critical)\n- High‑velocity context (ephemeral notes, rapid switching)\n  - Half-life: 12–24 hours (λ ≈ 1.60e-5 to 8.02e-6)\n  - β = 0.8–0.9, τ_forget = 0.10–0.15, τ_promote = 0.70–0.75\n- Long retention (research/archival)\n  - Half-life: 7–14 days (λ ≈ 1.15e-6 to 5.73e-7)\n  - β = 0.3–0.5, τ_forget = 0.02–0.05, τ_promote = 0.50–0.60\n- Preference/decision heavy assistants\n  - Half-life: 3–7 days; β = 0.6–0.8\n  - Strength defaults: 1.3–1.5 for preferences; 1.8–2.0 for decisions\n- Aggressive space control\n  - Raise τ_forget to 0.08–0.12 and/or shorten half-life; schedule weekly GC\n- Environment template\n  - STM_DECAY_LAMBDA=2.673e-6, STM_DECAY_BETA=0.6\n  - STM_FORGET_THRESHOLD=0.05, STM_PROMOTE_THRESHOLD=0.65\n  - STM_PROMOTE_USE_COUNT=5, STM_PROMOTE_TIME_WINDOW=14\n\n**Decision thresholds:**\n- Forget: $\\text{score} \u003c 0.05$ → delete memory\n- Promote: $\\text{score} \\geq 0.65$ OR $n_{\\text{use}} \\geq 5$ within 14 days → move to LTM\n\n## Key Innovations\n\n### 1. Temporal Decay with Reinforcement\n\nUnlike traditional caching (TTL, LRU), memories are scored continuously based on:\n- **Recency** - Exponential decay over time\n- **Frequency** - Use count with sub-linear weighting\n- **Importance** - Adjustable strength parameter\n\nThis creates memory dynamics that closely mimic human cognition.\n\n### 2. 
Smart Prompting System\n\nPatterns for making AI assistants use memory naturally:\n\n**Auto-Save**\n```\nUser: \"I prefer TypeScript over JavaScript\"\n→ Automatically saved with tags: [preferences, typescript, programming]\n```\n\n**Auto-Recall**\n```\nUser: \"Can you help with another TypeScript project?\"\n→ Automatically retrieves preferences and conventions\n```\n\n**Auto-Reinforce**\n```\nUser: \"Yes, still using TypeScript\"\n→ Memory strength increased, decay slowed\n```\n\nNo explicit memory commands needed - just natural conversation.\n\n### 3. Two-Layer Architecture\n\n```\n┌─────────────────────────────────────┐\n│   STM (Short-Term Memory)           │\n│   - JSONL storage                   │\n│   - Temporal decay                  │\n│   - Hours to weeks retention        │\n└──────────────┬──────────────────────┘\n               │ Automatic promotion\n               ↓\n┌─────────────────────────────────────┐\n│   LTM (Long-Term Memory)            │\n│   - Markdown files (Obsidian)       │\n│   - Permanent storage               │\n│   - Git version control             │\n└─────────────────────────────────────┘\n```\n\n## Project Structure\n\n```\nstm-research/\n├── README.md                          # This file\n├── CLAUDE.md                          # Guide for AI assistants\n├── src/stm_server/\n│   ├── core/                          # Decay, scoring, clustering\n│   ├── storage/                       # JSONL and LTM index\n│   ├── tools/                         # 10 MCP tools\n│   ├── backup/                        # Git integration\n│   └── vault/                         # Obsidian integration\n├── docs/\n│   ├── scoring_algorithm.md           # Mathematical details\n│   ├── prompts/                       # Smart prompting patterns\n│   ├── architecture.md                # System design\n│   └── api.md                         # Tool reference\n├── tests/                             # Test suite\n├── examples/                          # Usage 
examples\n└── pyproject.toml                     # Project configuration\n```\n\n## Quick Start\n\n### Installation\n\n```bash\n# Install with uv (recommended)\nuv pip install -e .\n\n# Or with pip\npip install -e .\n```\n\n### Configuration\n\nCopy `.env.example` to `.env` and configure:\n\n```bash\n# Storage\nSTM_STORAGE_PATH=~/.stm/jsonl\n\n# Decay model (power_law | exponential | two_component)\nSTM_DECAY_MODEL=power_law\n\n# Power-law parameters (default model)\nSTM_PL_ALPHA=1.1\nSTM_PL_HALFLIFE_DAYS=3.0\n\n# Exponential (if selected)\n# STM_DECAY_LAMBDA=2.673e-6  # 3-day half-life\n\n# Two-component (if selected)\n# STM_TC_LAMBDA_FAST=1.603e-5  # ~12h\n# STM_TC_LAMBDA_SLOW=1.147e-6  # ~7d\n# STM_TC_WEIGHT_FAST=0.7\n\n# Common parameters\nSTM_DECAY_LAMBDA=2.673e-6\nSTM_DECAY_BETA=0.6\n\n# Thresholds\nSTM_FORGET_THRESHOLD=0.05\nSTM_PROMOTE_THRESHOLD=0.65\n\n# Long-term memory (optional)\nLTM_VAULT_PATH=~/Documents/Obsidian/Vault\n```\n\n### MCP Configuration\n\nAdd to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):\n\n```json\n{\n  \"mcpServers\": {\n    \"stm\": {\n      \"command\": \"uv\",\n      \"args\": [\n        \"--directory\",\n        \"/path/to/stm-research\",\n        \"run\",\n        \"stm-server\"\n      ]\n    }\n  }\n}\n```\n\n**Note:** Storage paths are configured in your `.env` file, not in the MCP config. 
The server reads all configuration from `.env` automatically.\n\n### Maintenance\n\nUse the maintenance CLI to inspect and compact JSONL storage:\n\n```bash\n# Show storage stats (active counts, file sizes, compaction hints)\nstm-maintenance stats\n\n# Compact JSONL (rewrite without tombstones/duplicates)\nstm-maintenance compact\n```\n\n## CLI Commands\n\nThe server includes 6 command-line tools:\n\n```bash\nstm-server           # Run MCP server\nstm-index-ltm        # Index Obsidian vault\nstm-backup           # Git backup operations\nstm-vault            # Vault markdown operations\nstm-search           # Unified STM+LTM search\nstm-maintenance      # JSONL storage stats and compaction\n```\n\n## MCP Tools\n\n11 tools for AI assistants to manage memories:\n\n| Tool | Purpose |\n|------|---------|\n| `save_memory` | Save new memory with tags, entities |\n| `search_memory` | Search with filters and scoring |\n| `search_unified` | Unified search across STM + LTM |\n| `touch_memory` | Reinforce memory (boost strength) |\n| `gc` | Garbage collect low-scoring memories |\n| `promote_memory` | Move to long-term storage |\n| `cluster_memories` | Find similar memories |\n| `consolidate_memories` | Merge duplicates (LLM-driven) |\n| `read_graph` | Get entire knowledge graph |\n| `open_memories` | Retrieve specific memories |\n| `create_relation` | Link memories explicitly |\n\n### Example: Unified Search\n\nSearch across STM and LTM with the CLI:\n\n```bash\nstm-search \"typescript preferences\" --tags preferences --limit 5 --verbose\n```\n\n### Example: Reinforce (Touch) Memory\n\nBoost a memory's recency/use count to slow decay:\n\n```json\n{\n  \"memory_id\": \"mem-123\",\n  \"boost_strength\": true\n}\n```\n\nSample response:\n\n```json\n{\n  \"success\": true,\n  \"memory_id\": \"mem-123\",\n  \"old_score\": 0.41,\n  \"new_score\": 0.78,\n  \"use_count\": 5,\n  \"strength\": 1.1\n}\n```\n\n### Example: Promote Memory\n\nSuggest and promote high-value memories to the 
Obsidian vault.\n\nAuto-detect (dry run):\n\n```json\n{\n  \"auto_detect\": true,\n  \"dry_run\": true\n}\n```\n\nPromote a specific memory:\n\n```json\n{\n  \"memory_id\": \"mem-123\",\n  \"dry_run\": false,\n  \"target\": \"obsidian\"\n}\n```\n\nThe unified search example above, as a `search_unified` MCP request body:\n\n```json\n{\n  \"query\": \"typescript preferences\",\n  \"tags\": [\"preferences\"],\n  \"limit\": 5,\n  \"verbose\": true\n}\n```\n\n## Mathematical Details\n\n### Decay Curves\n\nFor a memory with $n_{\\text{use}}=1$, $s=1.0$, and $\\lambda = 2.673 \\times 10^{-6}$ (3-day half-life):\n\n| Time | Score | Status |\n|------|-------|--------|\n| 0 hours | 1.000 | Fresh |\n| 12 hours | 0.891 | Active |\n| 1 day | 0.794 | Active |\n| 3 days | 0.500 | Half-life |\n| 7 days | 0.198 | Decaying |\n| 14 days | 0.039 | **Forgotten** (below $\\tau_{\\text{forget}}$) |\n| 30 days | 0.001 | **Forgotten** |\n\n### Use Count Impact\n\nWith $\\beta = 0.6$ (sub-linear weighting):\n\n| Use Count | Boost Factor |\n|-----------|--------------|\n| 1 | 1.0× |\n| 5 | 2.6× |\n| 10 | 4.0× |\n| 50 | 10.5× |\n\nFrequent access significantly extends retention.\n\n## Documentation\n\n- **[Scoring Algorithm](docs/scoring_algorithm.md)** - Complete mathematical model with LaTeX formulas\n- **[Smart Prompting](docs/prompts/memory_system_prompt.md)** - Patterns for natural LLM integration\n- **[Architecture](docs/architecture.md)** - System design and implementation\n- **[API Reference](docs/api.md)** - MCP tool documentation\n- **[Graph Features](docs/graph_features.md)** - Knowledge graph usage\n\n## Use Cases\n\n### Personal Assistant (Balanced)\n- 3-day half-life\n- Remember preferences and decisions\n- Auto-promote frequently referenced information\n\n### Development Environment (Aggressive)\n- 1-day half-life\n- Fast context switching\n- Aggressive forgetting of old context\n\n### Research / Archival (Conservative)\n- 14-day half-life\n- Long retention\n- Comprehensive knowledge preservation\n\n## License\n\nMIT License - See [LICENSE](LICENSE) for 
details.\n\nClean-room implementation. No AGPL dependencies.\n\n## Related Work\n\n- [Model Context Protocol](https://github.com/modelcontextprotocol) - MCP specification\n- [Ebbinghaus Forgetting Curve](https://en.wikipedia.org/wiki/Forgetting_curve) - Cognitive science foundation\n- Research inspired by: Memoripy, Titan MCP, MemoryBank\n\n## Citation\n\nIf you use this work in research, please cite:\n\n```bibtex\n@software{stm_research_2025,\n  title = {STM Research: Short-Term Memory with Temporal Decay},\n  author = {simplemindedbot},\n  year = {2025},\n  url = {https://github.com/simplemindedbot/stm-research},\n  version = {0.3.0}\n}\n```\n\n## Contributing\n\nThis is a research project. Contributions welcome! Please:\n\n1. Read the [Architecture docs](docs/architecture.md)\n2. Understand the [Scoring Algorithm](docs/scoring_algorithm.md)\n3. Follow existing code patterns\n4. Add tests for new features\n5. Update documentation\n\n## Status\n\n**Version:** 0.3.0\n**Status:** Research implementation - functional but evolving\n\n### Phase 1 (Complete) ✅\n- 11 MCP tools\n- Temporal decay algorithm\n- Knowledge graph\n\n### Phase 2 (Complete) ✅\n- JSONL storage\n- LTM index\n- Git integration\n- Smart prompting documentation\n- Maintenance CLI\n\n### Future Work\n- Spaced repetition optimization\n- Adaptive decay parameters\n- Enhanced clustering algorithms\n- Performance benchmarks\n\n---\n\n**Built with** [Claude Code](https://claude.com/claude-code) 🤖\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsimplemindedbot%2Fstm-research","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsimplemindedbot%2Fstm-research","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsimplemindedbot%2Fstm-research/lists"}