# STM Research: Short-Term Memory with Temporal Decay
A Model Context Protocol (MCP) server providing **human-like memory dynamics** for AI assistants. Memories naturally fade over time unless reinforced through use, mimicking the [Ebbinghaus forgetting curve](https://en.wikipedia.org/wiki/Forgetting_curve).
[License: MIT](https://opensource.org/licenses/MIT) · [Python](https://www.python.org/downloads/)
> **New to this project?** Start with the [ELI5 Guide](ELI5.md) for a simple explanation of what this does and how to use it.
## Overview
This repository contains research, design, and a complete implementation of a short-term memory system that combines:
- **Novel temporal decay algorithm** based on cognitive science
- **Reinforcement learning** through usage patterns
- **Two-layer architecture** (STM + LTM) for working and permanent memory
- **Smart prompting patterns** for natural LLM integration
- **Git-friendly storage** with human-readable JSONL (example record below)
- **Knowledge graph** with entities and relations
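For illustration, a single record in the JSONL store might look like the following; the field names here are assumptions for the sketch, not the server's actual schema:
```json
{"id": "mem-123", "content": "User prefers TypeScript over JavaScript", "tags": ["preferences", "typescript"], "use_count": 3, "strength": 1.0, "last_access": "2025-10-07T12:00:00Z"}
```
One record per line keeps diffs small, which is what makes the store Git-friendly.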
## Core Algorithm
The temporal decay scoring function:
$$
\text{score}(t) = (n_{\text{use}})^\beta \cdot e^{-\lambda \cdot \Delta t} \cdot s
$$
Where:
- $n_{\text{use}}$ - Use count (number of accesses)
- $\beta$ (beta) - Sub-linear use count weighting (default: 0.6)
- $\lambda = \frac{\ln(2)}{t_{1/2}}$ (lambda) - Decay constant; set via half-life (default: 3-day)
- $\Delta t$ - Time since last access (seconds)
- $s$ - Strength parameter $\in [0, 2]$ (importance multiplier)
Thresholds:
- $\tau_{\text{forget}}$ (default 0.05) → if score < this, forget
- $\tau_{\text{promote}}$ (default 0.65) → if score ≥ this, promote (or if $n_{\text{use}} \geq 5$ within 14 days)
Decay Models:
- Power-Law (default): heavier tail; most human-like retention
- Exponential: lighter tail; forgets sooner
- Two-Component: fast early forgetting + heavier tail
See the detailed parameter reference, model selection, and worked examples in [docs/scoring_algorithm.md](docs/scoring_algorithm.md).
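As a minimal sketch of the exponential variant (times in seconds; the function and parameter names are this example's, not the server's API):
```python
import math

def stm_score(use_count: int, seconds_since_access: float,
              strength: float = 1.0, beta: float = 0.6,
              half_life_days: float = 3.0) -> float:
    """score(t) = (n_use)^beta * exp(-lambda * dt) * s, with lambda = ln(2) / half-life."""
    lam = math.log(2) / (half_life_days * 86400)  # decay constant from half-life
    return (use_count ** beta) * math.exp(-lam * seconds_since_access) * strength

# One access, default strength, exactly one half-life old -> score 0.5
print(round(stm_score(1, 3 * 86400), 3))  # 0.5
```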
## Tuning Cheat Sheet
- Balanced (default)
  - Half-life: 3 days (λ ≈ 2.67e-6)
  - β = 0.6, τ_forget = 0.05, τ_promote = 0.65, use_count ≥ 5 in 14 d
  - Strength: 1.0 (bump to 1.3–2.0 for critical memories)
- High-velocity context (ephemeral notes, rapid switching)
  - Half-life: 12–24 hours (λ ≈ 1.60e-5 to 8.02e-6)
  - β = 0.8–0.9, τ_forget = 0.10–0.15, τ_promote = 0.70–0.75
- Long retention (research/archival)
  - Half-life: 7–14 days (λ ≈ 1.15e-6 to 5.73e-7)
  - β = 0.3–0.5, τ_forget = 0.02–0.05, τ_promote = 0.50–0.60
- Preference/decision-heavy assistants
  - Half-life: 3–7 days; β = 0.6–0.8
  - Strength defaults: 1.3–1.5 for preferences; 1.8–2.0 for decisions
- Aggressive space control
  - Raise τ_forget to 0.08–0.12 and/or shorten the half-life; schedule weekly GC
- Environment template (λ derivation sketched below)
  - STM_DECAY_LAMBDA=2.673e-6, STM_DECAY_BETA=0.6
  - STM_FORGET_THRESHOLD=0.05, STM_PROMOTE_THRESHOLD=0.65
  - STM_PROMOTE_USE_COUNT=5, STM_PROMOTE_TIME_WINDOW=14
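The `STM_DECAY_LAMBDA` value for any target half-life follows from λ = ln(2) / t½ (in seconds); a quick derivation in Python:
```python
import math

half_life_days = 3.0
lam = math.log(2) / (half_life_days * 86400)
print(f"STM_DECAY_LAMBDA={lam:.3e}")  # 2.674e-06 for a 3-day half-life
```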
**Decision thresholds** (sketched in code below):
- Forget: $\text{score} < 0.05$ → delete memory
- Promote: $\text{score} \geq 0.65$ OR $n_{\text{use}} \geq 5$ within 14 days → move to LTM
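A sketch of how these thresholds combine into a single decision; the helper and its field names are hypothetical, not the server's API:
```python
from datetime import datetime, timedelta, timezone

def classify(score: float, use_count: int, created_at: datetime,
             tau_forget: float = 0.05, tau_promote: float = 0.65) -> str:
    """Apply the forget/promote rules described above."""
    if score < tau_forget:
        return "forget"    # below the forget threshold: delete from STM
    within_window = datetime.now(timezone.utc) - created_at <= timedelta(days=14)
    if score >= tau_promote or (use_count >= 5 and within_window):
        return "promote"   # high score or heavy recent use: move to LTM
    return "keep"
```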
## Key Innovations
### 1. Temporal Decay with Reinforcement
Unlike traditional caching (TTL, LRU), memories are scored continuously based on:
- **Recency** - Exponential decay over time
- **Frequency** - Use count with sub-linear weighting
- **Importance** - Adjustable strength parameter
This creates memory dynamics that closely mimic human cognition.
### 2. Smart Prompting System
Patterns for making AI assistants use memory naturally:
**Auto-Save**
```
User: "I prefer TypeScript over JavaScript"
→ Automatically saved with tags: [preferences, typescript, programming]
```
**Auto-Recall**
```
User: "Can you help with another TypeScript project?"
→ Automatically retrieves preferences and conventions
```
**Auto-Reinforce**
```
User: "Yes, still using TypeScript"
→ Memory strength increased, decay slowed
```
No explicit memory commands needed - just natural conversation.
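Under the hood, the auto-save pattern presumably resolves to a `save_memory` tool call; an illustrative request body (field names assumed, not taken from the API docs):
```json
{
  "content": "User prefers TypeScript over JavaScript",
  "tags": ["preferences", "typescript", "programming"],
  "strength": 1.3
}
```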
### 3. Two-Layer Architecture
```
┌──────────────────────────────────────┐
│  STM (Short-Term Memory)             │
│  - JSONL storage                     │
│  - Temporal decay                    │
│  - Hours to weeks retention          │
└──────────────┬───────────────────────┘
               │ Automatic promotion
               ▼
┌──────────────────────────────────────┐
│  LTM (Long-Term Memory)              │
│  - Markdown files (Obsidian)         │
│  - Permanent storage                 │
│  - Git version control               │
└──────────────────────────────────────┘
```
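To make the promotion arrow concrete, here is a toy sketch of moving a record into an Obsidian vault; the record fields and note layout are assumptions, not the project's actual format:
```python
import pathlib

def promote_to_vault(record: dict, vault: pathlib.Path) -> pathlib.Path:
    """Render an STM record as a Markdown note inside the vault (illustrative only)."""
    note = vault / f"{record['id']}.md"
    tags = " ".join(f"#{tag}" for tag in record.get("tags", []))
    note.write_text(f"# {record['id']}\n\n{record['content']}\n\n{tags}\n",
                    encoding="utf-8")
    return note
```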
## Project Structure
```
stm-research/
├── README.md                  # This file
├── CLAUDE.md                  # Guide for AI assistants
├── src/stm_server/
│   ├── core/                  # Decay, scoring, clustering
│   ├── storage/               # JSONL and LTM index
│   ├── tools/                 # 11 MCP tools
│   ├── backup/                # Git integration
│   └── vault/                 # Obsidian integration
├── docs/
│   ├── scoring_algorithm.md   # Mathematical details
│   ├── prompts/               # Smart prompting patterns
│   ├── architecture.md        # System design
│   └── api.md                 # Tool reference
├── tests/                     # Test suite
├── examples/                  # Usage examples
└── pyproject.toml             # Project configuration
```
## Quick Start
### Installation
```bash
# Install with uv (recommended)
uv pip install -e .
# Or with pip
pip install -e .
```
### Configuration
Copy `.env.example` to `.env` and configure:
```bash
# Storage
STM_STORAGE_PATH=~/.stm/jsonl
# Decay model (power_law | exponential | two_component)
STM_DECAY_MODEL=power_law
# Power-law parameters (default model)
STM_PL_ALPHA=1.1
STM_PL_HALFLIFE_DAYS=3.0
# Exponential (if selected)
# STM_DECAY_LAMBDA=2.673e-6 # 3-day half-life
# Two-component (if selected)
# STM_TC_LAMBDA_FAST=1.603e-5 # ~12h
# STM_TC_LAMBDA_SLOW=1.147e-6 # ~7d
# STM_TC_WEIGHT_FAST=0.7
# Common parameters
STM_DECAY_LAMBDA=2.673e-6
STM_DECAY_BETA=0.6
# Thresholds
STM_FORGET_THRESHOLD=0.05
STM_PROMOTE_THRESHOLD=0.65
# Long-term memory (optional)
LTM_VAULT_PATH=~/Documents/Obsidian/Vault
```
### MCP Configuration
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"stm": {
"command": "uv",
"args": [
"--directory",
"/path/to/stm-research",
"run",
"stm-server"
]
}
}
}
```
**Note:** Storage paths are configured in your `.env` file, not in the MCP config. The server reads all configuration from `.env` automatically.
### Maintenance
Use the maintenance CLI to inspect and compact JSONL storage:
```bash
# Show storage stats (active counts, file sizes, compaction hints)
stm-maintenance stats
# Compact JSONL (rewrite without tombstones/duplicates)
stm-maintenance compact
```
## CLI Commands
The server includes the following command-line tools:
```bash
stm-server # Run MCP server
stm-index-ltm # Index Obsidian vault
stm-backup # Git backup operations
stm-vault # Vault markdown operations
stm-search # Unified STM+LTM search
stm-maintenance # JSONL storage stats and compaction
```
## MCP Tools
11 tools for AI assistants to manage memories:
| Tool | Purpose |
|------|---------|
| `save_memory` | Save new memory with tags, entities |
| `search_memory` | Search with filters and scoring |
| `search_unified` | Unified search across STM + LTM |
| `touch_memory` | Reinforce memory (boost strength) |
| `gc` | Garbage collect low-scoring memories |
| `promote_memory` | Move to long-term storage |
| `cluster_memories` | Find similar memories |
| `consolidate_memories` | Merge duplicates (LLM-driven) |
| `read_graph` | Get entire knowledge graph |
| `open_memories` | Retrieve specific memories |
| `create_relation` | Link memories explicitly |
### Example: Unified Search
Search across STM and LTM with the CLI:
```bash
stm-search "typescript preferences" --tags preferences --limit 5 --verbose
```
The same search as an MCP tool (`search_unified` request body):
```json
{
  "query": "typescript preferences",
  "tags": ["preferences"],
  "limit": 5,
  "verbose": true
}
```
### Example: Reinforce (Touch) Memory
Boost a memory's recency/use count to slow decay:
```json
{
  "memory_id": "mem-123",
  "boost_strength": true
}
```
Sample response:
```json
{
  "success": true,
  "memory_id": "mem-123",
  "old_score": 0.41,
  "new_score": 0.78,
  "use_count": 5,
  "strength": 1.1
}
```
### Example: Promote Memory
Suggest and promote high-value memories to the Obsidian vault.
Auto-detect (dry run):
```json
{
  "auto_detect": true,
  "dry_run": true
}
```
Promote a specific memory:
```json
{
  "memory_id": "mem-123",
  "dry_run": false,
  "target": "obsidian"
}
```
## Mathematical Details
### Decay Curves
For a memory with $n_{\text{use}}=1$, $s=1.0$, and $\lambda = 2.673 \times 10^{-6}$ (3-day half-life):
| Time | Score | Status |
|------|-------|--------|
| 0 hours | 1.000 | Fresh |
| 12 hours | 0.891 | Active |
| 1 day | 0.794 | Active |
| 3 days | 0.500 | Half-life |
| 7 days | 0.198 | Decaying |
| 14 days | 0.039 | **Forgotten** (score < 0.05) |
| 30 days | 0.001 | **Forgotten** |
### Use Count Impact
With $\beta = 0.6$ (sub-linear weighting):
| Use Count | Boost Factor |
|-----------|--------------|
| 1 | 1.0× |
| 5 | 2.6× |
| 10 | 4.0× |
| 50 | 10.5× |
Frequent access significantly extends retention.
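Both tables can be reproduced in a few lines, assuming the exponential model with a 3-day half-life:
```python
import math

lam = math.log(2) / (3 * 86400)           # 3-day half-life, per-second decay constant
for days in (0, 0.5, 1, 3, 7, 14, 30):    # decay-curve table
    print(f"{days:>4} d  score = {math.exp(-lam * days * 86400):.3f}")

for n in (1, 5, 10, 50):                  # use-count boost table, beta = 0.6
    print(f"n_use = {n:>2}  boost = {n ** 0.6:.1f}x")
```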
## Documentation
- **[Scoring Algorithm](docs/scoring_algorithm.md)** - Complete mathematical model with LaTeX formulas
- **[Smart Prompting](docs/prompts/memory_system_prompt.md)** - Patterns for natural LLM integration
- **[Architecture](docs/architecture.md)** - System design and implementation
- **[API Reference](docs/api.md)** - MCP tool documentation
- **[Graph Features](docs/graph_features.md)** - Knowledge graph usage
## Use Cases
### Personal Assistant (Balanced)
- 3-day half-life
- Remember preferences and decisions
- Auto-promote frequently referenced information
### Development Environment (Aggressive)
- 1-day half-life
- Fast context switching
- Aggressive forgetting of old context (example `.env` below)
### Research / Archival (Conservative)
- 14-day half-life
- Long retention
- Comprehensive knowledge preservation
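As an example, the aggressive development preset might translate into `.env` values like these; λ is ln(2)/86400 for a 1-day half-life, and the exact numbers are suggestions within the cheat-sheet ranges, not shipped defaults:
```bash
STM_DECAY_LAMBDA=8.023e-6      # 1-day half-life
STM_DECAY_BETA=0.8             # weight repeated use more heavily
STM_FORGET_THRESHOLD=0.10      # forget sooner
STM_PROMOTE_THRESHOLD=0.70     # promote only clearly hot memories
```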
## License
MIT License - See [LICENSE](LICENSE) for details.
Clean-room implementation. No AGPL dependencies.
## Related Work
- [Model Context Protocol](https://github.com/modelcontextprotocol) - MCP specification
- [Ebbinghaus Forgetting Curve](https://en.wikipedia.org/wiki/Forgetting_curve) - Cognitive science foundation
- Research inspired by: Memoripy, Titan MCP, MemoryBank
## Citation
If you use this work in research, please cite:
```bibtex
@software{stm_research_2025,
title = {STM Research: Short-Term Memory with Temporal Decay},
author = {simplemindedbot},
year = {2025},
url = {https://github.com/simplemindedbot/stm-research},
version = {0.3.0}
}
```
## Contributing
This is a research project. Contributions welcome! Please:
1. Read the [Architecture docs](docs/architecture.md)
2. Understand the [Scoring Algorithm](docs/scoring_algorithm.md)
3. Follow existing code patterns
4. Add tests for new features
5. Update documentation
## Status
**Version:** 0.3.0
**Status:** Research implementation - functional but evolving
### Phase 1 (Complete) ✅
- 10 MCP tools
- Temporal decay algorithm
- Knowledge graph
### Phase 2 (Complete) ✅
- JSONL storage
- LTM index
- Git integration
- Smart prompting documentation
- Maintenance CLI
### Future Work
- Spaced repetition optimization
- Adaptive decay parameters
- Enhanced clustering algorithms
- Performance benchmarks
---
**Built with** [Claude Code](https://claude.com/claude-code) 🤖