https://github.com/memodb-io/acontext
Agent Skills as a Memory Layer
- Host: GitHub
- URL: https://github.com/memodb-io/acontext
- Owner: memodb-io
- License: apache-2.0
- Created: 2025-07-16T13:15:48.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2026-03-04T16:17:46.000Z (about 1 month ago)
- Last Synced: 2026-03-04T18:04:58.762Z (about 1 month ago)
- Topics: agent, agent-development-kit, agent-observability, ai-agent, anthropic, context-data-platform, context-engineering, data-platform, llm, llm-observability, llmops, memory, openai, self-evolving, self-learning
- Language: TypeScript
- Homepage: https://acontext.io
- Size: 23.6 MB
- Stars: 3,120
- Watchers: 26
- Forks: 293
- Open Issues: 17
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: .github/CODEOWNERS
- Roadmap: ROADMAP.md
- Agents: AGENTS.md
Awesome Lists containing this project
- awesome-ChatGPT-repositories - Acontext (1.3k) / One Place for Agents to Store, Observe, and Learn. Context Data Platform for Self-learning Agents, designed to simplify context engineering and improve agent reliability and task success rates. (NLP)
README
## What is Acontext?
Acontext is an open-source skill memory layer for AI agents. It **automatically** captures learnings from agent runs and stores them as **agent skill files**: files you can read, edit, and share across agents, LLMs, and frameworks.
If you want the agent you build to **learn from its mistakes** and **reuse what worked**, without opaque memory polluting your context, give Acontext a try.
## Skill is All You Need
Agent memory is getting increasingly complicated: hard to understand, hard to debug, and hard for users to inspect or correct. Acontext takes a different approach: if agent skills can represent every piece of knowledge an agent needs as simple files, the memory can be represented the same way.
- **Acontext builds memory in the agent skills format**, so everyone can see and understand what the memory actually contains.
- **Skill is Memory, Memory is Skill**. Whether you downloaded a skill from Clawhub or wrote it yourself, Acontext can follow it and evolve it over time.
## The Philosophy of Acontext
- **Plain files, any framework**: Skill memories are Markdown files. Use them with LangGraph, Claude, AI SDK, or anything that reads files. No embeddings, no API lock-in. You can version them with Git, search them with grep, and mount them into a sandbox.
- **You design the structure**: Attach additional skills to define the schema, naming, and file layout of the memory. For example, upload a working-context skill that keeps one file per contact or one file per project.
- **Progressive disclosure, not search**: The agent uses `get_skill` and `get_skill_file` to fetch what it needs. Retrieval happens through tool use and reasoning, not semantic top-k.
- **Download as ZIP, reuse anywhere**: Export skill files as a ZIP and run them locally, in another agent, or with another LLM. No vendor lock-in, no re-embedding or migration step.
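Because skill memories are plain Markdown files, any framework can load them with ordinary file I/O. A minimal sketch, assuming downloaded skills live under `./skills/<name>/SKILL.md` (the layout here is an assumption for illustration):

```python
from pathlib import Path

def load_skill_index(skills_dir: str = "./skills") -> dict:
    """Read every SKILL.md under skills_dir into a {name: content} map.

    The ./skills/<name>/SKILL.md layout is an assumption for illustration;
    any directory of Markdown files works the same way.
    """
    index = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        index[skill_md.parent.name] = skill_md.read_text(encoding="utf-8")
    return index
```

And because they are just files, ordinary tooling applies too: `grep -r deploy ./skills/`, or `git -C ./skills diff` to review what the memory learned.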
## How It Works
### Store: how skills get memorized
```mermaid
flowchart LR
A[Session messages] --> C[Task complete/failed]
C --> D[Distillation]
D --> E[Skill Agent]
E --> F[Update Skills]
```
- **Session messages**: The conversation (and optionally tool calls and artifacts) is the raw input. Tasks are extracted from the message stream automatically, or inferred from explicit outcome reporting.
- **Task complete or failed**: When a task is marked done or failed (e.g. by agent report or automatic detection), that outcome triggers learning.
- **Distillation**: An LLM pass infers what worked, what failed, and user preferences from the conversation and execution trace.
- **Skill Agent**: Decides where to store the learnings (an existing skill or a new one) and writes them according to your `SKILL.md` schema.
- **Update Skills**: Skills are updated on disk. You define the structure in `SKILL.md`; the system handles extraction, routing, and writing.
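The steps above can be sketched as plain control flow. This is a toy illustration, not the backend implementation: `distill` stands in for the LLM pass, and routing plus file-writing is reduced to a dict update.

```python
def distill(messages: list, outcome: str) -> str:
    """Hypothetical stand-in for the LLM distillation pass."""
    notes = " ".join(m["content"] for m in messages)
    return f"Outcome: {outcome}. Notes: {notes}"

def on_task_finished(messages: list, outcome: str, skills: dict) -> dict:
    """Finished task -> distill -> route to a skill -> update it."""
    if outcome not in ("complete", "failed"):
        return skills  # only finished tasks trigger learning
    lesson = distill(messages, outcome)
    # Routing is reduced to one bucket here; the real Skill Agent picks an
    # existing skill or creates a new one per the SKILL.md schema.
    skills.setdefault("default", []).append(lesson)
    return skills
```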
### Recall: how the agent uses skills on the next run
```mermaid
flowchart LR
E[Any Agent] --> F[list_skills/get_skill]
F --> G[Appear in context]
```
Give your agent the **Skill Content Tools** (`get_skill`, `get_skill_file`). The agent decides what it needs, calls the tools, and gets the skill content. No embedding search: **progressive disclosure, with the agent in the loop**.
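A minimal sketch of that recall loop, using an in-memory dict in place of the real skill store (the tool names mirror the README; everything else, including the skill contents, is illustrative):

```python
# In-memory stand-in for the skill store.
SKILLS = {
    "deploy-checklist": "# Deploy checklist\n1. Run tests\n2. Tag release",
    "user-prefs": "# User preferences\nPrefers concise answers.",
}

def list_skills() -> list:
    """Cheap first step: names only, not full contents."""
    return sorted(SKILLS)

def get_skill(name: str) -> str:
    """Second step: fetch one skill body on demand."""
    return SKILLS[name]

# The agent lists names, reasons over them, then fetches only what it
# needs -- no top-k vector search involved.
names = list_skills()
relevant = [n for n in names if "deploy" in n]  # stand-in for LLM reasoning
context = get_skill(relevant[0])
```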
# Use It to Improve Your Agent
Claude Code:
```text
Read https://acontext.io/SKILL.md and follow the instructions to install and configure Acontext for Claude Code
```
OpenClaw:
```text
Read https://acontext.io/SKILL.md and follow the instructions to install and configure Acontext for OpenClaw
```
# Step-by-step Quickstart
### Connect to Acontext
1. Go to [Acontext.io](https://acontext.io), claim your free credits.
2. Go through a one-click onboarding to get your API Key (starts with `sk-ac`)
### Self-host Acontext
We provide an `acontext-cli` to help you build a quick proof of concept. Download it in your terminal first:
```bash
curl -fsSL https://install.acontext.io | sh
```
You need [docker](https://www.docker.com/get-started/) installed and an OpenAI API key to start an Acontext backend on your computer:
```bash
mkdir acontext_server && cd acontext_server
acontext server up
```
> Make sure your LLM has the ability to [call tools](https://platform.openai.com/docs/guides/function-calling). By default, Acontext will use `gpt-4.1`.
`acontext server up` will create/use `.env` and `config.yaml` for Acontext, and create a `db` folder to persist data.
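After it finishes, the working directory should look roughly like this (a sketch based on the description above; exact contents may vary):

```text
acontext_server/
├── .env          # created/used by acontext server up
├── config.yaml   # server configuration
└── db/           # persisted data
```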
Once it's done, you can access the following endpoints:
- Acontext API Base URL: http://localhost:8029/api/v1
- Acontext Dashboard: http://localhost:3000/
### Install SDKs
We maintain [Python](https://pypi.org/project/acontext/) and [TypeScript](https://www.npmjs.com/package/@acontext/acontext) SDKs. The snippets below use Python.
> See the TypeScript package page for the TS SDK quickstart.
```bash
pip install acontext
```
### Initialize Client
```python
import os
from acontext import AcontextClient

# For cloud:
client = AcontextClient(
    api_key=os.getenv("ACONTEXT_API_KEY"),
)

# For self-hosted:
client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token",
)
```
### Skill Memory in Action
Create a learning space, attach a session, and let the agent learn β skills are written as Markdown files automatically.
```python
from acontext import AcontextClient

client = AcontextClient(api_key="sk-ac-...")

# Create a learning space and attach a session
space = client.learning_spaces.create()
session = client.sessions.create()
client.learning_spaces.learn(space.id, session_id=session.id)

# Run your agent and store messages; when tasks complete, learning runs automatically
client.sessions.store_message(session.id, blob={"role": "user", "content": "My name is Gus"})
client.sessions.store_message(session.id, blob={"role": "assistant", "content": "Hi Gus! How can I help you today?"})
# ... agent runs ...

# List learned skills (Markdown files)
client.learning_spaces.wait_for_learning(space.id, session_id=session.id)
skills = client.learning_spaces.list_skills(space.id)

# Download all skill files to a local directory
for skill in skills:
    client.skills.download(skill_id=skill.id, path=f"./skills/{skill.name}")
```
> `wait_for_learning` is a blocking helper for demo purposes. In production, task extraction and learning run in the background automatically; your agent never waits.
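Under the hood, a learned skill is just Markdown. A hypothetical example of what a downloaded `SKILL.md` might contain (the contents and frontmatter fields here are invented for illustration; the actual schema is whatever your `SKILL.md` defines):

```markdown
---
name: user-prefs
description: Preferences and facts learned from user conversations
---

# User preferences
- The user's name is Gus.
```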
### More Features
- **[Context Engineering](https://docs.acontext.io/engineering/editing)**: Compress context with summaries and edit strategies
- **[Disk](https://docs.acontext.io/store/disk)**: Virtual, persistent filesystem for agents
- **[Sandbox](https://docs.acontext.io/store/sandbox)**: Isolated code execution with bash, Python, and [mountable skills](https://docs.acontext.io/tool/bash_tools#mounting-skills-in-sandbox)
- **[Agent Tools](https://docs.acontext.io/tool/whatis)**: Disk tools, sandbox tools, and skill tools for LLM function calling
# Use Acontext to Build Agents
Download end-to-end scripts with `acontext`:
**Python**
```bash
acontext create my-proj --template-path "python/openai-basic"
```
More Python examples:
- `python/openai-agent-basic`: OpenAI Agents SDK template
- `python/openai-agent-artifacts`: agent that can edit and download artifacts
- `python/claude-agent-sdk`: Claude Agent SDK with `ClaudeAgentStorage`
- `python/agno-basic`: Agno framework template
- `python/smolagents-basic`: smolagents (Hugging Face) template
- `python/interactive-agent-skill`: interactive sandbox with mountable agent skills
**TypeScript**
```bash
acontext create my-proj --template-path "typescript/openai-basic"
```
More TypeScript examples:
- `typescript/vercel-ai-basic`: agent built on @vercel/ai-sdk
- `typescript/claude-agent-sdk`: Claude Agent SDK with `ClaudeAgentStorage`
- `typescript/interactive-agent-skill`: interactive sandbox with mountable agent skills
> [!NOTE]
>
> Check our example repo for more templates: [Acontext-Examples](https://github.com/memodb-io/Acontext-Examples).
>
> We're cooking more full-stack Agent Applications! [Tell us what you want!](https://discord.acontext.io)
# Documentation
To learn more about skill memory and what Acontext can do, visit [our docs](https://docs.acontext.io/) or start with [What is Skill Memory?](https://docs.acontext.io/learn/quick)
# Stay Updated
Star Acontext on GitHub to support us and receive instant notifications.

# Architecture
```mermaid
graph TB
subgraph "Client Layer"
PY["pip install acontext"]
TS["npm i @acontext/acontext"]
end
subgraph "Acontext Backend"
subgraph " "
API["API
localhost:8029"]
CORE["Core"]
API -->|FastAPI & MQ| CORE
end
subgraph " "
Infrastructure["Infrastructure"]
PG["PostgreSQL"]
S3["S3"]
REDIS["Redis"]
MQ["RabbitMQ"]
end
end
subgraph "Dashboard"
UI["Web Dashboard
localhost:3000"]
end
PY -->|RESTful API| API
TS -->|RESTful API| API
UI -->|RESTful API| API
API --> Infrastructure
CORE --> Infrastructure
Infrastructure --> PG
Infrastructure --> S3
Infrastructure --> REDIS
Infrastructure --> MQ
style PY fill:#3776ab,stroke:#fff,stroke-width:2px,color:#fff
style TS fill:#3178c6,stroke:#fff,stroke-width:2px,color:#fff
style API fill:#00add8,stroke:#fff,stroke-width:2px,color:#fff
style CORE fill:#ffd43b,stroke:#333,stroke-width:2px,color:#333
style UI fill:#000,stroke:#fff,stroke-width:2px,color:#fff
style PG fill:#336791,stroke:#fff,stroke-width:2px,color:#fff
style S3 fill:#ff9900,stroke:#fff,stroke-width:2px,color:#fff
style REDIS fill:#dc382d,stroke:#fff,stroke-width:2px,color:#fff
style MQ fill:#ff6600,stroke:#fff,stroke-width:2px,color:#fff
```
# Stay Connected
Join the community for support and discussions:
- [Discuss with builders on the Acontext Discord](https://discord.acontext.io)
- [Follow Acontext on X](https://x.com/acontext_io)
# Contributing
- Check our [ROADMAP.md](./ROADMAP.md) first.
- Read [CONTRIBUTING.md](./CONTRIBUTING.md).
# Badges
 
```md
[](https://acontext.io)
[](https://acontext.io)
```
# LICENSE
This project is currently licensed under [Apache License 2.0](LICENSE).