# jcode

[![CI](https://github.com/1jehuang/jcode/actions/workflows/ci.yml/badge.svg)](https://github.com/1jehuang/jcode/actions/workflows/ci.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Built with Rust](https://img.shields.io/badge/Built%20with-Rust-orange.svg)](https://www.rust-lang.org/)

A blazing-fast, fully autonomous AI coding agent with a gorgeous TUI,
multi-model support, swarm coordination, persistent memory, and 30+ built-in tools -
all running natively in your terminal.


jcode demo


[Features](#features) · [Install](#installation) · [Usage](#usage) · [Architecture](#architecture) · [Tools](#tools)

---

## Features

| Feature | Description |
|---|---|
| **Blazing Fast TUI** | Sub-millisecond rendering at 1,400+ FPS. No flicker. No lag. Ever. |
| **Multi-Provider** | Claude, OpenAI, GitHub Copilot, OpenRouter - 200+ models, switch on the fly |
| **No API Keys Needed** | Works with your Claude Max, ChatGPT Pro, or GitHub Copilot subscription via OAuth |
| **Persistent Memory** | Learns about you and your codebase across sessions |
| **Swarm Mode** | Multiple agents coordinate in the same repo with conflict detection |
| **30+ Built-in Tools** | File ops, search, web, shell, memory, sub-agents, parallel execution |
| **MCP Support** | Extend with any Model Context Protocol server |
| **Server / Client** | Daemon mode with multi-client attach, session persistence |
| **Sub-Agents** | Delegate tasks to specialized child agents |
| **Self-Updating** | Built-in self-dev mode with hot-reload and canary deploys |
| **Featherweight** | ~28 MB idle client, single native binary - no runtime, no VM, no Electron |
| **OpenClaw** | Always-on ambient agent — gardens memory, does proactive work, responds via Telegram |

---

## Performance & Resource Efficiency

*A single native binary. No Node.js. No Electron. No Python. Just Rust.*

jcode is engineered to be absurdly efficient. While other coding agents spin up
Electron windows, Node.js runtimes, and multi-hundred-MB processes, jcode runs
as a single compiled binary that sips resources.

| Metric | jcode | Typical AI IDE / Agent |
|---|---|---|
| **Idle client memory** | **~28 MB** | 300–800 MB |
| **Server memory** | **~40 MB** (base) | N/A (monolithic) |
| **Active session** | **~50–65 MB** | 500 MB+ |
| **Frame render time** | **0.67 ms** (1,400+ FPS) | 16 ms (60 FPS, if lucky) |
| **Startup time** | **Instant** | 3–10 seconds |
| **CPU at idle** | **~0.3%** | 2–5% |
| **Runtime dependencies** | **None** | Node.js, Python, Electron, … |
| **Binary** | **Single 66 MB executable** | Hundreds of MB + package managers |

> **Real-world proof:** Right now on the dev machine there are **10+ jcode sessions**
> running simultaneously - clients, servers, sub-agents - all totaling less memory
> than a single Electron app window.

The secret is Rust. No garbage collector pausing your UI. No JS event loop
bottleneck. No interpreted overhead. Just zero-cost abstractions compiled
to native code with `jemalloc` for memory-efficient long-running sessions.

---

## Installation

### Quick Install

```bash
# macOS & Linux
curl -fsSL https://raw.githubusercontent.com/1jehuang/jcode/master/scripts/install.sh | bash
```

```powershell
# Windows (PowerShell)
irm https://raw.githubusercontent.com/1jehuang/jcode/master/scripts/install.ps1 | iex
```

### macOS via Homebrew

```bash
brew tap 1jehuang/jcode
brew install jcode
```

### From Source (all platforms)

```bash
git clone https://github.com/1jehuang/jcode.git
cd jcode
cargo build --release
```

Then symlink to your PATH:

```bash
scripts/install_release.sh
```

### Prerequisites

You need at least one of:

| Provider | Setup |
|---|---|
| **Claude** (recommended) | Run `/login claude` inside jcode (opens browser for OAuth) |
| **GitHub Copilot** | Run `/login copilot` inside jcode (GitHub device flow) |
| **OpenAI** | Run `/login openai` inside jcode (opens browser for OAuth) |
| **Google Gemini** | Run `/login gemini` inside jcode (native Google OAuth for Code Assist) |
| **Azure OpenAI** | Run `jcode login --provider azure` (Microsoft Entra ID or API key) |
| **Alibaba Cloud Coding Plan** | Run `jcode login --provider alibaba-coding-plan` (Alibaba Cloud Bailian API key) |
| **OpenRouter** | Set `OPENROUTER_API_KEY=sk-or-v1-...` |
| **Direct API Key** | Set `ANTHROPIC_API_KEY=sk-ant-...` |

### Platform Support

| Platform | Status |
|---|---|
| **Linux** x86_64 / aarch64 | Fully supported |
| **macOS** Apple Silicon & Intel | Supported |
| **Windows** x86_64 | Supported (native + WSL2) |

---

## Usage

```bash
# Launch the TUI (default - connects to server or starts one)
jcode

# Run a single command non-interactively
jcode run "Create a hello world program in Python"

# Start as background server
jcode serve

# Connect additional clients to the running server
jcode connect

# Specify provider
jcode --provider claude
jcode --provider copilot
jcode --provider openai
jcode --provider openrouter
jcode --provider azure
jcode --provider alibaba-coding-plan

# Change working directory
jcode -C /path/to/project

# Resume a previous session by memorable name
jcode --resume fox
```

---

## Tools

30+ tools available out of the box - and extensible via MCP.

| Category | Tools | Description |
|---|---|---|
| **File Ops** | `read` `write` `edit` `multiedit` `patch` `apply_patch` | Read, write, and surgically edit files |
| **Search** | `glob` `grep` `ls` `codesearch` | Find files, search contents, navigate code |
| **Execution** | `bash` `task` `batch` `bg` | Shell commands, sub-agents, parallel & background execution |
| **Web** | `webfetch` `websearch` | Fetch URLs, search the web via DuckDuckGo |
| **Memory** | `memory` `session_search` `conversation_search` | Persistent cross-session memory and RAG retrieval |
| **Coordination** | `communicate` `todo_read` `todo_write` | Inter-agent messaging, task tracking |
| **Meta** | `mcp` `skill` `selfdev` | MCP servers, skill loading, self-development |

---

## Architecture

### High-Level Overview

```mermaid
graph TB
CLI["CLI (main.rs)
jcode [serve|connect|run|...]"]

CLI --> TUI["TUI
app.rs / ui.rs"]
CLI --> Server["Server
Unix Socket"]
CLI --> Standalone["Standalone
Agent Loop"]

Server --> Agent["Agent
agent.rs"]
TUI <-->|events| Server

Agent --> Provider["Provider
Claude / Copilot / OpenAI / OpenRouter / Azure OpenAI"]
Agent --> Registry["Tool Registry
30+ tools"]
Agent --> Session["Session
Persistence"]

style CLI fill:#f97316,color:#fff
style Agent fill:#8b5cf6,color:#fff
style Provider fill:#3b82f6,color:#fff
style Registry fill:#10b981,color:#fff
style TUI fill:#ec4899,color:#fff
style Server fill:#6366f1,color:#fff
```

**Data Flow:**
1. User input enters via TUI or CLI
2. Server routes requests to the appropriate Agent session
3. Agent sends messages to Provider, receives streaming response
4. Tool calls are executed via the Registry
5. Session state is persisted to `~/.jcode/sessions/`
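As a rough sketch of step 5, session state could land on disk like this; the exact filename scheme is an assumption based on the `session_abc123_fox` example shown later, not the real `session.rs` logic:

```rust
use std::path::PathBuf;

// Illustrative only: builds the on-disk location for a persisted session
// under ~/.jcode/sessions/, following the session_*.json pattern.
fn session_path(home: &str, session_id: &str) -> PathBuf {
    PathBuf::from(home)
        .join(".jcode/sessions")
        .join(format!("{}.json", session_id))
}
```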

### Provider System

```mermaid
graph TB
MP["MultiProvider
Detects credentials, allows runtime switching"]

MP --> Claude["ClaudeProvider
provider/claude.rs"]
MP --> Copilot["CopilotProvider
provider/copilot.rs"]
MP --> OpenAI["OpenAIProvider
provider/openai.rs"]
MP --> OR["OpenRouterProvider
provider/openrouter.rs"]
MP --> Azure["Azure OpenAI
provider/openrouter.rs + auth/azure.rs"]

Claude --> ClaudeCreds["~/.claude/.credentials.json
OAuth (Claude Max)"]
Claude --> APIKey["ANTHROPIC_API_KEY
Direct API"]
Copilot --> GHCreds["~/.config/github-copilot/
OAuth (Copilot Pro/Free)"]
OpenAI --> CodexCreds["~/.codex/auth.json
OAuth (ChatGPT Pro)"]
OR --> ORKey["OPENROUTER_API_KEY
200+ models"]
Azure --> AzureCreds["Azure CLI / Managed Identity / API key
Entra ID or direct key"]

style MP fill:#8b5cf6,color:#fff
style Claude fill:#d97706,color:#fff
style Copilot fill:#6366f1,color:#fff
style OpenAI fill:#10b981,color:#fff
style OR fill:#3b82f6,color:#fff
style Azure fill:#0ea5e9,color:#fff
```

**Key Design:**
- `MultiProvider` detects available credentials at startup
- Seamless runtime switching between providers with `/model` command
- Claude direct API with OAuth - no API key needed with a subscription
- GitHub Copilot OAuth - access Claude, GPT, Gemini, and more through your Copilot subscription
- Azure OpenAI supports either Microsoft Entra ID credentials or an `api-key` header
- OpenRouter gives access to 200+ models from all major providers
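A minimal sketch of the credential-priority decision, assuming the ordering stated elsewhere in this README (a direct `ANTHROPIC_API_KEY` overrides OAuth); the real `MultiProvider` in `provider/` may detect and order credentials differently:

```rust
// Hypothetical, simplified detection result - not the real provider enum.
#[derive(Debug, PartialEq)]
enum Detected {
    AnthropicKey,
    ClaudeOAuth,
    OpenRouterKey,
    None,
}

// Pure decision function: a direct API key overrides OAuth credentials
// on disk, per the Environment Variables table; OpenRouter is a fallback.
fn detect(has_anthropic_key: bool, claude_oauth_on_disk: bool, has_openrouter_key: bool) -> Detected {
    if has_anthropic_key {
        Detected::AnthropicKey
    } else if claude_oauth_on_disk {
        Detected::ClaudeOAuth
    } else if has_openrouter_key {
        Detected::OpenRouterKey
    } else {
        Detected::None
    }
}
```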

### Tool System

```mermaid
graph TB
Registry["Tool Registry
Arc<RwLock<HashMap<String, Arc<dyn Tool>>>>"]

Registry --> FileTools["File Tools
read · write · edit
multiedit · patch"]
Registry --> SearchTools["Search & Nav
glob · grep · ls
codesearch"]
Registry --> ExecTools["Execution
bash · task · batch · bg"]
Registry --> WebTools["Web
webfetch · websearch"]
Registry --> MemTools["Memory & RAG
memory · session_search
conversation_search"]
Registry --> MetaTools["Meta & Control
todo · skill · communicate
mcp · selfdev"]
Registry --> MCPTools["MCP Tools
Dynamically registered
from external servers
"]

style Registry fill:#10b981,color:#fff
style FileTools fill:#3b82f6,color:#fff
style SearchTools fill:#6366f1,color:#fff
style ExecTools fill:#f97316,color:#fff
style WebTools fill:#ec4899,color:#fff
style MemTools fill:#8b5cf6,color:#fff
style MetaTools fill:#d97706,color:#fff
style MCPTools fill:#64748b,color:#fff
```

**Tool Trait:**
```rust
// Value, ToolContext, and Result are crate-level types, shown simplified.
#[async_trait]
trait Tool: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn parameters_schema(&self) -> Value;
    async fn execute(&self, input: Value, ctx: ToolContext) -> Result;
}
```

### Server & Swarm Coordination

```mermaid
graph TB
Server["Server
/run/user/{uid}/jcode.sock"]

Server --> C1["Client 1
TUI"]
Server --> C2["Client 2
TUI"]
Server --> C3["Client 3
External"]
Server --> Debug["Debug Socket
Headless testing"]

subgraph Swarm["Swarm - Same Working Directory"]
Fox["fox
(agent)"]
Oak["oak
(agent)"]
River["river
(agent)"]

Fox <--> Coord["Conflict Detection
File Touch Events
Shared Context"]
Oak <--> Coord
River <--> Coord
end

Server --> Swarm

style Server fill:#6366f1,color:#fff
style Debug fill:#64748b,color:#fff
style Coord fill:#ef4444,color:#fff
style Fox fill:#f97316,color:#fff
style Oak fill:#10b981,color:#fff
style River fill:#3b82f6,color:#fff
```

**Protocol (newline-delimited JSON over Unix socket):**
- **Requests:** Message, Cancel, Subscribe, ResumeSession, CycleModel, SetModel, CommShare, CommMessage, ...
- **Events:** TextDelta, ToolStart, ToolResult, TurnComplete, TokenUsage, Notification, SwarmStatus, ...
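As a rough illustration of the framing (not the real request/response types), each message is one JSON object terminated by a newline, so the reader can split frames on `'\n'`:

```rust
// Illustrative framing helpers; the field names here are made up,
// not the actual protocol.rs message schema.
fn encode_frame(kind: &str, payload: &str) -> String {
    // One JSON object per line; the trailing '\n' terminates the frame.
    format!("{{\"type\":\"{}\",\"payload\":\"{}\"}}\n", kind, payload)
}

fn split_frames(buf: &str) -> Vec<&str> {
    // A reader accumulates bytes and splits on '\n' to recover whole frames.
    buf.lines().filter(|l| !l.is_empty()).collect()
}
```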

### TUI Rendering

```mermaid
graph LR
Frame["render_frame()"]

Frame --> Layout["Layout Calculation
header · messages · input · status"]
Layout --> MD["Markdown Parsing
parse_markdown() → Vec<Block>"]
MD --> Syntax["Syntax Highlighting
50+ languages"]
Syntax --> Wrap["Text Wrapping
terminal width"]
Wrap --> Render["Render to Terminal
crossterm backend"]

style Frame fill:#ec4899,color:#fff
style Syntax fill:#8b5cf6,color:#fff
style Render fill:#10b981,color:#fff
```

**Rendering Performance:**

| Mode | Avg Frame Time | FPS | Memory |
|---|---|---|---|
| Idle (200 turns) | 0.68 ms | 1,475 | 18 MB |
| Streaming | 0.67 ms | 1,498 | 18 MB |

*Measured with 200 conversation turns, full markdown + syntax highlighting, 120×40 terminal.*

**Key UI Components:**
- **InfoWidget** - floating panel showing model, context usage, todos, session count
- **Session Picker** - interactive split-pane browser with conversation previews
- **Mermaid Diagrams** - rendered natively as inline images (Sixel/Kitty/iTerm2 protocols)
- **Visual Debug** - frame-by-frame capture for debugging rendering

### Session & Memory

```mermaid
graph TB
Agent["Agent"] --> Session["Session
session_abc123_fox"]
Agent --> Memory["Memory System"]
Agent --> Compaction["Compaction Manager"]

Session --> Storage["~/.jcode/sessions/
session_*.json"]

Memory --> Global["Global Memories
~/.jcode/memory/global.json"]
Memory --> Project["Project Memories
~/.jcode/memory/projects/{hash}.json"]

Compaction --> Summary["Background Summarization
When context hits 80% of limit"]
Compaction --> RAG["Full History Kept
for RAG search"]

style Agent fill:#8b5cf6,color:#fff
style Session fill:#3b82f6,color:#fff
style Memory fill:#10b981,color:#fff
style Compaction fill:#f97316,color:#fff
```

**Compaction:** When context approaches the token limit, older turns are summarized in the background while recent turns are kept verbatim. Full history is always available for RAG search.
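The 80% trigger mentioned in the diagram can be sketched as a pure predicate (the real `CompactionManager` logic is more involved):

```rust
// Illustrative sketch of the compaction trigger: summarize in the
// background once context usage crosses 80% of the model's limit.
// Integer math avoids floating-point comparison.
fn should_compact(tokens_used: u64, context_limit: u64) -> bool {
    tokens_used * 100 >= context_limit * 80
}
```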

**Memory Categories:** `Fact` · `Preference` · `Entity` · `Correction` - with semantic search, graph traversal, and automatic extraction at session end.
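The four categories map naturally onto an enum; this is an illustrative sketch, and the parsing helper is hypothetical rather than the real `memory.rs` API:

```rust
// Sketch of the memory taxonomy described above.
#[derive(Debug, PartialEq)]
enum MemoryCategory {
    Fact,
    Preference,
    Entity,
    Correction,
}

// Hypothetical helper: map a lowercase tag to its category.
fn parse_category(s: &str) -> Option<MemoryCategory> {
    match s {
        "fact" => Some(MemoryCategory::Fact),
        "preference" => Some(MemoryCategory::Preference),
        "entity" => Some(MemoryCategory::Entity),
        "correction" => Some(MemoryCategory::Correction),
        _ => None,
    }
}
```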

### MCP Integration

```mermaid
graph LR
Manager["MCP Manager"] --> Client1["MCP Client
JSON-RPC 2.0 / stdio"]
Manager --> Client2["MCP Client"]
Manager --> Client3["MCP Client"]

Client1 --> S1["playwright"]
Client2 --> S2["filesystem"]
Client3 --> S3["custom server"]

style Manager fill:#8b5cf6,color:#fff
style S1 fill:#3b82f6,color:#fff
style S2 fill:#10b981,color:#fff
style S3 fill:#64748b,color:#fff
```

Configure in `.claude/mcp.json` (project) or `~/.claude/mcp.json` (global):

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@anthropic/mcp-playwright"]
    }
  }
}
```

Tools are auto-registered as `mcp__servername__toolname` and available immediately.
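The `mcp__servername__toolname` convention is just string concatenation; a minimal sketch (the helper name is illustrative):

```rust
// Builds the namespaced id under which an MCP tool is registered,
// following the mcp__servername__toolname convention.
fn mcp_tool_id(server: &str, tool: &str) -> String {
    format!("mcp__{}__{}", server, tool)
}
```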

### Self-Dev Mode

```mermaid
graph TB
Stable["Stable Binary
(promoted)"]

Stable --> A["Session A
stable"]
Stable --> B["Session B
stable"]
Stable --> C["Session C
canary"]

C --> Reload["selfdev reload
Hot-restart with new binary"]
Reload -->|"restart"| Continue["Session resumes
with continuation context"]

style Stable fill:#10b981,color:#fff
style C fill:#f97316,color:#fff
style Continue fill:#10b981,color:#fff
```

jcode can develop itself - edit code, build, hot-reload, and test in-place. After reload, the session resumes with continuation context so work can continue immediately.

### Module Map

```mermaid
graph TB
main["main.rs"] --> tui["tui/"]
main --> server["server.rs"]
main --> agent["agent.rs"]

server --> protocol["protocol.rs"]
server --> bus["bus.rs"]

tui --> protocol
tui --> bus

agent --> session["session.rs"]
agent --> compaction["compaction.rs"]
agent --> provider["provider/"]
agent --> tools["tool/"]
agent --> mcp["mcp/"]

provider --> auth["auth/"]
tools --> memory["memory.rs"]
mcp --> skill["skill.rs"]
auth --> config["config.rs"]
config --> storage["storage.rs"]
storage --> id["id.rs"]

style main fill:#f97316,color:#fff
style agent fill:#8b5cf6,color:#fff
style tui fill:#ec4899,color:#fff
style server fill:#6366f1,color:#fff
style provider fill:#3b82f6,color:#fff
style tools fill:#10b981,color:#fff
```

**~92,000 lines of Rust** across 106 source files.

---

## OpenClaw Implementation — Ambient Mode

Ambient mode is jcode's always-on autonomous agent. When you're not actively coding, it runs in the background: gardening your memory graph, doing proactive work, and staying reachable via Telegram (iOS app planned).

Think of it like a brain consolidating memories during sleep: it merges duplicates, resolves contradictions, verifies stale facts against your codebase, and extracts missed context from crashed sessions.

**Key capabilities:**

- **Memory gardening** — consolidates duplicates, prunes dead memories, discovers new relationships, backfills embeddings
- **Proactive work** — analyzes recent sessions and git history to find useful tasks it can complete before you ask
- **Telegram integration** — sends status updates and accepts directives mid-cycle via bot replies
- **Self-scheduling** — the agent decides when to wake next, constrained by adaptive resource limits that never starve interactive sessions
- **Safety-first** — code changes go on worktree branches with permission requests; conservative by default

### Ambient Cycle Architecture

```mermaid
graph TB
subgraph "Scheduling Layer"
EV[Event Triggers
session close, crash, git push]
TM[Timer
agent-scheduled wake]
RC[Resource Calculator
adaptive interval]
SQ[(Scheduled Queue
persistent)]
end

subgraph "Ambient Agent"
QC[Check Queue]
SC[Scout
memories + sessions + git]
GD[Garden
consolidate + prune + verify]
WK[Work
proactive tasks]
SA[schedule_ambient tool
set next wake + context]
end

subgraph "Resource Awareness"
UH[Usage History
rolling window]
RL[Rate Limits
per provider]
AU[Ambient Usage
current window]
AC[Active Sessions
user activity]
end

EV -->|wake early| RC
TM -->|scheduled wake| RC
RC -->|"gate: safe to run?"| QC
SQ -->|pending items| QC
QC --> SC
SC --> GD
SC --> WK
SA -->|next wake + context| SQ
SA -->|proposed interval| RC

UH --> RC
RL --> RC
AU --> RC
AC --> RC

style EV fill:#fff3e0
style TM fill:#fff3e0
style RC fill:#ffcdd2
style SQ fill:#e3f2fd
style QC fill:#e8f5e9
style SC fill:#e8f5e9
style GD fill:#e8f5e9
style WK fill:#e8f5e9
```

### Two-Layer Memory Consolidation

Memory consolidation happens at two levels — fast inline checks during sessions, and deep graph-wide passes during ambient cycles:

```mermaid
graph LR
subgraph "Layer 1: Sidecar (every turn, fast)"
S1[Memory retrieved
for relevance check]
S2{New memory
similar to existing?}
S3[Reinforce existing
+ breadcrumb]
S4[Create new memory]
S5[Supersede if
contradicts]
end

subgraph "Layer 2: Ambient Garden (background, deep)"
A1[Full graph scan]
A2[Cross-session
dedup]
A3[Fact verification
against codebase]
A4[Retroactive
session extraction]
A5[Prune dead
memories]
A6[Relationship
discovery]
end

S1 --> S2
S2 -->|yes| S3
S2 -->|no| S4
S2 -->|contradicts| S5

A1 --> A2
A1 --> A3
A1 --> A4
A1 --> A5
A1 --> A6

style S1 fill:#e8f5e9
style S2 fill:#e8f5e9
style S3 fill:#e8f5e9
style S4 fill:#e8f5e9
style S5 fill:#e8f5e9
style A1 fill:#e3f2fd
style A2 fill:#e3f2fd
style A3 fill:#e3f2fd
style A4 fill:#e3f2fd
style A5 fill:#e3f2fd
style A6 fill:#e3f2fd
```

### Provider Selection & Scheduling

OpenClaw prefers subscription-based providers (OAuth) so ambient cycles never burn API credits silently:

```mermaid
graph TD
START[Ambient Mode Start] --> CHECK1{OpenAI OAuth
available?}
CHECK1 -->|yes| OAI[Use OpenAI
strongest available]
CHECK1 -->|no| CHECK1B{Copilot OAuth
available?}
CHECK1B -->|yes| COP[Use Copilot
strongest available]
CHECK1B -->|no| CHECK2{Anthropic OAuth
available?}
CHECK2 -->|yes| ANT[Use Anthropic
strongest available]
CHECK2 -->|no| CHECK3{API key or OpenRouter +
config opt-in?}
CHECK3 -->|yes| API[Use API/OpenRouter
with budget cap]
CHECK3 -->|no| DISABLED[Ambient mode disabled
no provider available]

style OAI fill:#e8f5e9
style COP fill:#e8eaf6
style ANT fill:#fff3e0
style API fill:#ffcdd2
style DISABLED fill:#f5f5f5
```

The system adapts scheduling based on rate limit headers, user activity, and budget:

| Condition | Behavior |
|-----------|----------|
| User is active | Pause or throttle heavily |
| User idle for hours | Run more frequently |
| Hit a rate limit | Exponential backoff |
| Approaching end of window with budget left | Squeeze in extra cycles |
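The table above can be sketched as an interval function; the base interval, throttle factors, and idle threshold here are illustrative assumptions, not the real scheduler's constants:

```rust
// Illustrative adaptive-interval sketch for ambient wake scheduling.
fn next_interval_secs(
    base: u64,
    user_active: bool,
    idle_hours: u64,
    rate_limited: bool,
    consecutive_limits: u32,
) -> u64 {
    if rate_limited {
        // Exponential backoff after rate-limit responses, capped at 2^6.
        return base * 2u64.pow(consecutive_limits.min(6));
    }
    if user_active {
        // Throttle heavily while interactive sessions are in use.
        return base * 8;
    }
    if idle_hours >= 2 {
        // Run more frequently once the user has been idle for hours.
        return base / 2;
    }
    base
}
```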

---

## Environment Variables

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | Direct API key (overrides OAuth) |
| `OPENROUTER_API_KEY` | OpenRouter API key |
| `JCODE_ANTHROPIC_MODEL` | Override default Claude model |
| `JCODE_OPENROUTER_MODEL` | Override default OpenRouter model |
| `JCODE_ANTHROPIC_DEBUG` | Log API request payloads |
| `JCODE_NO_TELEMETRY` | Disable anonymous usage telemetry |
| `DO_NOT_TRACK` | Disable telemetry (standard convention) |

---

## macOS Notes

jcode runs natively on macOS (Apple Silicon & Intel). Key differences:

- **Sockets** use `$TMPDIR` instead of `$XDG_RUNTIME_DIR` (override with `$JCODE_RUNTIME_DIR`)
- **Clipboard** uses `osascript` / `NSPasteboard` for image paste
- **Terminal spawning** auto-detects Kitty, WezTerm, Alacritty, iTerm2, Terminal.app
- **Mermaid diagrams** rendered via pure-Rust SVG with Core Text font discovery

---

## Testing

```bash
cargo test # All tests
cargo test --test e2e # End-to-end only
cargo run --bin jcode-harness # Tool harness (--include-network for web)
scripts/agent_trace.sh # Full agent smoke test
scripts/check_warning_budget.sh # Ensure warning count does not regress
scripts/security_preflight.sh # Secret scan + advisory checks (when available)
scripts/refactor_phase1_verify.sh # Refactor safety suite (check + security + tests + e2e)
```

---

## Safe Refactor Sessions

Use an isolated jcode home + socket while refactoring so your live sessions keep running untouched:

```bash
# Show resolved isolated paths
scripts/refactor_shadow.sh env

# Verify isolation and permissions
scripts/refactor_shadow.sh check

# Build debug binary for the refactor environment
scripts/refactor_shadow.sh build

# Start isolated server (new terminal)
scripts/refactor_shadow.sh serve

# Attach isolated client (another terminal)
scripts/refactor_shadow.sh run
```

Security notes:
- Refuses to run against production `~/.jcode`
- Uses a separate socket (`JCODE_REF_SOCKET`) from your normal server
- Creates isolated home with private permissions (`700`)
- Refuses to remove stale paths unless they are actual sockets

---

**Built with Rust** · **MIT License**

[GitHub](https://github.com/1jehuang/jcode) · [Report Bug](https://github.com/1jehuang/jcode/issues) · [Request Feature](https://github.com/1jehuang/jcode/issues)