{"id":47744016,"url":"https://github.com/saferl-lab/nano-claude-code","last_synced_at":"2026-04-06T03:01:05.806Z","repository":{"id":348547092,"uuid":"1198604352","full_name":"SafeRL-Lab/nano-claude-code","owner":"SafeRL-Lab","description":"Nano Claude Code: A Fast, Easy-to-Use Python Reimplementation of Claude Code Supporting Any Model","archived":false,"fork":false,"pushed_at":"2026-04-04T01:00:21.000Z","size":722,"stargazers_count":180,"open_issues_count":1,"forks_count":93,"subscribers_count":3,"default_branch":"main","last_synced_at":"2026-04-04T01:02:19.294Z","etag":null,"topics":["agentic-ai","claude","claude-code","memory","python","skills"],"latest_commit_sha":null,"homepage":"https://deepwiki.com/SafeRL-Lab/nano-claude-code","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SafeRL-Lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-01T15:23:36.000Z","updated_at":"2026-04-04T01:00:25.000Z","dependencies_parsed_at":"2026-04-04T01:00:25.407Z","dependency_job_id":null,"html_url":"https://github.com/SafeRL-Lab/nano-claude-code","commit_stats":null,"previous_names":["saferl-lab/nano-claude-code"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/SafeRL-Lab/nano-claude-code","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SafeRL-Lab%2Fnano-claude-code","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SafeRL-Lab%2Fnano-claude-code
/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SafeRL-Lab%2Fnano-claude-code/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SafeRL-Lab%2Fnano-claude-code/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SafeRL-Lab","download_url":"https://codeload.github.com/SafeRL-Lab/nano-claude-code/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SafeRL-Lab%2Fnano-claude-code/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31421869,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-05T00:25:07.052Z","status":"online","status_checked_at":"2026-04-05T02:00:05.211Z","response_time":75,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agentic-ai","claude","claude-code","memory","python","skills"],"created_at":"2026-04-03T00:20:41.979Z","updated_at":"2026-04-05T02:00:29.180Z","avatar_url":"https://github.com/SafeRL-Lab.png","language":"Python","readme":"\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://github.com/SafeRL-Lab/nano-claude-code\"\u003e\n    \u003cimg src=\"https://github.com/SafeRL-Lab/nano-claude-code/blob/main/docs/logo-v1.png\" alt=\"Logo\" width=\"280\"\u003e \n  \u003c/a\u003e\n\n  \n\u003ch1 align=\"center\" style=\"font-size: 
30px;\"\u003e\u003cstrong\u003e\u003cem\u003eNano Claude Code\u003c/em\u003e\u003c/strong\u003e: A Minimal Python Reimplementation\u003c/h1\u003e\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://github.com/chauncygu/collection-claude-code-source-code\"\u003eLatest Claude Code source\u003c/a\u003e\n    ·\n    \u003ca href=\"https://github.com/SafeRL-Lab/nano-claude-code/issues\"\u003eIssues\u003c/a\u003e\n  \u003c/p\u003e\n\u003c/div\u003e\n\n \u003cdiv align=center\u003e\n \u003cimg src=\"https://github.com/SafeRL-Lab/nano-claude-code/blob/main/docs/demo.gif\" width=\"850\"/\u003e \n \u003c/div\u003e\n\n---\n\n## 🔥🔥🔥 News (Pacific Time)\n- 12:20 PM, Apr 02, 2026: **v3.0** — Multi-agent package (`multi_agent/`), memory package (`memory/`), skill package (`skill/`) with built-in skills, argument substitution, fork/inline execution, AI memory search, git worktree isolation, agent type definitions (**~5000** lines of Python), see [update](https://github.com/SafeRL-Lab/nano-claude-code/blob/main/Update_README.MD).\n- 10:00 AM, Apr 02, 2026: **v2.0** — Context compression, memory, sub-agents, skills, diff view, tool plugin system (**~3400** lines of Python Code).\n- 01:47 PM, Apr 01, 2026: Support vLLM inference (**~2000** lines of Python Code).\n- 11:30 AM, Apr 01, 2026: Support more **closed-source** and **open-source** models: Claude, GPT, Gemini, Kimi, Qwen, Zhipu, DeepSeek, and local open-source models via Ollama or any OpenAI-compatible endpoint (**~1700** lines of Python Code).\n- 09:50 AM, Apr 01, 2026: Support more **closed-source** models: Claude, GPT, Gemini. 
(**~1300** lines of Python Code).\n- 08:23 AM, Apr 01, 2026: Release the initial version of Nano Claude Code (**~900 lines** of Python Code).\n\n---\n\n# Nano Claude Code\n\nA minimal Python implementation of Claude Code in ~900 lines (initial version), **supporting Claude, GPT, Gemini, Kimi, Qwen, Zhipu, DeepSeek, and local open-source models via Ollama or any OpenAI-compatible endpoint.**\n\n---\n\n## Contents\n  * [Features](#features)\n  * [Supported Models](#supported-models)\n  * [Installation](#installation)\n  * [Usage: Closed-Source API Models](#usage-closed-source-api-models)\n  * [Usage: Open-Source Models (Local)](#usage-open-source-models-local)\n  * [Model Name Format](#model-name-format)\n  * [CLI Reference](#cli-reference)\n  * [Slash Commands (REPL)](#slash-commands-repl)\n  * [Configuring API Keys](#configuring-api-keys)\n  * [Permission System](#permission-system)\n  * [Built-in Tools](#built-in-tools)\n  * [Memory](#memory)\n  * [Skills](#skills)\n  * [Sub-Agents](#sub-agents)\n  * [Context Compression](#context-compression)\n  * [Diff View](#diff-view)\n  * [CLAUDE.md Support](#claudemd-support)\n  * [Session Management](#session-management)\n  * [Project Structure](#project-structure)\n  * [FAQ](#faq)\n\n## Features\n\n| Feature | Details |\n|---|---|\n| Multi-provider | Anthropic · OpenAI · Gemini · Kimi · Qwen · Zhipu · DeepSeek · Ollama · LM Studio · Custom endpoint |\n| Interactive REPL | readline history, Tab-complete slash commands |\n| Agent loop | Streaming API + automatic tool-use loop |\n| 19 built-in tools | Read · Write · Edit · Bash · Glob · Grep · WebFetch · WebSearch · MemorySave · MemoryDelete · MemorySearch · MemoryList · Agent · SendMessage · CheckAgentResult · ListAgentTasks · ListAgentTypes · Skill · SkillList |\n| Diff view | Git-style red/green diff display for Edit and Write |\n| Context compression | Auto-compact long conversations to stay within model limits |\n| Persistent memory | Dual-scope memory (user + 
project) with 4 types, AI search, staleness warnings |\n| Multi-agent | Spawn typed sub-agents (coder/reviewer/researcher/…), git worktree isolation, background mode |\n| Skills | Built-in `/commit` · `/review` + custom markdown skills with argument substitution and fork/inline execution |\n| Plugin tools | Register custom tools via `tool_registry.py` |\n| Permission system | `auto` / `accept-all` / `manual` modes |\n| 17 slash commands | `/model` · `/config` · `/save` · `/cost` · `/memory` · `/skills` · `/agents` · … |\n| Context injection | Auto-loads `CLAUDE.md`, git status, cwd, persistent memory |\n| Session persistence | Save / load conversations to `~/.nano_claude/sessions/` |\n| Extended Thinking | Toggle on/off (Claude models only) |\n| Cost tracking | Token usage + estimated USD cost |\n| Non-interactive mode | `--print` flag for scripting / CI |\n\n---\n\n## Supported Models\n\n### Closed-Source (API)\n\n| Provider | Model | Context | Strengths | API Key Env |\n|---|---|---|---|---|\n| **Anthropic** | `claude-opus-4-6` | 200k | Most capable, best for complex reasoning | `ANTHROPIC_API_KEY` |\n| **Anthropic** | `claude-sonnet-4-6` | 200k | Balanced speed \u0026 quality | `ANTHROPIC_API_KEY` |\n| **Anthropic** | `claude-haiku-4-5-20251001` | 200k | Fast, cost-efficient | `ANTHROPIC_API_KEY` |\n| **OpenAI** | `gpt-4o` | 128k | Strong multimodal \u0026 coding | `OPENAI_API_KEY` |\n| **OpenAI** | `gpt-4o-mini` | 128k | Fast, cheap | `OPENAI_API_KEY` |\n| **OpenAI** | `o3-mini` | 200k | Strong reasoning | `OPENAI_API_KEY` |\n| **OpenAI** | `o1` | 200k | Advanced reasoning | `OPENAI_API_KEY` |\n| **Google** | `gemini-2.5-pro-preview-03-25` | 1M | Long context, multimodal | `GEMINI_API_KEY` |\n| **Google** | `gemini-2.0-flash` | 1M | Fast, large context | `GEMINI_API_KEY` |\n| **Google** | `gemini-1.5-pro` | 2M | Largest context window | `GEMINI_API_KEY` |\n| **Moonshot (Kimi)** | `moonshot-v1-8k` | 8k | Chinese \u0026 English | `MOONSHOT_API_KEY` |\n| 
**Moonshot (Kimi)** | `moonshot-v1-32k` | 32k | Chinese \u0026 English | `MOONSHOT_API_KEY` |\n| **Moonshot (Kimi)** | `moonshot-v1-128k` | 128k | Long context | `MOONSHOT_API_KEY` |\n| **Alibaba (Qwen)** | `qwen-max` | 32k | Best Qwen quality | `DASHSCOPE_API_KEY` |\n| **Alibaba (Qwen)** | `qwen-plus` | 128k | Balanced | `DASHSCOPE_API_KEY` |\n| **Alibaba (Qwen)** | `qwen-turbo` | 1M | Fast, cheap | `DASHSCOPE_API_KEY` |\n| **Alibaba (Qwen)** | `qwq-32b` | 32k | Strong reasoning | `DASHSCOPE_API_KEY` |\n| **Zhipu (GLM)** | `glm-4-plus` | 128k | Best GLM quality | `ZHIPU_API_KEY` |\n| **Zhipu (GLM)** | `glm-4` | 128k | General purpose | `ZHIPU_API_KEY` |\n| **Zhipu (GLM)** | `glm-4-flash` | 128k | Free tier available | `ZHIPU_API_KEY` |\n| **DeepSeek** | `deepseek-chat` | 64k | Strong coding | `DEEPSEEK_API_KEY` |\n| **DeepSeek** | `deepseek-reasoner` | 64k | Chain-of-thought reasoning | `DEEPSEEK_API_KEY` |\n\n### Open-Source (Local via Ollama)\n\n| Model | Size | Strengths | Pull Command |\n|---|---|---|---|\n| `llama3.3` | 70B | General purpose, strong reasoning | `ollama pull llama3.3` |\n| `llama3.2` | 3B / 11B | Lightweight | `ollama pull llama3.2` |\n| `qwen2.5-coder` | 7B / 32B | **Best for coding tasks** | `ollama pull qwen2.5-coder` |\n| `qwen2.5` | 7B / 72B | Chinese \u0026 English | `ollama pull qwen2.5` |\n| `deepseek-r1` | 7B–70B | Reasoning, math | `ollama pull deepseek-r1` |\n| `deepseek-coder-v2` | 16B | Coding | `ollama pull deepseek-coder-v2` |\n| `mistral` | 7B | Fast, efficient | `ollama pull mistral` |\n| `mixtral` | 8x7B | Strong MoE model | `ollama pull mixtral` |\n| `phi4` | 14B | Microsoft, strong reasoning | `ollama pull phi4` |\n| `gemma3` | 4B / 12B / 27B | Google open model | `ollama pull gemma3` |\n| `codellama` | 7B / 34B | Code generation | `ollama pull codellama` |\n\n\u003e **Note:** Tool calling requires a model that supports function calling. 
Recommended local models: `qwen2.5-coder`, `llama3.3`, `mistral`, `phi4`.\n\n---\n\n## Installation\n\n```bash\ngit clone \u003crepo-url\u003e\ncd nano_claude_code\n\npip install -r requirements.txt\n# or manually:\npip install anthropic openai httpx rich\n```\n\n---\n\n## Usage: Closed-Source API Models\n\n### Anthropic Claude\n\nGet your API key at [console.anthropic.com](https://console.anthropic.com).\n\n```bash\nexport ANTHROPIC_API_KEY=sk-ant-api03-...\n\n# Default model (claude-opus-4-6)\npython nano_claude.py\n\n# Choose a specific model\npython nano_claude.py --model claude-sonnet-4-6\npython nano_claude.py --model claude-haiku-4-5-20251001\n\n# Enable Extended Thinking\npython nano_claude.py --model claude-opus-4-6 --thinking --verbose\n```\n\n### OpenAI GPT\n\nGet your API key at [platform.openai.com](https://platform.openai.com).\n\n```bash\nexport OPENAI_API_KEY=sk-...\n\npython nano_claude.py --model gpt-4o\npython nano_claude.py --model gpt-4o-mini\npython nano_claude.py --model gpt-4.1-mini\npython nano_claude.py --model o3-mini\n```\n\n### Google Gemini\n\nGet your API key at [aistudio.google.com](https://aistudio.google.com).\n\n```bash\nexport GEMINI_API_KEY=AIza...\n\npython nano_claude.py --model gemini/gemini-2.0-flash\npython nano_claude.py --model gemini/gemini-1.5-pro\npython nano_claude.py --model gemini/gemini-2.5-pro-preview-03-25\n```\n\n### Kimi (Moonshot AI)\n\nGet your API key at [platform.moonshot.cn](https://platform.moonshot.cn).\n\n```bash\nexport MOONSHOT_API_KEY=sk-...\n\npython nano_claude.py --model kimi/moonshot-v1-32k\npython nano_claude.py --model kimi/moonshot-v1-128k\n```\n\n### Qwen (Alibaba DashScope)\n\nGet your API key at [dashscope.aliyun.com](https://dashscope.aliyun.com).\n\n```bash\nexport DASHSCOPE_API_KEY=sk-...\n\npython nano_claude.py --model qwen/Qwen3.5-Plus\npython nano_claude.py --model qwen/Qwen3-MAX\npython nano_claude.py --model qwen/Qwen3.5-Flash\n```\n\n### Zhipu GLM\n\nGet your API key at 
[open.bigmodel.cn](https://open.bigmodel.cn).\n\n```bash\nexport ZHIPU_API_KEY=...\n\npython nano_claude.py --model zhipu/glm-4-plus\npython nano_claude.py --model zhipu/glm-4-flash   # free tier\n```\n\n### DeepSeek\n\nGet your API key at [platform.deepseek.com](https://platform.deepseek.com).\n\n```bash\nexport DEEPSEEK_API_KEY=sk-...\n\npython nano_claude.py --model deepseek/deepseek-chat\npython nano_claude.py --model deepseek/deepseek-reasoner\n```\n\n---\n\n## Usage: Open-Source Models (Local)\n\n### Option A — Ollama (Recommended)\n\nOllama runs models locally with zero configuration. No API key required.\n\n**Step 1: Install Ollama**\n\n```bash\n# macOS / Linux\ncurl -fsSL https://ollama.com/install.sh | sh\n\n# Or download from https://ollama.com/download\n```\n\n**Step 2: Pull a model**\n\n```bash\n# Best for coding (recommended)\nollama pull qwen2.5-coder          # 4.7 GB (7B)\nollama pull qwen2.5-coder:32b      # 19 GB (32B)\n\n# General purpose\nollama pull llama3.3               # 42 GB (70B)\nollama pull llama3.2               # 2.0 GB (3B)\n\n# Reasoning\nollama pull deepseek-r1            # 4.7 GB (7B)\nollama pull deepseek-r1:32b        # 19 GB (32B)\n\n# Other\nollama pull phi4                   # 9.1 GB (14B)\nollama pull mistral                # 4.1 GB (7B)\n```\n\n**Step 3: Start Ollama server** (runs automatically on macOS; on Linux run manually)\n\n```bash\nollama serve     # starts on http://localhost:11434\n```\n\n**Step 4: Run nano claude**\n\n```bash\npython nano_claude.py --model ollama/qwen2.5-coder\npython nano_claude.py --model ollama/llama3.3\npython nano_claude.py --model ollama/deepseek-r1\n```\n\n**List your locally available models:**\n\n```bash\nollama list\n```\n\nThen use any model from the list:\n\n```bash\npython nano_claude.py --model ollama/\u003cmodel-name\u003e\n```\n\n---\n\n### Option B — LM Studio\n\nLM Studio provides a GUI to download and run models, with a built-in OpenAI-compatible server.\n\n**Step 1:** 
Download [LM Studio](https://lmstudio.ai) and install it.\n\n**Step 2:** Search and download a model inside LM Studio (GGUF format).\n\n**Step 3:** Go to the **Local Server** tab → click **Start Server** (default port: 1234).\n\n**Step 4:**\n\n```bash\npython nano_claude.py --model lmstudio/\u003cmodel-name\u003e\n# e.g.:\npython nano_claude.py --model lmstudio/phi-4-GGUF\npython nano_claude.py --model lmstudio/qwen2.5-coder-7b\n```\n\nThe model name should match what LM Studio shows in the server status bar.\n\n---\n\n### Option C — vLLM / Self-Hosted OpenAI-Compatible Server\n\nFor self-hosted inference servers (vLLM, TGI, llama.cpp server, etc.) that expose an OpenAI-compatible API:\n\n**Quick start:**\n\n**Step 1:** Start the vLLM server:\n\n```bash\nCUDA_VISIBLE_DEVICES=7 python -m vllm.entrypoints.openai.api_server \\\n    --model Qwen/Qwen2.5-Coder-7B-Instruct \\\n    --host 0.0.0.0 \\\n    --port 8000 \\\n    --enable-auto-tool-choice \\\n    --tool-call-parser hermes\n```\n\n**Step 2:** Start nano claude:\n\n```bash\nexport CUSTOM_BASE_URL=http://localhost:8000/v1\nexport CUSTOM_API_KEY=none\npython nano_claude.py --model custom/Qwen/Qwen2.5-Coder-7B-Instruct\n```\n\n```bash\n# Example: vLLM serving Qwen2.5-Coder-32B\npython -m vllm.entrypoints.openai.api_server \\\n    --model Qwen/Qwen2.5-Coder-32B-Instruct \\\n    --port 8000\n\n# Then run nano claude pointing to your server:\npython nano_claude.py\n```\n\nInside the REPL:\n\n```\n/config custom_base_url=http://localhost:8000/v1\n/config custom_api_key=token-abc123    # skip if no auth\n/model custom/Qwen2.5-Coder-32B-Instruct\n```\n\nOr set via environment:\n\n```bash\nexport CUSTOM_BASE_URL=http://localhost:8000/v1\nexport CUSTOM_API_KEY=token-abc123\n\npython nano_claude.py --model custom/Qwen2.5-Coder-32B-Instruct\n```\n\nFor a remote GPU server:\n\n```\n/config custom_base_url=http://192.168.1.100:8000/v1\n/model custom/your-model-name\n```\n\n---\n\n## Model Name Format\n\nThree equivalent formats 
are supported:\n\n```bash\n# 1. Auto-detect by prefix (works for well-known models)\npython nano_claude.py --model gpt-4o\npython nano_claude.py --model gemini-2.0-flash\npython nano_claude.py --model deepseek-chat\n\n# 2. Explicit provider prefix with slash\npython nano_claude.py --model ollama/qwen2.5-coder\npython nano_claude.py --model kimi/moonshot-v1-128k\n\n# 3. Explicit provider prefix with colon (also works)\npython nano_claude.py --model kimi:moonshot-v1-32k\npython nano_claude.py --model qwen:qwen-max\n```\n\n**Auto-detection rules:**\n\n| Model prefix | Detected provider |\n|---|---|\n| `claude-` | anthropic |\n| `gpt-`, `o1`, `o3` | openai |\n| `gemini-` | gemini |\n| `moonshot-`, `kimi-` | kimi |\n| `qwen`, `qwq-` | qwen |\n| `glm-` | zhipu |\n| `deepseek-` | deepseek |\n| `llama`, `mistral`, `phi`, `gemma`, `mixtral`, `codellama` | ollama |\n\n---\n\n## CLI Reference\n\n```\npython nano_claude.py [OPTIONS] [PROMPT]\n\nOptions:\n  -p, --print          Non-interactive: run prompt and exit\n  -m, --model MODEL    Override model (e.g. 
gpt-4o, ollama/llama3.3)\n  --accept-all         Auto-approve all operations (no permission prompts)\n  --verbose            Show thinking blocks and per-turn token counts\n  --thinking           Enable Extended Thinking (Claude only)\n  --version            Print version and exit\n  -h, --help           Show help\n```\n\n**Examples:**\n\n```bash\n# Interactive REPL with default model\npython nano_claude.py\n\n# Switch model at startup\npython nano_claude.py --model gpt-4o\npython nano_claude.py -m ollama/deepseek-r1:32b\n\n# Non-interactive / scripting\npython nano_claude.py --print \"Write a Python fibonacci function\"\npython nano_claude.py -p \"Explain the Rust borrow checker in 3 sentences\" -m gemini/gemini-2.0-flash\n\n# CI / automation (no permission prompts)\npython nano_claude.py --accept-all --print \"Initialize a Python project with pyproject.toml\"\n\n# Debug mode (see tokens + thinking)\npython nano_claude.py --thinking --verbose\n```\n\n---\n\n## Slash Commands (REPL)\n\nType `/` and press **Tab** to autocomplete.\n\n| Command | Description |\n|---|---|\n| `/help` | Show all commands |\n| `/clear` | Clear conversation history |\n| `/model` | Show current model + list all available models |\n| `/model \u003cname\u003e` | Switch model (takes effect immediately) |\n| `/config` | Show all current config values |\n| `/config key=value` | Set a config value (persisted to disk) |\n| `/save` | Save session (auto-named by timestamp) |\n| `/save \u003cfilename\u003e` | Save session to named file |\n| `/load` | List all saved sessions |\n| `/load \u003cfilename\u003e` | Load a saved session |\n| `/history` | Print full conversation history |\n| `/context` | Show message count and token estimate |\n| `/cost` | Show token usage and estimated USD cost |\n| `/verbose` | Toggle verbose mode (tokens + thinking) |\n| `/thinking` | Toggle Extended Thinking (Claude only) |\n| `/permissions` | Show current permission mode |\n| `/permissions \u003cmode\u003e` | Set 
permission mode: `auto` / `accept-all` / `manual` |\n| `/cwd` | Show current working directory |\n| `/cwd \u003cpath\u003e` | Change working directory |\n| `/memory` | List all persistent memories |\n| `/memory \u003cquery\u003e` | Search memories by keyword |\n| `/skills` | List available skills |\n| `/agents` | Show sub-agent task status |\n| `/exit` / `/quit` | Exit |\n\n**Switching models inside a session:**\n\n```\n[myproject] ❯ /model\n  Current model: claude-opus-4-6  (provider: anthropic)\n\n  Available models by provider:\n    anthropic     claude-opus-4-6, claude-sonnet-4-6, ...\n    openai        gpt-4o, gpt-4o-mini, o3-mini, ...\n    ollama        llama3.3, llama3.2, phi4, mistral, ...\n    ...\n\n[myproject] ❯ /model gpt-4o\n  Model set to gpt-4o  (provider: openai)\n\n[myproject] ❯ /model ollama/qwen2.5-coder\n  Model set to ollama/qwen2.5-coder  (provider: ollama)\n```\n\n---\n\n## Configuring API Keys\n\n### Method 1: Environment Variables (recommended)\n\n```bash\n# Add to ~/.bashrc or ~/.zshrc\nexport ANTHROPIC_API_KEY=sk-ant-...\nexport OPENAI_API_KEY=sk-...\nexport GEMINI_API_KEY=AIza...\nexport MOONSHOT_API_KEY=sk-...       # Kimi\nexport DASHSCOPE_API_KEY=sk-...      # Qwen\nexport ZHIPU_API_KEY=...             # Zhipu GLM\nexport DEEPSEEK_API_KEY=sk-...       
# DeepSeek\n```\n\n### Method 2: Set Inside the REPL (persisted)\n\n```\n/config anthropic_api_key=sk-ant-...\n/config openai_api_key=sk-...\n/config gemini_api_key=AIza...\n/config kimi_api_key=sk-...\n/config qwen_api_key=sk-...\n/config zhipu_api_key=...\n/config deepseek_api_key=sk-...\n```\n\nKeys are saved to `~/.nano_claude/config.json` and loaded automatically on next launch.\n\n### Method 3: Edit the Config File Directly\n\n```json\n// ~/.nano_claude/config.json\n{\n  \"model\": \"qwen/qwen-max\",\n  \"max_tokens\": 8192,\n  \"permission_mode\": \"auto\",\n  \"verbose\": false,\n  \"thinking\": false,\n  \"qwen_api_key\": \"sk-...\",\n  \"kimi_api_key\": \"sk-...\",\n  \"deepseek_api_key\": \"sk-...\"\n}\n```\n\n---\n\n## Permission System\n\n| Mode | Behavior |\n|---|---|\n| `auto` (default) | Read-only operations always allowed. Prompts before Bash commands and file writes. |\n| `accept-all` | Never prompts. All operations proceed automatically. |\n| `manual` | Prompts before every single operation, including reads. 
|\n\n**When prompted:**\n\n```\n  Allow: Run: git commit -am \"fix bug\"  [y/N/a(ccept-all)]\n```\n\n- `y` — approve this one action\n- `n` or Enter — deny\n- `a` — approve and switch to `accept-all` for the rest of the session\n\n**Commands always auto-approved in `auto` mode:**\n`ls`, `cat`, `head`, `tail`, `wc`, `pwd`, `echo`, `git status`, `git log`, `git diff`, `git show`, `find`, `grep`, `rg`, `python`, `node`, `pip show`, `npm list`, and other read-only shell commands.\n\n---\n\n## Built-in Tools\n\n### Core Tools\n\n| Tool | Description | Key Parameters |\n|---|---|---|\n| `Read` | Read file with line numbers | `file_path`, `limit`, `offset` |\n| `Write` | Create or overwrite file (shows diff) | `file_path`, `content` |\n| `Edit` | Exact string replacement (shows diff) | `file_path`, `old_string`, `new_string`, `replace_all` |\n| `Bash` | Execute shell command | `command`, `timeout` (default 30s) |\n| `Glob` | Find files by glob pattern | `pattern` (e.g. `**/*.py`), `path` |\n| `Grep` | Regex search in files (uses ripgrep if available) | `pattern`, `path`, `glob`, `output_mode` |\n| `WebFetch` | Fetch and extract text from URL | `url`, `prompt` |\n| `WebSearch` | Search the web via DuckDuckGo | `query` |\n\n### Memory Tools\n\n| Tool | Description | Key Parameters |\n|---|---|---|\n| `MemorySave` | Save or update a persistent memory | `name`, `type`, `description`, `content`, `scope` |\n| `MemoryDelete` | Delete a memory by name | `name`, `scope` |\n| `MemorySearch` | Search memories by keyword (or AI ranking) | `query`, `scope`, `use_ai`, `max_results` |\n| `MemoryList` | List all memories with age and metadata | `scope` |\n\n### Sub-Agent Tools\n\n| Tool | Description | Key Parameters |\n|---|---|---|\n| `Agent` | Spawn a sub-agent for a task | `prompt`, `subagent_type`, `isolation`, `name`, `model`, `wait` |\n| `SendMessage` | Send a message to a named background agent | `name`, `message` |\n| `CheckAgentResult` | Check status/result of a background 
agent | `task_id` |\n| `ListAgentTasks` | List all active and finished agent tasks | — |\n| `ListAgentTypes` | List available agent type definitions | — |\n\n### Skill Tools\n\n| Tool | Description | Key Parameters |\n|---|---|---|\n| `Skill` | Invoke a skill by name from within the conversation | `name`, `args` |\n| `SkillList` | List all available skills with triggers and metadata | — |\n\n\u003e **Adding custom tools:** See [Architecture Guide](docs/architecture.md#tool-registry) for how to register your own tools.\n\n---\n\n## Memory\n\nThe model can remember things across conversations using the built-in memory system.\n\n**How it works:** Memories are stored as markdown files. There are two scopes:\n- **User scope** (`~/.nano_claude/memory/`) — follows you across all projects\n- **Project scope** (`.nano_claude/memory/` in cwd) — specific to the current repo\n\nA `MEMORY.md` index (≤ 200 lines / 25 KB) is auto-rebuilt on every save or delete and injected into the system prompt so Claude always has an overview.\n\n**Memory types:**\n\n| Type | Use for |\n|---|---|\n| `user` | Your role, preferences, background |\n| `feedback` | How you want the model to behave |\n| `project` | Ongoing work, deadlines, decisions |\n| `reference` | Links to external resources |\n\n**Memory file format** (`~/.nano_claude/memory/coding_style.md`):\n```markdown\n---\nname: coding style\ndescription: Python formatting preferences\ntype: feedback\ncreated: 2026-04-02\n---\nPrefer 4-space indentation and full type hints in all Python code.\n**Why:** user explicitly stated this preference.\n**How to apply:** apply to every Python file written or edited.\n```\n\n**Example interaction:**\n\n```\nYou: Remember that I prefer 4-space indentation and type hints in all Python code.\nAI: [calls MemorySave] Memory saved: coding_style [feedback/user]\n\nYou: /memory\n  [feedback/user] coding_style (today): Python formatting preferences\n\nYou: /memory python\n  [feedback/user] coding_style: 
Prefers 4-space indent and type hints in Python\n```\n\n**Staleness warnings:** Memories older than 1 day get a freshness note in `/memory` output so you know when to review or update them.\n\n**AI-ranked search:** `MemorySearch(query=\"...\", use_ai=true)` uses the model to rank results by relevance rather than simple keyword matching.\n\n---\n\n## Skills\n\nSkills are reusable prompt templates that give the model specialized capabilities. Two built-in skills ship out of the box — no setup required.\n\n**Built-in skills:**\n\n| Trigger | Description |\n|---|---|\n| `/commit` | Review staged changes and create a well-structured git commit |\n| `/review [PR]` | Review code or PR diff with structured feedback |\n\n**Quick start — custom skill:**\n\n```bash\nmkdir -p ~/.nano_claude/skills\n```\n\nCreate `~/.nano_claude/skills/deploy.md`:\n\n```markdown\n---\nname: deploy\ndescription: Deploy to an environment\ntriggers: [/deploy]\nallowed-tools: [Bash, Read]\nwhen_to_use: Use when the user wants to deploy a version to an environment.\nargument-hint: [env] [version]\narguments: [env, version]\ncontext: inline\n---\n\nDeploy $VERSION to the $ENV environment.\nFull args: $ARGUMENTS\n```\n\nNow use it:\n\n```\nYou: /deploy staging 2.1.0\nAI: [deploys version 2.1.0 to staging]\n```\n\n**Argument substitution:**\n- `$ARGUMENTS` — the full raw argument string\n- `$ARG_NAME` — positional substitution by named argument (first word → first name)\n- Missing args become empty strings\n\n**Execution modes:**\n- `context: inline` (default) — runs inside current conversation history\n- `context: fork` — runs as an isolated sub-agent with fresh history; supports `model` override\n\n**Priority** (highest wins): project-level \u003e user-level \u003e built-in\n\n**List skills:** `/skills` — shows triggers, argument hint, source, and `when_to_use`\n\n**Skill search paths:**\n\n```\n./.nano_claude/skills/     # project-level (overrides user-level)\n~/.nano_claude/skills/     # 
user-level
```

---

## Sub-Agents

The model can spawn independent sub-agents to handle tasks in parallel.

**Specialized agent types** — built-in:

| Type | Optimized for |
|---|---|
| `general-purpose` | Research, exploration, multi-step tasks |
| `coder` | Writing, reading, and modifying code |
| `reviewer` | Security, correctness, and code quality analysis |
| `researcher` | Web search and documentation lookup |
| `tester` | Writing and running tests |

**Basic usage:**
```
You: Search this codebase for all TODO comments and summarize them.
AI: [calls Agent(prompt="...", subagent_type="researcher")]
    Sub-agent reads files, greps for TODOs...
    Result: Found 12 TODOs across 5 files...
```

**Background mode** — spawn without waiting, collect the result later:
```
AI: [calls Agent(prompt="run all tests", name="test-runner", wait=false)]
AI: [continues other work...]
AI: [calls CheckAgentResult / SendMessage to follow up]
```

**Git worktree isolation** — agents work on an isolated branch with no conflicts:
```
Agent(prompt="refactor auth module", isolation="worktree")
```
The worktree is automatically cleaned up if no changes were made; otherwise the branch name is reported.

**Custom agent types** — create `~/.nano_claude/agents/myagent.md`:
```markdown
---
name: myagent
description: Specialized for X
model: claude-haiku-4-5-20251001
tools: [Read, Grep, Bash]
---
Extra system prompt for this agent type.
```

**List running agents:** `/agents`

Sub-agents have independent conversation history, share the file system, and are limited to 3 levels of nesting.

---

## Context Compression

Long conversations are automatically compressed to stay within the model's context window.

**Two layers:**

1. **Snip** — Old tool outputs (file reads, bash results) are truncated after a few turns. Fast, no API cost.
2. **Auto-compact** — When token usage exceeds 70% of the context limit, older messages are summarized by the model into a concise recap.

This happens transparently. You don't need to do anything.

---

## Diff View

When the model edits or overwrites a file, you see a git-style diff:

```diff
  Changes applied to config.py:

--- a/config.py
+++ b/config.py
@@ -12,7 +12,7 @@
     "model": "claude-opus-4-6",
-    "max_tokens": 8192,
+    "max_tokens": 16384,
     "permission_mode": "auto",
```

Green lines = added, red lines = removed. New file creations show a summary instead.

---

## CLAUDE.md Support

Place a `CLAUDE.md` file in your project to give the model persistent context about your codebase. Nano Claude automatically finds and injects it into the system prompt.

```
~/.claude/CLAUDE.md          # Global — applies to all projects
/your/project/CLAUDE.md      # Project-level — found by walking up from cwd
```

**Example `CLAUDE.md`:**

```markdown
# Project: FastAPI Backend

## Stack
- Python 3.12, FastAPI, PostgreSQL, SQLAlchemy 2.0, Alembic
- Tests: pytest, coverage target 90%

## Conventions
- Format with black, lint with ruff
- Full type annotations required
- New endpoints must have corresponding tests

## Important Notes
- Never hard-code credentials — use environment variables
- Do not modify existing Alembic migration files
- The `staging` branch deploys automatically to staging on push
```

---

## Session Management

```bash
# Inside REPL:
/save                          # auto-name: session_20260401_143022.json
/save debug_auth_bug           # named save

/load                          # list all saved sessions
/load debug_auth_bug           # resume a session
/load session_20260401_143022.json
```

Sessions are stored as JSON in `~/.nano_claude/sessions/`.

---

## Project Structure

```
nano_claude_code/
├── nano_claude.py        # Entry point: REPL + slash commands + diff rendering
├── agent.py              # Agent loop: streaming, tool dispatch, compaction
├── providers.py          # Multi-provider: Anthropic, OpenAI-compat streaming
├── tools.py              # Core tools (Read/Write/Edit/Bash/Glob/Grep/Web) + registry wiring
├── tool_registry.py      # Tool plugin registry: register, lookup, execute
├── compaction.py         # Context compression: snip + auto-summarize
├── context.py            # System prompt builder: CLAUDE.md + git + memory
├── config.py             # Config load/save/defaults
│
├── multi_agent/          # Multi-agent package
│   ├── __init__.py       # Re-exports
│   ├── subagent.py       # AgentDefinition, SubAgentManager, worktree helpers
│   └── tools.py          # Agent, SendMessage, CheckAgentResult, ListAgentTasks, ListAgentTypes
├── subagent.py           # Backward-compat shim → multi_agent/
│
├── memory/               # Memory package
│   ├── __init__.py       # Re-exports
│   ├── types.py          # MEMORY_TYPES and format guidance
│   ├── store.py          # save/load/delete/search, MEMORY.md index rebuilding
│   ├── scan.py           # MemoryHeader, age/freshness helpers
│   ├── context.py        # get_memory_context(), truncation, AI search
│   └── tools.py          # MemorySave, MemoryDelete, MemorySearch, MemoryList
├── memory.py             # Backward-compat shim → memory/
│
├── skill/                # Skill package
│   ├── __init__.py       # Re-exports; imports builtin to register built-ins
│   ├── loader.py         # SkillDef, parse, load_skills, find_skill, substitute_arguments
│   ├── builtin.py        # Built-in skills: /commit, /review
│   ├── executor.py       # execute_skill(): inline or forked sub-agent
│   └── tools.py          # Skill, SkillList
├── skills.py             # Backward-compat shim → skill/
│
└── tests/                # 101 unit tests
    ├── test_memory.py
    ├── test_skills.py
    ├── test_subagent.py
    ├── test_tool_registry.py
    ├── test_compaction.py
    └── test_diff_view.py
```

> **For developers:** Each feature package (`multi_agent/`, `memory/`, `skill/`) is self-contained. Add custom tools by calling `register_tool(ToolDef(...))` from any module imported by `tools.py`.

---

## FAQ

**Q: Tool calls don't work with my local Ollama model.**

Not all models support function calling. Use one of the recommended tool-calling models: `qwen2.5-coder`, `llama3.3`, `mistral`, or `phi4`.

```bash
ollama pull qwen2.5-coder
python nano_claude.py --model ollama/qwen2.5-coder
```

**Q: How do I connect to a remote GPU server running vLLM?**

```
/config custom_base_url=http://your-server-ip:8000/v1
/config custom_api_key=your-token
/model custom/your-model-name
```

**Q: How do I check my API cost?**

```
/cost

  Input tokens:  3,421
  Output tokens:   892
  Est. cost:     $0.0648 USD
```

**Q: Can I use multiple API keys in the same session?**

Yes. Set all the keys you need upfront (via env vars or `/config`). Then switch models freely — each call uses the key for the active provider.

**Q: How do I make a model available across all projects?**

Add keys to `~/.bashrc` or `~/.zshrc`. Set the default model in `~/.nano_claude/config.json`:

```json
{ "model": "claude-sonnet-4-6" }
```

**Q: Qwen / Zhipu returns garbled text.**

Ensure your `DASHSCOPE_API_KEY` / `ZHIPU_API_KEY` is correct and the account has sufficient quota. Both providers use UTF-8 and handle Chinese well.

**Q: Can I pipe input to Nano Claude?**

```bash
echo "Explain this file" | python nano_claude.py --print --accept-all
cat error.log | python nano_claude.py -p "What is causing this error?"
```

**Q: How do I run it as a CLI tool from anywhere?**

```bash
# Add an alias to ~/.bashrc or ~/.zshrc
alias nc='python /path/to/nano_claude_code/nano_claude.py'

# Or install as a script
pip install -e .   # if setup.py exists
```