{"id":48218564,"url":"https://github.com/codeany-ai/open-agent-sdk-python","last_synced_at":"2026-04-05T20:00:32.106Z","repository":{"id":348630682,"uuid":"1198713160","full_name":"codeany-ai/open-agent-sdk-python","owner":"codeany-ai","description":"Open-source Agent SDK for Python. Runs the full agent loop in-process — no CLI required.","archived":false,"fork":false,"pushed_at":"2026-04-03T16:01:26.000Z","size":340,"stargazers_count":14,"open_issues_count":0,"forks_count":10,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-04T19:57:38.642Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/codeany-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-01T17:27:39.000Z","updated_at":"2026-04-03T16:01:31.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/codeany-ai/open-agent-sdk-python","commit_stats":null,"previous_names":["codeany-ai/open-agent-sdk-python"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/codeany-ai/open-agent-sdk-python","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codeany-ai%2Fopen-agent-sdk-python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codeany-ai%2Fopen-agent-sdk-python/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codeany-ai%2Fopen-agen
t-sdk-python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codeany-ai%2Fopen-agent-sdk-python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/codeany-ai","download_url":"https://codeload.github.com/codeany-ai/open-agent-sdk-python/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codeany-ai%2Fopen-agent-sdk-python/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31448216,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-05T15:22:31.103Z","status":"ssl_error","status_checked_at":"2026-04-05T15:22:00.205Z","response_time":75,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-04T19:03:07.381Z","updated_at":"2026-04-05T20:00:32.098Z","avatar_url":"https://github.com/codeany-ai.png","language":"Python","readme":"# Open Agent SDK (Python)\n\n[![PyPI version](https://img.shields.io/pypi/v/open-agent-sdk)](https://pypi.org/project/open-agent-sdk/)\n[![Python](https://img.shields.io/badge/python-%3E%3D3.10-brightgreen)](https://python.org)\n[![License: MIT](https://img.shields.io/badge/license-MIT-blue)](./LICENSE)\n\nOpen-source Agent SDK that runs the full agent loop **in-process** — no subprocess or CLI required. 
Deploy anywhere: cloud, serverless, Docker, CI/CD.

Also available in **TypeScript**: [open-agent-sdk-typescript](https://github.com/codeany-ai/open-agent-sdk-typescript) · **Go**: [open-agent-sdk-go](https://github.com/codeany-ai/open-agent-sdk-go)

## Features

- **Multi-Provider** — Anthropic + OpenAI-compatible APIs (DeepSeek, Qwen, vLLM, Ollama) via a unified provider abstraction
- **Agent Loop** — Streaming agentic loop with tool execution, multi-turn conversations, and cost tracking
- **35 Built-in Tools** — Bash, Read, Write, Edit, Glob, Grep, WebFetch, WebSearch, Agent (subagents), Skill, and more
- **Skill System** — Reusable prompt templates with 5 bundled skills (commit, review, debug, simplify, test)
- **MCP Support** — Connect to MCP servers via stdio, HTTP, and SSE transports
- **Permission System** — Configurable tool approval with allow/deny rules and custom callbacks
- **Hook System** — 20 lifecycle events for intercepting agent behavior
- **Session Persistence** — Save/load/fork conversation sessions
- **Custom Tools** — Define tools with Pydantic models or raw JSON schemas
- **Extended Thinking** — Claude thinking budget configuration
- **Cost Tracking** — Per-model token usage with accurate pricing (Anthropic + OpenAI + DeepSeek + Qwen)

## Get started

```bash
pip install open-agent-sdk
```

Set your API key:

```bash
export CODEANY_API_KEY=your-api-key
```

Third-party providers (e.g. OpenRouter) are supported via `CODEANY_BASE_URL`:

```bash
export CODEANY_BASE_URL=https://openrouter.ai/api
export CODEANY_API_KEY=sk-or-...
export CODEANY_MODEL=anthropic/claude-sonnet-4
```

## Quick start

### One-shot query (streaming)

```python
import asyncio
from open_agent_sdk import query, AgentOptions, SDKMessageType

async def main():
    async for message in query(
        prompt="Read pyproject.toml and tell me the project name.",
        options=AgentOptions(
            allowed_tools=["Read", "Glob"],
            permission_mode="bypassPermissions",
        ),
    ):
        if message.type == SDKMessageType.ASSISTANT:
            print(message.text)

asyncio.run(main())
```

### Simple blocking prompt

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions

async def main():
    agent = create_agent(AgentOptions(model="claude-sonnet-4-5"))
    result = await agent.prompt("What files are in this project?")

    print(result.text)
    print(f"Turns: {result.num_turns}, Tokens: {result.usage.input_tokens + result.usage.output_tokens}")
    await agent.close()

asyncio.run(main())
```

### Multi-turn conversation

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions

async def main():
    agent = create_agent(AgentOptions(max_turns=5))

    r1 = await agent.prompt('Create a file /tmp/hello.txt with "Hello World"')
    print(r1.text)

    r2 = await agent.prompt("Read back the file you just created")
    print(r2.text)

    print(f"Session messages: {len(agent.get_messages())}")
    await agent.close()

asyncio.run(main())
```

### OpenAI-compatible models

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions

async def main():
    # Auto-detects openai-completions from the model prefix
    agent = create_agent(AgentOptions(
        model="gpt-4o",
        api_key="sk-...",
    ))
    print(f"API type: {agent.get_api_type()}")  # openai-completions

    result = await agent.prompt("What is 2+2?")
    print(result.text)
    await agent.close()

    # DeepSeek, Qwen, etc.
    agent2 = create_agent(AgentOptions(
        model="deepseek-chat",
        api_key="sk-...",
        base_url="https://api.deepseek.com/v1",
    ))

    # Or an explicit api_type
    agent3 = create_agent(AgentOptions(
        api_type="openai-completions",
        model="my-custom-model",
        base_url="http://localhost:8000/v1",
    ))

asyncio.run(main())
```

### Custom tools (Pydantic schema)

```python
import asyncio
from pydantic import BaseModel
from open_agent_sdk import query, create_sdk_mcp_server, AgentOptions, SDKMessageType
from open_agent_sdk.tool_helper import tool, CallToolResult

class CityInput(BaseModel):
    city: str

async def get_weather_handler(input: CityInput, ctx):
    return CallToolResult(
        content=[{"type": "text", "text": f"{input.city}: 22°C, sunny"}]
    )

get_weather = tool("get_weather", "Get the temperature for a city", CityInput, get_weather_handler)
server = create_sdk_mcp_server("weather", tools=[get_weather])

async def main():
    async for msg in query(
        prompt="What is the weather in Tokyo?",
        options=AgentOptions(mcp_servers={"weather": server}),
    ):
        if msg.type == SDKMessageType.RESULT:
            print(f"Done: ${msg.total_cost:.4f}")

asyncio.run(main())
```

### Custom tools (low-level)

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions
from open_agent_sdk.tool_helper import define_tool
from open_agent_sdk.types import ToolResult, ToolContext

async def calc_handler(input: dict, ctx: ToolContext) -> ToolResult:
    # Note: eval is used here for brevity; never expose it to untrusted input.
    result = eval(input["expression"], {"__builtins__": {}})
    return ToolResult(tool_use_id="", content=f"{input['expression']} = {result}")

calculator = define_tool(
    name="Calculator",
    description="Evaluate a math expression",
    input_schema={
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
    handler=calc_handler,
    read_only=True,
)

async def main():
    agent = create_agent(AgentOptions(tools=[calculator]))
    r = await agent.prompt("Calculate 2**10 * 3")
    print(r.text)
    await agent.close()

asyncio.run(main())
```

### Skills

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions, SDKMessageType
from open_agent_sdk.skills import register_skill, get_all_skills, init_bundled_skills, SkillDefinition
from open_agent_sdk.types import ToolContext

async def main():
    # Initialize the 5 bundled skills: commit, review, debug, simplify, test
    init_bundled_skills()
    print(f"Skills: {[s.name for s in get_all_skills()]}")

    # Register a custom skill
    async def explain_prompt(args, ctx):
        return [{"type": "text", "text": f"Explain simply: {args}"}]

    register_skill(SkillDefinition(
        name="explain", description="Explain a concept simply",
        aliases=["eli5"], user_invocable=True, get_prompt=explain_prompt,
    ))

    # The agent can invoke skills via the Skill tool
    agent = create_agent(AgentOptions(max_turns=5))
    result = await agent.prompt('Use the "explain" skill to explain git rebase')
    print(result.text)
    await agent.close()

asyncio.run(main())
```

### MCP server integration

```python
import asyncio
from open_agent_sdk import create_agent, AgentOptions, McpStdioConfig

async def main():
    agent = create_agent(AgentOptions(
        mcp_servers={
            "filesystem": McpStdioConfig(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
            ),
        },
    ))

    result = await agent.prompt("List files in /tmp")
    print(result.text)
    await agent.close()

asyncio.run(main())
```

### Subagents

```python
import asyncio
from open_agent_sdk import query, AgentOptions, AgentDefinition, SDKMessageType

async def main():
    async for msg in query(
        prompt="Use the code-reviewer agent to review src/",
        options=AgentOptions(
            agents={
                "code-reviewer": AgentDefinition(
                    description="Expert code reviewer",
                    prompt="Analyze code quality. Focus on security and performance.",
                    tools=["Read", "Glob", "Grep"],
                ),
            },
        ),
    ):
        if msg.type == SDKMessageType.RESULT:
            print("Done")

asyncio.run(main())
```

### Permissions

```python
import asyncio
from open_agent_sdk import query, AgentOptions, SDKMessageType

async def main():
    # Read-only agent — can only analyze, not modify
    async for msg in query(
        prompt="Review the code in src/ for best practices.",
        options=AgentOptions(
            allowed_tools=["Read", "Glob", "Grep"],
            permission_mode="dontAsk",
        ),
    ):
        pass

asyncio.run(main())
```
### Web UI

A built-in web chat interface is included for testing:

```bash
python examples/web/server.py
# Open http://localhost:8083
```

## API reference

### Top-level functions

| Function                             | Description                                        |
| ------------------------------------ | -------------------------------------------------- |
| `query(prompt, options)`             | One-shot streaming query, returns `AsyncGenerator` |
| `create_agent(options)`              | Create a reusable agent with session persistence   |
| `tool(name, desc, model, handler)`   | Create a tool with Pydantic schema validation      |
| `define_tool(name, ...)`             | Low-level tool definition helper                   |
| `create_sdk_mcp_server(name, tools)` | Bundle tools into an in-process MCP server         |
| `create_provider(api_type, ...)`     | Create an LLM provider (Anthropic or OpenAI)       |
| `get_all_base_tools()`               | Get all 35 built-in tools                          |
| `register_skill(definition)`         | Register a custom skill                            |
| `get_all_skills()`                   | List all registered skills                         |
| `init_bundled_skills()`              | Initialize the 5 bundled skills                    |
| `list_sessions()`                    | List persisted sessions                            |
| `get_session_messages(id)`           | Retrieve messages from a session                   |
| `fork_session(id)`                   | Fork a session for branching                       |
### Agent methods

| Method                                  | Description                                                         |
| --------------------------------------- | ------------------------------------------------------------------- |
| `await agent.query(prompt)`             | Streaming query, returns `AsyncGenerator[SDKMessage]`               |
| `await agent.prompt(text)`              | Blocking query, returns `QueryResult`                               |
| `agent.get_messages()`                  | Get the conversation history                                        |
| `agent.get_api_type()`                  | Get the resolved API type (`anthropic-messages` / `openai-completions`) |
| `agent.clear()`                         | Reset the session                                                   |
| `await agent.interrupt()`               | Abort the current query                                             |
| `await agent.set_model(model)`          | Change model mid-session                                            |
| `await agent.set_permission_mode(mode)` | Change permission mode                                              |
| `await agent.close()`                   | Close MCP connections and persist the session                       |

### Options (`AgentOptions`)

| Option                 | Type                         | Default             | Description                                                             |
| ---------------------- | ---------------------------- | ------------------- | ----------------------------------------------------------------------- |
| `model`                | `str`                        | `claude-sonnet-4-5` | LLM model ID (or set the `CODEANY_MODEL` env var)                       |
| `api_type`             | `str`                        | auto                | `anthropic-messages` or `openai-completions` (auto-detected from model) |
| `api_key`              | `str`                        | `CODEANY_API_KEY`   | API key                                                                 |
| `base_url`             | `str`                        | —                   | Custom API endpoint                                                     |
| `cwd`                  | `str`                        | `os.getcwd()`       | Working directory                                                       |
| `system_prompt`        | `str`                        | —                   | System prompt override                                                  |
| `append_system_prompt` | `str`                        | —                   | Append to the default system prompt                                     |
| `tools`                | `list[BaseTool]`             | All built-in        | Additional custom tools                                                 |
| `allowed_tools`        | `list[str]`                  | —                   | Tool allow-list                                                         |
| `disallowed_tools`     | `list[str]`                  | —                   | Tool deny-list                                                          |
| `permission_mode`      | `PermissionMode`             | `bypassPermissions` | `default` / `acceptEdits` / `dontAsk` / `bypassPermissions` / `plan`    |
| `can_use_tool`         | `CanUseToolFn`               | —                   | Custom permission callback                                              |
| `max_turns`            | `int`                        | `10`                | Max agentic turns                                                       |
| `max_budget_usd`       | `float`                      | —                   | Spending cap                                                            |
| `max_tokens`           | `int`                        | `16000`             | Max output tokens                                                       |
| `thinking`             | `ThinkingConfig`             | —                   | Extended thinking                                                       |
| `mcp_servers`          | `dict[str, McpServerConfig]` | —                   | MCP server connections                                                  |
| `agents`               | `dict[str, AgentDefinition]` | —                   | Subagent definitions                                                    |
| `hooks`                | `dict[str, list[dict]]`      | —                   | Lifecycle hooks                                                         |
| `resume`               | `str`                        | —                   | Resume a session by ID                                                  |
| `continue_session`     | `bool`                       | `False`             | Continue the most recent session                                        |
| `persist_session`      | `bool`                       | `False`             | Persist the session to disk                                             |
| `session_id`           | `str`                        | auto                | Explicit session ID                                                     |
| `json_schema`          | `dict`                       | —                   | Structured output                                                       |
| `sandbox`              | `bool`                       | `False`             | Filesystem/network sandbox                                              |
| `env`                  | `dict[str, str]`             | —                   | Environment variables                                                   |
| `debug`                | `bool`                       | `False`             | Enable debug output                                                     |
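The `hooks` option above is a `dict[str, list[dict]]` keyed by lifecycle event. Only the event names (PreToolUse, PostToolUse, SessionStart, ...) appear in this README; the inner dict shape and the callback signature in this sketch are assumptions:

```python
import asyncio

def audit_line(event: str, tool_name: str) -> str:
    # Pure helper: render one audit-log line per lifecycle event.
    return f"[{event}] tool={tool_name}"

async def log_pre_tool_use(payload: dict, context=None) -> dict:
    # Hypothetical hook callback: log the tool about to run, then
    # return an empty dict to let the call proceed unmodified.
    print(audit_line("PreToolUse", payload.get("tool_name", "?")))
    return {}

async def main():
    # Imported lazily so audit_line() is testable without the SDK.
    from open_agent_sdk import create_agent, AgentOptions

    agent = create_agent(AgentOptions(
        # The {"hooks": [...]} wrapper is an assumed config shape.
        hooks={"PreToolUse": [{"hooks": [log_pre_tool_use]}]},
    ))
    await agent.prompt("List files in the current directory")
    await agent.close()

if __name__ == "__main__":
    asyncio.run(main())
```

See `examples/13_hooks.py` for the repository's own hook configuration example.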
### Environment variables

| Variable           | Description                                  |
| ------------------ | -------------------------------------------- |
| `CODEANY_API_KEY`  | API key (required)                           |
| `CODEANY_MODEL`    | Default model override                       |
| `CODEANY_BASE_URL` | Custom API endpoint                          |
| `CODEANY_API_TYPE` | `anthropic-messages` or `openai-completions` |
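The `json_schema` option in the options table above requests structured output. The schema below is an ordinary JSON Schema fragment; that the structured answer arrives as JSON text in `result.text` is an assumption in this sketch:

```python
import asyncio
import json

# Plain JSON Schema describing the shape of the answer we want.
PROJECT_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "language": {"type": "string"},
    },
    "required": ["name"],
}

def parse_structured(text: str) -> dict:
    # Parse the model's JSON answer; raises ValueError on malformed JSON.
    return json.loads(text)

async def main():
    # Imported lazily so the schema and parser are testable without the SDK.
    from open_agent_sdk import create_agent, AgentOptions

    agent = create_agent(AgentOptions(json_schema=PROJECT_SCHEMA))
    result = await agent.prompt("Report this project's name and language.")
    data = parse_structured(result.text)  # assumption: JSON in result.text
    print(data["name"])
    await agent.close()

if __name__ == "__main__":
    asyncio.run(main())
```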
## Multi-provider support

The SDK uses a unified provider abstraction. Internally, all messages use the Anthropic format as the canonical representation; the provider layer handles conversion automatically:

```
Your Code → Agent → QueryEngine → Provider Layer → LLM API
                                       │
                        ┌──────────────┴───────────────┐
                        │   AnthropicProvider          │
                        │   Direct pass-through        │
                        ├──────────────────────────────┤
                        │   OpenAIProvider             │
                        │   Anthropic ↔ OpenAI format  │
                        └──────────────────────────────┘
```

**Message format conversion (OpenAI provider):**

| Anthropic (internal)        | OpenAI (wire)                             |
| --------------------------- | ----------------------------------------- |
| `system` prompt string      | `{"role": "system", "content": "..."}`    |
| `tool_use` content block    | `tool_calls[].function`                   |
| `tool_result` content block | `{"role": "tool", "tool_call_id": "..."}` |
| `stop_reason: "end_turn"`   | `finish_reason: "stop"`                   |
| `stop_reason: "tool_use"`   | `finish_reason: "tool_calls"`             |
| `stop_reason: "max_tokens"` | `finish_reason: "length"`                 |

**Auto-detection**: Models whose names start with `gpt-`, `deepseek-`, `qwen-`, `o1-`, `o3-`, or `o4-` automatically use `openai-completions`. Override with the `api_type` option or the `CODEANY_API_TYPE` env var.
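The auto-detection rule and the stop-reason mapping above can be sketched as a pair of small helpers (the SDK's internal implementation may differ):

```python
# Model prefixes that route to the OpenAI-compatible provider,
# per the auto-detection rule above.
OPENAI_PREFIXES = ("gpt-", "deepseek-", "qwen-", "o1-", "o3-", "o4-")

def detect_api_type(model: str) -> str:
    # str.startswith accepts a tuple of prefixes.
    if model.startswith(OPENAI_PREFIXES):
        return "openai-completions"
    return "anthropic-messages"

# Anthropic stop_reason -> OpenAI finish_reason, per the table above.
FINISH_REASON = {
    "end_turn": "stop",
    "tool_use": "tool_calls",
    "max_tokens": "length",
}
```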
## Built-in tools

| Tool                                       | Description                                  |
| ------------------------------------------ | -------------------------------------------- |
| **Bash**                                   | Execute shell commands                       |
| **Read**                                   | Read files with line numbers                 |
| **Write**                                  | Create / overwrite files                     |
| **Edit**                                   | Precise string replacement in files          |
| **Glob**                                   | Find files by pattern                        |
| **Grep**                                   | Search file contents with regex              |
| **WebFetch**                               | Fetch and parse web content                  |
| **WebSearch**                              | Search the web                               |
| **NotebookEdit**                           | Edit Jupyter notebook cells                  |
| **Agent**                                  | Spawn subagents for parallel work            |
| **Skill**                                  | Invoke registered skills by name             |
| **TaskCreate/List/Update/Get/Stop/Output** | Task management system                       |
| **TeamCreate/Delete**                      | Multi-agent team coordination                |
| **SendMessage**                            | Inter-agent messaging                        |
| **EnterWorktree/ExitWorktree**             | Git worktree isolation                       |
| **EnterPlanMode/ExitPlanMode**             | Structured planning workflow                 |
| **AskUserQuestion**                        | Ask the user for input                       |
| **ToolSearch**                             | Discover lazy-loaded tools                   |
| **ListMcpResources/ReadMcpResource**       | MCP resource access                          |
| **CronCreate/Delete/List**                 | Scheduled task management                    |
| **RemoteTrigger**                          | Remote agent triggers                        |
| **LSP**                                    | Language Server Protocol (code intelligence) |
| **Config**                                 | Dynamic configuration                        |
| **TodoWrite**                              | Session todo list                            |

## Bundled skills

| Skill        | Aliases                   | Description                                               |
| ------------ | ------------------------- | --------------------------------------------------------- |
| **commit**   | `ci`                      | Create a git commit with a well-crafted message           |
| **review**   | `review-pr`, `cr`         | Review code changes for correctness, security, and style  |
| **debug**    | `investigate`, `diagnose` | Systematic debugging with structured investigation        |
| **simplify** | —                         | Review changed code for reuse, quality, and efficiency    |
| **test**     | `run-tests`               | Run tests and analyze/fix failures                        |
## Architecture

```
┌──────────────────────────────────────────────────────┐
│                   Your Application                   │
│                                                      │
│   from open_agent_sdk import create_agent            │
└────────────────────────┬─────────────────────────────┘
                         │
              ┌──────────▼──────────┐
              │       Agent         │  Session state, tool pool,
              │ query() / prompt()  │  MCP connections, skills
              └──────────┬──────────┘
                         │
              ┌──────────▼──────────┐
              │    QueryEngine      │  Agentic loop:
              │  submit_message()   │  API call → tools → repeat
              └──────────┬──────────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
   ┌─────▼─────┐   ┌─────▼─────┐   ┌─────▼─────┐
   │ Providers │   │  35 Tools │   │    MCP    │
   │ Anthropic │   │ Bash,Read │   │  Servers  │
   │  OpenAI   │   │ Edit,...  │   │ stdio/SSE │
   │ DeepSeek  │   │ Skill,... │   │ HTTP/SDK  │
   └───────────┘   └───────────┘   └───────────┘
```

**Key internals:**

| Component             | Description                                                        |
| --------------------- | ------------------------------------------------------------------ |
| **Provider layer**    | Anthropic + OpenAI-compatible (DeepSeek, Qwen, vLLM, Ollama)       |
| **QueryEngine**       | Core agentic loop with auto-compact, retry, and tool orchestration |
| **Skill system**      | 5 bundled skills (commit, review, debug, simplify, test) + custom  |
| **Auto-compact**      | Summarizes the conversation when the context window fills up       |
| **Micro-compact**     | Truncates oversized tool results                                   |
| **Retry**             | Exponential backoff for rate limits and transient errors           |
| **Token estimation**  | Rough token counting for budget and compaction thresholds          |
| **File cache**        | LRU cache for file reads                                           |
| **Hook system**       | 20 lifecycle events (PreToolUse, PostToolUse, SessionStart, ...)   |
| **Session storage**   | Persist / resume / fork sessions on disk                           |
| **Context injection** | Git status + AGENT.md automatically injected into the system prompt |
## Examples

| #   | File                                  | Description                                      |
| --- | ------------------------------------- | ------------------------------------------------ |
| 01  | `examples/01_simple_query.py`         | Streaming query with event handling              |
| 02  | `examples/02_multi_tool.py`           | Multi-tool orchestration (Glob + Bash)           |
| 03  | `examples/03_multi_turn.py`           | Multi-turn session persistence                   |
| 04  | `examples/04_prompt_api.py`           | Blocking `prompt()` API                          |
| 05  | `examples/05_custom_system_prompt.py` | Custom system prompt                             |
| 06  | `examples/06_mcp_server.py`           | MCP server integration                           |
| 07  | `examples/07_custom_tools.py`         | Custom tools with `define_tool()`                |
| 08  | `examples/08_official_api_compat.py`  | `query()` API pattern                            |
| 09  | `examples/09_subagents.py`            | Subagent delegation                              |
| 10  | `examples/10_permissions.py`          | Read-only agent with tool restrictions           |
| 11  | `examples/11_custom_mcp_tools.py`     | `tool()` + `create_sdk_mcp_server()`             |
| 12  | `examples/12_skills.py`               | Skill system usage (register, invoke, list)      |
| 13  | `examples/13_hooks.py`                | Lifecycle hook configuration and execution       |
| 14  | `examples/14_openai_compat.py`        | OpenAI-compatible model support (DeepSeek, etc.) |
| web | `examples/web/`                       | Web chat UI for testing                          |

Run any example:

```bash
python examples/01_simple_query.py
```

Start the web UI:

```bash
python examples/web/server.py
# Open http://localhost:8083
```

## Project structure

```
open-agent-sdk-python/
├── src/open_agent_sdk/
│   ├── __init__.py         # Public exports
│   ├── agent.py            # Agent high-level API
│   ├── engine.py           # QueryEngine agentic loop
│   ├── types.py            # Core type definitions
│   ├── session.py          # Session persistence
│   ├── hooks.py            # Hook system (20 lifecycle events)
│   ├── tool_helper.py      # Pydantic-based tool creation
│   ├── sdk_mcp_server.py   # In-process MCP server factory
│   ├── providers/
│   │   ├── types.py        # LLMProvider interface
│   │   ├── anthropic_provider.py  # Anthropic implementation
│   │   ├── openai_provider.py     # OpenAI-compatible (no SDK dependency)
│   │   └── factory.py      # create_provider() factory
│   ├── skills/
│   │   ├── types.py        # SkillDefinition, SkillResult
│   │   ├── registry.py     # Skill registry (register, lookup, format)
│   │   └── bundled/        # 5 bundled skills (commit, review, debug, simplify, test)
│   ├── mcp/
│   │   └── client.py       # MCP client (stdio/SSE/HTTP)
│   ├── tools/              # 35 built-in tools
│   │   ├── bash.py, read.py, write.py, edit.py
│   │   ├── glob_tool.py, grep.py, web_fetch.py, web_search.py
│   │   ├── agent_tool.py, skill_tool.py, send_message.py
│   │   ├── task_tools.py, team_tools.py, worktree_tools.py
│   │   ├── plan_tools.py, cron_tools.py, lsp_tool.py
│   │   └── config_tool.py, todo_tool.py, ...
│   └── utils/
│       ├── messages.py     # Message creation & normalization
│       ├── tokens.py       # Token estimation & cost (Anthropic + OpenAI + DeepSeek + Qwen)
│       ├── compact.py      # Auto-compaction logic
│       ├── retry.py        # Exponential backoff retry
│       ├── context.py      # Git & project context injection
│       └── file_cache.py   # LRU file state cache
├── tests/                  # 265 tests
├── examples/               # 14 examples + web UI
└── pyproject.toml
```

## Links

- Website: [codeany.ai](https://codeany.ai)
- TypeScript SDK: [github.com/codeany-ai/open-agent-sdk-typescript](https://github.com/codeany-ai/open-agent-sdk-typescript)
- Go SDK: [github.com/codeany-ai/open-agent-sdk-go](https://github.com/codeany-ai/open-agent-sdk-go)
- Issues: [github.com/codeany-ai/open-agent-sdk-python/issues](https://github.com/codeany-ai/open-agent-sdk-python/issues)

## License

MIT