{"id":48467324,"url":"https://github.com/tiyagents/tiycore","last_synced_at":"2026-04-26T14:00:53.142Z","repository":{"id":345063859,"uuid":"1183555318","full_name":"TiyAgents/tiycore","owner":"TiyAgents","description":"Unified LLM API and stateful Agent runtime in Rust","archived":false,"fork":false,"pushed_at":"2026-04-23T05:02:05.000Z","size":677,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2026-04-23T07:02:25.263Z","etag":null,"topics":["agent-runtime","anthropic","gemini","ollama","openai","unified-llm-api"],"latest_commit_sha":null,"homepage":"https://tiyagents.github.io/tiycore/","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/TiyAgents.png","metadata":{"files":{"readme":"README-ZH.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-03-16T18:15:40.000Z","updated_at":"2026-04-23T05:02:05.000Z","dependencies_parsed_at":"2026-04-23T07:01:16.433Z","dependency_job_id":null,"html_url":"https://github.com/TiyAgents/tiycore","commit_stats":null,"previous_names":["tiyagents/tiy-core","tiyagents/tiycore"],"tags_count":33,"template":false,"template_full_name":null,"purl":"pkg:github/TiyAgents/tiycore","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TiyAgents%2Ftiycore","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TiyAgents%2Ftiycore/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/TiyAgents%2Ftiycore/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TiyAgents%2Ftiycore/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/TiyAgents","download_url":"https://codeload.github.com/TiyAgents/tiycore/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TiyAgents%2Ftiycore/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32299644,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-26T09:34:17.070Z","status":"ssl_error","status_checked_at":"2026-04-26T09:34:00.993Z","response_time":129,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-runtime","anthropic","gemini","ollama","openai","unified-llm-api"],"created_at":"2026-04-07T05:02:25.970Z","updated_at":"2026-04-26T14:00:53.125Z","avatar_url":"https://github.com/TiyAgents.png","language":"Rust","readme":"\u003cdiv align=\"center\"\u003e\n\n# tiycore\n\n**统一 LLM API 与有状态 Agent 运行时 (Rust)**\n\n[![License: 
MIT](https://img.shields.io/badge/License-MIT-blue.svg?style=flat-square)](https://opensource.org/licenses/MIT)\n[![Rust](https://img.shields.io/badge/Rust-2021_Edition-orange.svg?style=flat-square\u0026logo=rust)](https://www.rust-lang.org/)\n[![Crate](https://img.shields.io/badge/crate-tiycore-green.svg?style=flat-square)](https://github.com/TiyAgents/tiycore)\n\n[English](./README.md) | [中文](./README-ZH.md)\n\n\u003c/div\u003e\n\n---\n\ntiycore 是一个 Rust 库，提供统一的、与提供商无关的流式 LLM 补全接口，以及自主的 Agent 工具调用循环。只需编写一次应用逻辑，即可通过修改配置在 OpenAI、Anthropic、Google、Ollama 及 9+ 个其他提供商之间自由切换。\n\n## 核心特性\n\n- **一套接口，多个提供商** — 5 个协议级实现（OpenAI Completions、OpenAI Responses、Anthropic Messages、Google Generative AI / Vertex AI、Ollama）和 10 个代理提供商（OpenAI-Compatible、xAI、Groq、OpenRouter、DeepSeek、MiniMax、Kimi Coding、ZAI、Zenmux、OpenCode Go），统一在单一 `LLMProtocol` trait 之下。\n- **流式优先** — `EventStream\u003cT, R\u003e` 基于 `parking_lot::Mutex\u003cVecDeque\u003e` 实现 `futures::Stream`。每个提供商返回 `AssistantMessageEventStream`，包含细粒度的增量事件：文本、思维链、工具调用参数和完成事件。\n- **工具 / 函数调用** — 通过 JSON Schema 定义工具，使用 `jsonschema` crate 验证参数，支持在 Agent 循环中并行或串行执行工具。\n- **有状态 Agent 运行时** — `Agent` 管理完整对话循环：流式 LLM → 检测工具调用 → 执行工具 → 重新请求 → 循环。支持引导中断（steering）、后续消息队列、事件订阅（观察者模式）、中止操作，以及可配置的最大轮次（默认 25）。\n- **扩展思维链** — 提供商特定的思维/推理支持，统一的 `ThinkingLevel` 枚举（Off → XHigh，支持 OpenAI GPT-5 和 Anthropic Opus 4.7+）以及 `ThinkingDisplay` 枚举（`Summarized` / `Omitted`）用于控制是否在响应中包含思维内容。跨提供商的思维块转换在消息变换中自动处理。\n- **默认线程安全** — 所有可变状态使用 `parking_lot` 锁和 `AtomicBool`，无中毒语义的并发设计。\n\n## 架构\n\n```mermaid\ngraph TD\n    A[你的应用] --\u003e B[Agent]\n    A --\u003e C[LLMProtocol trait]\n    B --\u003e C\n    C --\u003e D[协议提供商]\n    C --\u003e E[代理提供商]\n    D --\u003e D1[OpenAI Completions]\n    D --\u003e D2[OpenAI Responses]\n    D --\u003e D3[Anthropic Messages]\n    D --\u003e D4[Google GenAI / Vertex]\n    D --\u003e D5[Ollama]\n    E --\u003e E1[OpenAI-Compatible → OpenAI Completions]\n    E --\u003e E2[xAI → OpenAI Completions]\n    E --\u003e E3[Groq → OpenAI 
Completions]\n    E --\u003e E4[OpenRouter → OpenAI Completions]\n    E --\u003e E5[ZAI → OpenAI Completions]\n    E --\u003e E6[DeepSeek → OpenAI Completions]\n    E --\u003e E7[MiniMax → Anthropic]\n    E --\u003e E8[Kimi Coding → Anthropic]\n    E --\u003e E9[Zenmux → 自适应路由]\n    E --\u003e E10[OpenCode Go → 自适应路由]\n```\n\n### 核心层\n\n| 层 | 路径 | 职责 |\n|---|---|---|\n| **Types** | `src/types/` | 与提供商无关的数据模型：`Message`、`ContentBlock`、`Model`、`Tool`、`Context`、`SecurityConfig` |\n| **Protocol** | `src/protocol/` | 线路格式实现（[完整文档](./src/protocol/README.md)） |\n| **Provider** | `src/provider/` | 服务商门面（[完整文档](./src/provider/README.md)） |\n| **Stream** | `src/stream/` | 通用 `EventStream\u003cT, R\u003e`，实现 `futures::Stream` |\n| **Agent** | `src/agent/` | 有状态对话管理器，含工具执行循环（[完整文档](./src/agent/README.md)） |\n| **Transform** | `src/transform/` | 跨提供商消息变换（思维块转换、工具调用 ID 规范化、孤儿工具调用处理） |\n| **Thinking** | `src/thinking/` | `ThinkingLevel` 枚举、`ThinkingDisplay` 枚举及提供商特定的思维选项 |\n| **Validation** | `src/validation/` | 工具参数 JSON Schema 验证 |\n| **Models** | `src/models/` | `ModelRegistry`，内置预定义模型（GPT-4o、Claude Sonnet 4、Gemini 2.5 Flash 等） |\n| **Catalog** | `src/catalog/` | 原生模型列表抓取、快照刷新，以及面向展示的元数据补全（[完整文档](./src/catalog/README.md)） |\n\n## 快速开始\n\n在 `Cargo.toml` 中添加依赖：\n\n```toml\n[dependencies]\ntiycore = \"0.1.0\"\ntokio = { version = \"1\", features = [\"full\"] }\nfutures = \"0.3\"\n```\n\n在正式发布前，本地联调仍可使用：\n\n```toml\n[dependencies]\ntiycore = { path = \"../tiycore\" }\n```\n\n### 流式补全\n\n```rust\nuse futures::StreamExt;\nuse tiycore::{\n    provider::get_provider,\n    types::*,\n};\n\n#[tokio::main]\nasync fn main() {\n    // 构建模型\n    let model = Model::builder()\n        .id(\"gpt-4o-mini\")\n        .name(\"GPT-4o Mini\")\n        .provider(Provider::OpenAI)\n        .context_window(128000)\n        .max_tokens(16384)\n        .build()\n        .unwrap();\n\n    // 创建包含消息的上下文\n    let context = Context {\n        system_prompt: Some(\"You are a helpful 
assistant.\".to_string()),\n        messages: vec![Message::User(UserMessage::text(\"法国的首都是什么？\"))],\n        tools: None,\n    };\n\n    // 从 model 解析提供商并流式获取响应\n    // （提供商在首次访问时自动注册 — 无需手动设置）\n    let provider = get_provider(\u0026model.provider).unwrap();\n    let options = StreamOptions {\n        api_key: Some(std::env::var(\"OPENAI_API_KEY\").unwrap()),\n        ..Default::default()\n    };\n    let mut stream = provider.stream(\u0026model, \u0026context, options);\n\n    while let Some(event) = stream.next().await {\n        match event {\n            AssistantMessageEvent::TextDelta { delta, .. } =\u003e print!(\"{delta}\"),\n            AssistantMessageEvent::Done { message, .. } =\u003e {\n                println!(\"\\n--- 输入 {} tokens，输出 {} tokens ---\",\n                    message.usage.input, message.usage.output);\n            }\n            AssistantMessageEvent::Error { error, .. } =\u003e {\n                eprintln!(\"错误: {:?}\", error.error_message);\n            }\n            _ =\u003e {}\n        }\n    }\n}\n```\n\n### Agent 工具调用\n\n```rust\nuse tiycore::{\n    agent::{Agent, AgentTool, AgentToolResult},\n    types::*,\n};\n\n#[tokio::main]\nasync fn main() {\n    let agent = Agent::with_model(\n        Model::builder()\n            .id(\"gpt-4o-mini\").name(\"GPT-4o Mini\")\n            .provider(Provider::OpenAI)\n            .context_window(128000).max_tokens(16384)\n            .build().unwrap(),\n    );\n\n    agent.set_api_key(std::env::var(\"OPENAI_API_KEY\").unwrap());\n    agent.set_system_prompt(\"你是一个有工具访问权限的助手。\");\n    agent.set_tools(vec![AgentTool::new(\n        \"get_weather\", \"获取天气\", \"获取指定城市的当前天气\",\n        serde_json::json!({\n            \"type\": \"object\",\n            \"properties\": { \"city\": { \"type\": \"string\", \"description\": \"城市名称\" } },\n            \"required\": [\"city\"]\n        }),\n    )]);\n    agent.set_tool_executor_simple(|name, _id, args| {\n        let name = name.to_string();\n        
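// 注：此处先把借用的 name/args 克隆为拥有值，async move 块才能带走它们（返回的 future 通常需要 'static）。\n        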
let args = args.clone();\n        async move {\n            match name.as_str() {\n                \"get_weather\" =\u003e {\n                    let city = args[\"city\"].as_str().unwrap_or(\"未知\");\n                    AgentToolResult::text(format!(\"{city} 的天气：22°C，晴\"))\n                }\n                _ =\u003e AgentToolResult::error(format!(\"未知工具: {name}\")),\n            }\n        }\n    });\n\n    // Agent 自动循环：LLM → 工具调用 → 执行 → 重新请求 → 直至完成\n    let messages = agent.prompt(\"东京的天气怎么样？\").await.unwrap();\n    println!(\"Agent 产生了 {} 条消息\", messages.len());\n}\n```\n\nAgent 还支持钩子（beforeToolCall / afterToolCall / onPayload）、上下文管道（transformContext / convertToLlm）、事件订阅、引导 / 后续消息队列、思维链预算、自定义 HTTP 头（`AgentConfig` 的 `custom_headers` 字段）、自定义消息等更多能力。详见 **[Agent 模块完整文档](./src/agent/README.md)**。\n\n## 支持的提供商\n\n| 提供商 | 类型 | 环境变量 |\n|---|---|---|\n| OpenAI | 直接 | `OPENAI_API_KEY` |\n| Anthropic | 直接 | `ANTHROPIC_API_KEY` |\n| Google | 直接 | `GOOGLE_API_KEY` |\n| Ollama | 直接 | — |\n| OpenAI-Compatible | 委托 → OpenAI Completions | `OPENAI_API_KEY` |\n| xAI | 委托 → OpenAI Completions | `XAI_API_KEY` |\n| Groq | 委托 → OpenAI Completions | `GROQ_API_KEY` |\n| OpenRouter | 委托 → OpenAI Completions | `OPENROUTER_API_KEY` |\n| ZAI | 委托 → OpenAI Completions | `ZAI_API_KEY` |\n| DeepSeek | 委托 → OpenAI Completions | `DEEPSEEK_API_KEY` |\n| MiniMax | 委托 → Anthropic | `MINIMAX_API_KEY` |\n| Kimi Coding | 委托 → Anthropic | `KIMI_API_KEY` |\n| Zenmux | 自适应多协议 | `ZENMUX_API_KEY` |\n| OpenCode Go | 自适应多协议 | `OPENCODE_GO_API_KEY` |\n\n关于提供商配置详情、兼容性标志、Zenmux 自适应路由及如何添加新提供商，请参见 **[Provider 完整文档](./src/provider/README.md)**。\n\n关于线路格式协议内部原理（SSE 解析、请求构建、委托宏），请参见 **[Protocol 完整文档](./src/protocol/README.md)**。\n\n## 发布\n\n如果希望其他项目通过版本号依赖 `tiycore`，而不是填写 Git 仓库地址，需要先将该 crate 发布到 crates.io：\n\n```bash\ncargo login\ncargo package\ncargo publish\n```\n\n发布完成后，使用方即可直接这样写：\n\n```toml\n[dependencies]\ntiycore = \"0.1.0\"\n```\n\n## API Key 解析优先级\n\nKey 按以下优先级解析：\n\n1. 
`StreamOptions.api_key`（逐请求覆盖）\n2. 提供商的 `default_api_key()` 方法\n3. 环境变量（如 `OPENAI_API_KEY`、`ANTHROPIC_API_KEY`）\n\nBase URL 遵循相同模式：`StreamOptions.base_url` \u003e `model.base_url` \u003e 提供商的 `DEFAULT_BASE_URL`。\n\n## 安全配置\n\ntiycore 内置了集中式 `SecurityConfig` 结构体，统一管理所有安全限制和策略。每个字段都有安全的默认值 — 你只需覆盖想要修改的部分。\n\n### 启用安全配置\n\n**代码中使用（编程方式）：**\n\n```rust\nuse tiycore::types::{SecurityConfig, HttpLimits, AgentLimits, StreamOptions};\n\n// 方式一：使用默认值（零配置）\nlet options = StreamOptions::default();\n// options.security 为 None → 自动使用所有默认值\n\n// 方式二：覆盖特定值\nlet security = SecurityConfig::default()\n    .with_http(HttpLimits {\n        connect_timeout_secs: 10,\n        request_timeout_secs: 600,\n        ..Default::default()\n    })\n    .with_agent(AgentLimits {\n        max_messages: 500,\n        max_parallel_tool_calls: 8,\n        ..Default::default()\n    });\n\nlet options = StreamOptions {\n    api_key: Some(\"sk-...\".to_string()),\n    security: Some(security),\n    ..Default::default()\n};\n```\n\n**从 JSON 配置文件加载：**\n\n```rust\nuse tiycore::types::SecurityConfig;\n\n// 从文件加载 — 仅覆盖指定字段，其余使用默认值\nlet json = std::fs::read_to_string(\"security.json\").unwrap();\nlet security: SecurityConfig = serde_json::from_str(\u0026json).unwrap();\n```\n\n**从 TOML 配置文件加载（需要 `toml` crate）：**\n\n```rust\nlet toml_str = std::fs::read_to_string(\"security.toml\").unwrap();\nlet security: SecurityConfig = toml::from_str(\u0026toml_str).unwrap();\n```\n\n### JSON 配置参考\n\n完整的 `security.json`，包含所有字段及其默认值：\n\n```jsonc\n{\n  // HTTP 客户端和 SSE 流解析限制（每次 Provider 请求时生效）\n  \"http\": {\n    \"connect_timeout_secs\": 30,           // TCP 连接超时（秒）\n    \"request_timeout_secs\": 1800,         // 请求总超时，含流式传输（30 分钟）\n    \"max_sse_line_buffer_bytes\": 2097152, // SSE 行缓冲区上限，防止 OOM（2 MiB）\n    \"max_error_body_bytes\": 65536,        // 错误响应体最大读取字节数（64 KiB）\n    \"max_error_message_chars\": 4096       // 存入事件的错误消息最大字符数\n  },\n\n  // Agent 运行时限制\n  \"agent\": {\n    \"max_messages\": 1000,                 // 
对话历史上限（0 = 无限，超出后 FIFO 丢弃最旧消息）\n    \"max_parallel_tool_calls\": 16,        // 并行工具执行上限\n    \"tool_execution_timeout_secs\": 120,   // 单次工具执行超时（2 分钟）\n    \"validate_tool_calls\": true,          // 执行前是否校验工具参数的 JSON Schema\n    \"max_subscriber_slots\": 128           // 最大事件订阅者槽位数\n  },\n\n  // EventStream 基础设施限制\n  \"stream\": {\n    \"max_event_queue_size\": 10000,        // 事件队列缓冲上限（0 = 无限）\n    \"result_timeout_secs\": 600            // EventStream::result() 阻塞超时（10 分钟）\n  },\n\n  // 请求头安全策略 — 防止自定义 Header 覆盖认证头\n  \"headers\": {\n    \"protected_headers\": [\n      \"authorization\",\n      \"x-api-key\",\n      \"x-goog-api-key\",\n      \"anthropic-version\",\n      \"anthropic-beta\"\n    ]\n  },\n\n  // Base URL 验证策略（SSRF 防护）\n  \"url\": {\n    \"require_https\": true,                // 强制 HTTPS（localhost/127.0.0.1 豁免）\n    \"block_private_ips\": false,           // 是否阻止私有/回环 IP（默认关闭，方便本地开发）\n    \"allowed_schemes\": [\"https\", \"http\"], // 允许的 URL scheme\n    \"https_exempt_hosts\": []              // 豁免 HTTPS 要求的主机名（如 [\".oa.com\", \"llm.internal\"]）\n  }\n}\n```\n\n\u003e **部分覆盖：** 你只需包含想要修改的字段，省略的字段和整个 section 会自动使用默认值。例如 `{}` 表示全部使用默认值，`{\"http\": {\"connect_timeout_secs\": 10}}` 只修改连接超时。\n\n### TOML 配置参考\n\n同样的配置，TOML 格式：\n\n```toml\n[http]\nconnect_timeout_secs = 30\nrequest_timeout_secs = 1800\nmax_sse_line_buffer_bytes = 2097152\nmax_error_body_bytes = 65536\nmax_error_message_chars = 4096\n\n[agent]\nmax_messages = 1000\nmax_parallel_tool_calls = 16\ntool_execution_timeout_secs = 120\nvalidate_tool_calls = true\nmax_subscriber_slots = 128\n\n[stream]\nmax_event_queue_size = 10000\nresult_timeout_secs = 600\n\n[headers]\nprotected_headers = [\n  \"authorization\",\n  \"x-api-key\",\n  \"x-goog-api-key\",\n  \"anthropic-version\",\n  \"anthropic-beta\",\n]\n\n[url]\nrequire_https = true\nblock_private_ips = false\nallowed_schemes = [\"https\", \"http\"]\nhttps_exempt_hosts = []\n```\n\n### 默认值速查表\n\n| 分组 | 字段 | 默认值 | 说明 |\n|---|---|---|---|\n| 
**http** | `connect_timeout_secs` | `30` | TCP 连接超时 |\n| | `request_timeout_secs` | `1800` | 请求总超时（30 分钟） |\n| | `max_sse_line_buffer_bytes` | `2097152` | SSE 缓冲区上限（2 MiB） |\n| | `max_error_body_bytes` | `65536` | 错误响应体读取上限（64 KiB） |\n| | `max_error_message_chars` | `4096` | 错误消息截断长度 |\n| **agent** | `max_messages` | `1000` | 历史消息上限（0 = 无限） |\n| | `max_parallel_tool_calls` | `16` | 并行工具执行上限 |\n| | `tool_execution_timeout_secs` | `120` | 单工具超时（2 分钟） |\n| | `validate_tool_calls` | `true` | JSON Schema 校验 |\n| | `max_subscriber_slots` | `128` | 订阅者槽位 |\n| **stream** | `max_event_queue_size` | `10000` | 事件队列上限（0 = 无限） |\n| | `result_timeout_secs` | `600` | Result 阻塞超时（10 分钟） |\n| **headers** | `protected_headers` | `[\"authorization\", ...]` | 不可被覆盖 |\n| **url** | `require_https` | `true` | 强制 HTTPS（localhost 豁免） |\n| | `block_private_ips` | `false` | 私有 IP 阻断 |\n| | `allowed_schemes` | `[\"https\", \"http\"]` | 允许的 URL scheme |\n| | `https_exempt_hosts` | `[]` | 豁免 HTTPS 的主机名（支持前缀点号后缀匹配） |\n\n## 构建与测试\n\n```bash\ncargo build                          # 构建库\ncargo test                           # 运行所有测试\ncargo test test_agent_state_new      # 按名称运行单个测试\ncargo test -- --nocapture            # 显示测试输出\ncargo fmt                            # 格式化代码\ncargo clippy                         # 代码检查\n\n# 运行示例（需要 API Key）\ncargo run --example basic_usage\ncargo run --example agent_example\n```\n\n## 项目结构\n\n```\nsrc/\n├── lib.rs              # Crate 根，公共 re-exports\n├── types/              # 与提供商无关的数据模型\n│   ├── model.rs        # Model, Provider, Api, Cost, OpenAICompletionsCompat\n│   ├── message.rs      # Message (User/Assistant/ToolResult), StopReason\n│   ├── content.rs      # ContentBlock (Text/Thinking/ToolCall/Image)\n│   ├── context.rs      # Context, Tool, StreamOptions\n│   ├── limits.rs       # SecurityConfig, HttpLimits, AgentLimits, StreamLimits, UrlPolicy, HeaderPolicy\n│   ├── events.rs       # AssistantMessageEvent（流式事件）\n│   └── usage.rs        # Token 用量追踪\n├── 
protocol/           # 线路格式协议实现（README.md）\n│   ├── traits.rs       # LLMProtocol trait\n│   ├── registry.rs     # 全局 ProtocolRegistry\n│   ├── common.rs       # 共享基础设施（URL 解析、payload hook、错误处理）\n│   ├── delegation.rs   # 代理提供商生成宏\n│   ├── openai_completions.rs  # OpenAI Chat Completions 协议\n│   ├── openai_responses.rs    # OpenAI Responses API 协议\n│   ├── anthropic.rs    # Anthropic Messages 协议\n│   └── google.rs       # Google GenAI + Vertex AI（双模式）\n├── provider/           # 服务商门面（README.md）\n│   ├── openai.rs       # OpenAI → protocol::openai_responses\n│   ├── anthropic.rs    # Anthropic → protocol::anthropic\n│   ├── google.rs       # Google → protocol::google\n│   ├── ollama.rs       # Ollama → protocol::openai_completions\n│   ├── xai.rs          # 代理 → OpenAI Completions\n│   ├── groq.rs         # 代理 → OpenAI Completions\n│   ├── openrouter.rs   # 代理 → OpenAI Completions\n│   ├── zai.rs          # 代理 → OpenAI Completions\n│   ├── minimax.rs      # 代理 → Anthropic\n│   ├── kimi_coding.rs  # 代理 → Anthropic\n│   ├── zenmux.rs       # 自适应三路路由\n│   └── opencode_go.rs  # 自适应多协议路由\n├── catalog/\n│   ├── README.md       # Catalog 抓取/补全/快照文档\n│   └── mod.rs          # 原生模型列表 + 快照刷新 + metadata stores\n├── stream/\n│   └── event_stream.rs # 通用 EventStream\u003cT, R\u003e + AssistantMessageEventStream\n├── agent/\n│   ├── README.md       # Agent 模块完整文档\n│   ├── agent.rs        # Agent 循环：流式 → 工具 → 重新请求\n│   ├── state.rs        # 线程安全的 AgentState\n│   └── types.rs        # AgentConfig, AgentEvent, AgentTool, AgentHooks, ToolExecutor, ToolExecutionMode\n├── transform/\n│   ├── messages.rs     # 思维块转换、孤儿工具调用处理\n│   └── tool_calls.rs   # 工具调用 ID 规范化\n├── thinking/\n│   └── config.rs       # ThinkingLevel, 提供商特定选项\n├── validation/\n│   └── tool_validation.rs # 工具参数 JSON Schema 验证\n└── models/\n    ├── mod.rs           # ModelRegistry + 全局预定义模型\n    └── predefined.rs\n```\n\n## 许可证\n\n[MIT](https://opensource.org/licenses/MIT)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftiyagents%2Ftiycore","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftiyagents%2Ftiycore","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftiyagents%2Ftiycore/lists"}