{"id":28505822,"url":"https://github.com/opencsgs/coagent","last_synced_at":"2026-01-16T09:20:45.968Z","repository":{"id":269651393,"uuid":"908046666","full_name":"OpenCSGs/coagent","owner":"OpenCSGs","description":"An open-source framework for building monolithic or distributed agentic systems, ranging from simple LLM calls to compositional workflows and autonomous agents.","archived":false,"fork":false,"pushed_at":"2026-01-14T03:55:00.000Z","size":1723,"stargazers_count":24,"open_issues_count":0,"forks_count":4,"subscribers_count":3,"default_branch":"main","last_synced_at":"2026-01-14T07:30:36.918Z","etag":null,"topics":["agent","agent-framework","agentic-systems","ai","ai-agents"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenCSGs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-12-25T01:18:56.000Z","updated_at":"2026-01-14T03:53:20.000Z","dependencies_parsed_at":"2024-12-25T07:28:41.551Z","dependency_job_id":"a849c83f-3e90-4d2e-8d24-5fcbc8e18065","html_url":"https://github.com/OpenCSGs/coagent","commit_stats":null,"previous_names":["opencsgs/coagent"],"tags_count":12,"template":false,"template_full_name":null,"purl":"pkg:github/OpenCSGs/coagent","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2Fcoagent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2Fcoagent/tags","releases_url":"https:
//repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2Fcoagent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2Fcoagent/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenCSGs","download_url":"https://codeload.github.com/OpenCSGs/coagent/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2Fcoagent/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28478049,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-16T06:30:42.265Z","status":"ssl_error","status_checked_at":"2026-01-16T06:30:16.248Z","response_time":107,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","agent-framework","agentic-systems","ai","ai-agents"],"created_at":"2025-06-08T19:30:59.406Z","updated_at":"2026-01-16T09:20:45.956Z","avatar_url":"https://github.com/OpenCSGs.png","language":"Python","readme":"# Coagent\n[![CI](https://github.com/OpenCSGs/coagent/actions/workflows/ci.yml/badge.svg)](https://github.com/OpenCSGs/coagent/actions?query=event%3Apush+branch%3Amain+workflow%3ACI)\n\nAn open-source framework for building monolithic or distributed agentic systems, ranging from simple LLM calls to compositional workflows and autonomous agents.\n\n\n\u003cp align=\"center\"\u003e\n\u003cimg src=\"assets/coagent-overview.png\" 
height=\"600\"\u003e\n\u003c/p\u003e\n\n\n## Latest Updates\n\n- 🚀 **2025-10-24**: Added support for [ReAct](https://arxiv.org/abs/2210.03629) agents, check out [Autonomous Agents](#autonomous-agents).\n- **2025-07-18**: Added support for [A2A](https://a2a-protocol.org/), check out the [example](examples/a2a).\n- **2025-02-08**: Added support for [DeepSeek-R1](https://api-docs.deepseek.com/news/news250120), check out the [example](examples/deepseek-r1).\n- **2025-01-28**: Added support for [Structured Outputs][2].\n- **2025-01-22**: Added support for [Model Context Protocol][3].\n- **2025-01-17**: Added integration with [LiteLLM](https://github.com/BerriAI/litellm).\n\n\n## Features\n\n- [x] Event-driven \u0026 Scalable on-demand\n- [x] Monolithic or Distributed\n    - [x] Local Runtime (In-process Runtime)\n    - [x] HTTP Runtime (HTTP-based Distributed Runtime)\n    - [x] NATS Runtime (NATS-based Distributed Runtime)\n        - [ ] Using NATS [JetStream][1]\n- [x] Single-agent\n    - [x] [Function calling](https://platform.openai.com/docs/guides/function-calling)\n    - [x] [Structured Outputs][2] ([example](examples/structured-outputs))\n    - [x] ReAct agents ([example](examples/patterns/autonomous_agent.py))\n- [x] Multi-agent orchestration\n    - [x] Agent Discovery\n    - [x] Static orchestration\n        - [x] Sequential\n        - [x] Parallel\n    - [x] Dynamic orchestration\n        - [x] Triage\n        - [x] Handoffs (based on async Swarm)\n- [x] Support any LLM\n- [x] Support [Model Context Protocol][3] ([example](examples/mcp))\n- [x] [CoS](coagent/cos) (Multi-language support)\n    - [x] [Python](examples/cos/cos.py)\n    - [x] [Node.js](examples/cos/cos.js)\n    - [x] [Go](examples/cos/goagent)\n    - [ ] Zig\n    - [ ] Rust\n\n\n## Three-tier Architecture\n\n\u003cp align=\"center\"\u003e\n\u003cimg src=\"assets/coagent-three-tier-architecture.png\" height=\"500\"\u003e\n\u003c/p\u003e\n\n\n## Installation\n\n```bash\npip install 
coagent-python\n```\n\nTo install with A2A support:\n\n```bash\npip install \"coagent-python[a2a]\"\n```\n\n\n## Quick Start\n\n\n### Monolithic\n\nImplement the agent:\n\n```python\n# translator.py\n\nimport asyncio\nimport os\n\nfrom coagent.agents import ChatAgent, ChatMessage, Model\nfrom coagent.core import AgentSpec, new, init_logger\nfrom coagent.runtimes import LocalRuntime\n\ntranslator = AgentSpec(\n    \"translator\",\n    new(\n        ChatAgent,\n        system=\"You are a professional translator that can translate Chinese to English.\",\n        model=Model(id=\"openai/gpt-4o\", api_key=os.getenv(\"OPENAI_API_KEY\")),\n    ),\n)\n\n\nasync def main():\n    async with LocalRuntime() as runtime:\n        await runtime.register(translator)\n\n        result = await translator.run(\n            ChatMessage(role=\"user\", content=\"你好，世界\").encode(),\n            stream=True,\n        )\n        async for chunk in result:\n            msg = ChatMessage.decode(chunk)\n            print(msg.content, end=\"\", flush=True)\n\n\nif __name__ == \"__main__\":\n    init_logger()\n    asyncio.run(main())\n```\n\nRun the agent:\n\n```bash\nexport OPENAI_API_KEY=\"your-openai-key\"\npython translator.py\n```\n\n\n### Distributed\n\nStart a NATS server ([docs][4]):\n\n```bash\ndocker run -p 4222:4222 --name nats-server -ti nats:latest\n```\n\nImplement the agent:\n\n```python\n# translator.py\n\nimport asyncio\nimport os\n\nfrom coagent.agents import ChatAgent, Model\nfrom coagent.core import AgentSpec, new, init_logger\nfrom coagent.runtimes import NATSRuntime\n\ntranslator = AgentSpec(\n    \"translator\",\n    new(\n        ChatAgent,\n        system=\"You are a professional translator that can translate Chinese to English.\",\n        model=Model(id=\"openai/gpt-4o\", api_key=os.getenv(\"OPENAI_API_KEY\")),\n    ),\n)\n\n\nasync def main():\n    async with NATSRuntime.from_servers(\"nats://localhost:4222\") as runtime:\n        await runtime.register(translator)\n 
       await runtime.wait_for_shutdown()\n\n\nif __name__ == \"__main__\":\n    init_logger()\n    asyncio.run(main())\n```\n\nRun the agent as a daemon:\n\n```bash\nexport OPENAI_API_KEY=\"your-openai-key\"\npython translator.py\n```\n\nCommunicate with the agent using the `coagent` CLI:\n\n```bash\ncoagent translator -H type:ChatMessage --chat -d '{\"role\": \"user\", \"content\": \"你好，世界\"}'\n```\n\n\n## Patterns\n\n(The following patterns are mainly inspired by [Anthropic's Building effective agents][5] and [OpenAI's Handoffs][6].)\n\n### Basic: Augmented LLM\n\n**Augmented LLM** is an LLM enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively use these capabilities—generating their own search queries, selecting appropriate tools, and determining what information to retain.\n\n```mermaid\nflowchart LR\n    In([In]) --\u003e ALLM[LLM] --\u003e Out([Out])\n\n    subgraph ALLM[LLM]\n        LLM[LLM]\n        Retrieval[Retrieval]\n        Tools[Tools]\n        Memory[Memory]\n    end\n\n    LLM \u003c-.-\u003e Retrieval\n    LLM \u003c-.-\u003e Tools\n    LLM \u003c-.-\u003e Memory\n\n    style In fill:#ffb3ba,stroke-width:0px\n    style Out fill:#ffb3ba,stroke-width:0px\n    style LLM fill:#baffc9,stroke-width:0px\n    style Retrieval fill:#ccccff,stroke-width:0px\n    style Tools fill:#ccccff,stroke-width:0px\n    style Memory fill:#ccccff,stroke-width:0px\n    style ALLM fill:#fff,stroke:#000,stroke-width:1px,stroke-dasharray: 2 2\n```\n\n**Example** (see [examples/patterns/augmented_llm.py](examples/patterns/augmented_llm.py) for a runnable example):\n\n```python\nfrom coagent.agents import ChatAgent, Model, tool\nfrom coagent.core import AgentSpec, new\n\n\nclass Assistant(ChatAgent):\n    system = \"\"\"You are an agent who can use tools.\"\"\"\n    model = Model(...)\n\n    @tool\n    async def query_weather(self, city: str) -\u003e str:\n        \"\"\"Query the weather in the given city.\"\"\"\n        return 
f\"The weather in {city} is sunny.\"\n\n\nassistant = AgentSpec(\"assistant\", new(Assistant))\n```\n\n### Workflow: Chaining\n\n**Chaining** decomposes a task into a sequence of steps, where each agent processes the output of the previous one.\n\n```mermaid\nflowchart LR\n    In([In]) --\u003e Agent1[Agent 1]\n    Agent1 --\u003e |Out1| Agent2[Agent 2]\n    Agent2 --\u003e |Out2| Agent3[Agent 3]\n    Agent3 --\u003e Out([Out])\n\n    style In fill:#ffb3ba,stroke-width:0px\n    style Out fill:#ffb3ba,stroke-width:0px\n    style Agent1 fill:#baffc9,stroke-width:0px\n    style Agent2 fill:#baffc9,stroke-width:0px\n    style Agent3 fill:#baffc9,stroke-width:0px\n```\n\n**When to use this workflow:** This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade latency for higher accuracy, by giving each agent an easier subtask.\n\n**Example** (see [examples/patterns/chaining.py](examples/patterns/chaining.py) for a runnable example):\n\n```python\nfrom coagent.agents import ChatAgent, Sequential, Model\nfrom coagent.core import AgentSpec, new\n\nmodel = Model(...)\n\nextractor = AgentSpec(\n    \"extractor\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nExtract only the numerical values and their associated metrics from the text.\nFormat each as 'value: metric' on a new line.\nExample format:\n92: customer satisfaction\n45%: revenue growth\\\n\"\"\",\n        model=model,\n    ),\n)\n\nconverter = AgentSpec(\n    \"converter\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nConvert all numerical values to percentages where possible.\nIf not a percentage or points, convert to decimal (e.g., 92 points -\u003e 92%).\nKeep one number per line.\nExample format:\n92%: customer satisfaction\n45%: revenue growth\\\n\"\"\",\n        model=model,\n    ),\n)\n\nsorter = AgentSpec(\n    \"sorter\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nSort all lines in descending 
order by numerical value.\nKeep the format 'value: metric' on each line.\nExample:\n92%: customer satisfaction\n87%: employee satisfaction\\\n\"\"\",\n        model=model,\n    ),\n)\n\nformatter = AgentSpec(\n    \"formatter\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nFormat the sorted data as a markdown table with columns:\n| Metric | Value |\n|:--|--:|\n| Customer Satisfaction | 92% |\\\n\"\"\",\n        model=model,\n    ),\n)\n\nchain = AgentSpec(\n    \"chain\", new(Sequential, \"extractor\", \"converter\", \"sorter\", \"formatter\")\n)\n```\n\n### Workflow: Parallelization\n\n**Parallelization** distributes independent subtasks across multiple agents for concurrent processing.\n\n```mermaid\nflowchart LR\n    In([In]) --\u003e Agent1[Agent 1]\n    In --\u003e Agent2[Agent 2]\n    In --\u003e Agent3[Agent 3]\n\n    Agent1 --\u003e Aggregator[Aggregator]\n    Agent2 --\u003e Aggregator\n    Agent3 --\u003e Aggregator\n\n    Aggregator --\u003e Out([Out])\n\n    style In fill:#ffb3ba,stroke-width:0px\n    style Out fill:#ffb3ba,stroke-width:0px\n    style Agent1 fill:#baffc9,stroke-width:0px\n    style Agent2 fill:#baffc9,stroke-width:0px\n    style Agent3 fill:#baffc9,stroke-width:0px\n    style Aggregator fill:#ccccff,stroke-width:0px\n```\n\n**When to use this workflow:** Parallelization is effective when the divided subtasks can be parallelized for speed, or when multiple perspectives or attempts are needed for higher confidence results.\n\n**Example** (see [examples/patterns/parallelization.py](examples/patterns/parallelization.py) for a runnable example):\n\n```python\nfrom coagent.agents import Aggregator, ChatAgent, Model, Parallel\nfrom coagent.core import AgentSpec, new\n\nmodel = Model(...)\n\ncustomer = AgentSpec(\n    \"customer\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nCustomers:\n- Price sensitive\n- Want better tech\n- Environmental concerns\\\n\"\"\",\n        model=model,\n    ),\n)\n\nemployee = AgentSpec(\n    
\"employee\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nEmployees:\n- Job security worries\n- Need new skills\n- Want clear direction\\\n\"\"\",\n        model=model,\n    ),\n)\n\ninvestor = AgentSpec(\n    \"investor\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nInvestors:\n- Expect growth\n- Want cost control\n- Risk concerns\\\n\"\"\",\n        model=model,\n    ),\n)\n\nsupplier = AgentSpec(\n    \"supplier\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nSuppliers:\n- Capacity constraints\n- Price pressures\n- Tech transitions\\\n\"\"\",\n        model=model,\n    ),\n)\n\naggregator = AgentSpec(\"aggregator\", new(Aggregator))\n\nparallel = AgentSpec(\n    \"parallel\",\n    new(\n        Parallel,\n        \"customer\",\n        \"employee\",\n        \"investor\",\n        \"supplier\",\n        aggregator=\"aggregator\",\n    ),\n)\n```\n\n\n### Workflow: Triaging \u0026 Routing\n\n**Triaging** classifies an input and directs it to a specialized follow-up agent. 
This workflow allows for separation of concerns, and building more specialized agents.\n\n```mermaid\nflowchart LR\n    In([In]) --\u003e Triage[Triage]\n    Triage --\u003e Agent1[Agent 1]\n    Triage -.-\u003e Agent2[Agent 2]\n    Triage -.-\u003e Agent3[Agent 3]\n    Agent1 --\u003e Out([Out])\n    Agent2 -.-\u003e Out\n    Agent3 -.-\u003e Out\n\n    style In fill:#ffb3ba,stroke-width:0px\n    style Out fill:#ffb3ba,stroke-width:0px\n    style Triage fill:#baffc9,stroke-width:0px\n    style Agent1 fill:#baffc9,stroke-width:0px\n    style Agent2 fill:#baffc9,stroke-width:0px\n    style Agent3 fill:#baffc9,stroke-width:0px\n```\n\n**When to use this workflow:** This workflow works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM (using Prompting or Function-calling) or a more traditional classification model/algorithm.\n\n**Example** (see [examples/patterns/triaging.py](examples/patterns/triaging.py) for a runnable example):\n\n```python\nfrom coagent.agents import ChatAgent, Triage, Model\nfrom coagent.core import AgentSpec, new\n\nmodel = Model(...)\n\nbilling = AgentSpec(\n    \"billing\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nYou are a billing support specialist. Follow these guidelines:\n1. Always start with \"Billing Support Response:\"\n2. First acknowledge the specific billing issue\n3. Explain any charges or discrepancies clearly\n4. List concrete next steps with timeline\n5. End with payment options if relevant\n\nKeep responses professional but friendly.\\\n\"\"\",\n        model=model,\n    ),\n)\n\naccount = AgentSpec(\n    \"account\",\n    new(\n        ChatAgent,\n        system=\"\"\"\\\nYou are an account security specialist. Follow these guidelines:\n1. Always start with \"Account Support Response:\"\n2. Prioritize account security and verification\n3. Provide clear steps for account recovery/changes\n4. 
Include security tips and warnings\n5. Set clear expectations for resolution time\n\nMaintain a serious, security-focused tone.\\\n\"\"\",\n        model=model,\n    ),\n)\n\ntriage = AgentSpec(\n    \"triage\",\n    new(\n        Triage,\n        system=\"\"\"You are a triage agent who will delegate to sub-agents based on the conversation content.\"\"\",\n        model=model,\n        static_agents=[\"billing\", \"account\"],\n    ),\n)\n```\n\n\n### Autonomous Agents\n\n**Agents** are emerging in production as LLMs mature in key capabilities—understanding complex inputs, engaging in reasoning and planning, using tools reliably, and recovering from errors.\n\n```mermaid\nflowchart LR\n    H([Human]) \u003c-.-\u003e TA[Agent] -.-\u003e S([Stop])\n\n    subgraph TA[Agent]\n        A[Agent]\n        A --\u003e |Action| E([Environment])\n        E --\u003e |Feedback| A\n    end\n\n    style H fill:#ffb3ba,stroke-width:0px;\n    style A fill:#baffc9,stroke-width:0px;\n    style E fill:#ffb3ba,stroke-width:0px;\n    style TA fill:#fff,stroke:#000,stroke-width:1px,stroke-dasharray: 2 2\n```\n\n**When to use agents:** Agents can be used for open-ended problems where it’s difficult or impossible to predict the required number of steps, and where you can’t hardcode a fixed path. The Agent will potentially operate for many turns, and you must have some level of trust in its decision-making. 
Agents' autonomy makes them ideal for scaling tasks in trusted environments.\n\n**Example** (see [examples/patterns/autonomous_agent.py](examples/patterns/autonomous_agent.py) for a runnable example):\n\n```python\nfrom coagent.agents import Model\nfrom coagent.agents.react_agent import ReActAgent, RunContext\nfrom coagent.core import AgentSpec, new\n\nasync def get_current_city(ctx: RunContext) -\u003e str:\n    \"\"\"Get the current city.\"\"\"\n    ctx.report_progress(message=\"Getting the current city...\")\n    return \"Beijing\"\n\n\nasync def query_weather(ctx: RunContext, city: str) -\u003e str:\n    \"\"\"Query the weather in the given city.\"\"\"\n    ctx.report_progress(message=f\"Querying the weather in {city}...\")\n    return f\"The weather in {city} is sunny.\"\n\n\nreporter = AgentSpec(\n    \"reporter\",\n    new(\n        ReActAgent,\n        name=\"reporter\",\n        system=\"You are a helpful weather reporter\",\n        model=Model(...),\n        tools=[get_current_city, query_weather],\n    ),\n)\n```\n\n\n## Examples\n\n- [patterns](examples/patterns)\n- [agents-as-tools](examples/agents-as-tools)\n- [react-mcp](examples/react-mcp)\n- [react-vision](examples/react-vision)\n- [a2a](examples/a2a)\n- [mcp](examples/mcp)\n- [mcp-new](examples/mcp-new)\n- [structured-outputs](examples/structured-outputs)\n- [deepseek-r1](examples/deepseek-r1)\n- [translator](examples/translator)\n- [discovery](examples/discovery)\n- [notification](examples/notification)\n- [app-builder](examples/app-builder)\n- [opencsg](examples/opencsg)\n- [framework-integration](examples/framework-integration)\n- [ping-pong](examples/ping-pong)\n- [stream-ping-pong](examples/stream-ping-pong)\n- [cos](examples/cos)\n\n\n[1]: https://docs.nats.io/nats-concepts/jetstream\n[2]: https://platform.openai.com/docs/guides/structured-outputs\n[3]: https://modelcontextprotocol.io/introduction\n[4]: https://docs.nats.io/running-a-nats-service/nats_docker/nats-docker-tutorial\n[5]: 
https://www.anthropic.com/research/building-effective-agents\n[6]: https://cookbook.openai.com/examples/orchestrating_agents\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopencsgs%2Fcoagent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopencsgs%2Fcoagent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopencsgs%2Fcoagent/lists"}