{"id":30873395,"url":"https://github.com/cryxnet/deepmcpagent","last_synced_at":"2025-12-14T08:58:49.270Z","repository":{"id":312389946,"uuid":"1046952313","full_name":"cryxnet/DeepMCPAgent","owner":"cryxnet","description":"Model-agnostic plug-n-play LangChain/LangGraph agents powered entirely by MCP tools over HTTP/SSE.","archived":false,"fork":false,"pushed_at":"2025-10-18T13:02:42.000Z","size":210,"stargazers_count":631,"open_issues_count":0,"forks_count":94,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-10-18T15:26:51.035Z","etag":null,"topics":["agent-framework","agentic-ai","agents","ai","ai-agents","ai-framework","artificial-intelligence","autonomous-agents","deep-agents","developer-tools","langchain","langgraph","llm-agents","mcp","opensource-agents","python","react-agents"],"latest_commit_sha":null,"homepage":"https://cryxnet.github.io/DeepMCPAgent/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cryxnet.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-08-29T13:49:43.000Z","updated_at":"2025-10-18T12:21:15.000Z","dependencies_parsed_at":null,"dependency_job_id":"1f27ffbd-20fa-4825-8a2a-0cd539e31586","html_url":"https://github.com/cryxnet/DeepMCPAgent","commit_stats":null,"previous_names":["cryxnet/deepmcpagent"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/cryxnet/DeepMCPAgent","repository_url":"https:
//repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cryxnet%2FDeepMCPAgent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cryxnet%2FDeepMCPAgent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cryxnet%2FDeepMCPAgent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cryxnet%2FDeepMCPAgent/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cryxnet","download_url":"https://codeload.github.com/cryxnet/DeepMCPAgent/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cryxnet%2FDeepMCPAgent/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":27723993,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-12-14T02:00:11.348Z","response_time":56,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-framework","agentic-ai","agents","ai","ai-agents","ai-framework","artificial-intelligence","autonomous-agents","deep-agents","developer-tools","langchain","langgraph","llm-agents","mcp","opensource-agents","python","react-agents"],"created_at":"2025-09-07T23:01:21.441Z","updated_at":"2025-12-14T08:58:49.263Z","avatar_url":"https://github.com/cryxnet.png","language":"Python","readme":"\u003c!-- Banner / Title --\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cimg 
src=\"docs/images/icon.png\" width=\"120\" alt=\"DeepMCPAgent Logo\"/\u003e\n\n  \u003ch1\u003e🤖 DeepMCPAgent\u003c/h1\u003e\n  \u003cp\u003e\u003cstrong\u003eModel-agnostic LangChain/LangGraph agents powered entirely by \u003ca href=\"https://modelcontextprotocol.io/\"\u003eMCP\u003c/a\u003e tools over HTTP/SSE.\u003c/strong\u003e\u003c/p\u003e\n\n  \u003c!-- Badges --\u003e\n  \u003cp\u003e\n    \u003ca href=\"https://cryxnet.github.io/DeepMCPAgent\"\u003e\n      \u003cimg alt=\"Docs\" src=\"https://img.shields.io/badge/docs-latest-brightgreen.svg\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"#\"\u003e\u003cimg alt=\"Python\" src=\"https://img.shields.io/badge/Python-3.10%2B-blue.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"#\"\u003e\u003cimg alt=\"License\" src=\"https://img.shields.io/badge/License-Apache%202.0-blue.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"#\"\u003e\u003cimg alt=\"Status\" src=\"https://img.shields.io/badge/status-beta-orange.svg\"\u003e\u003c/a\u003e\n\n\u003cp\u003e\n  \u003ca href=\"https://www.producthunt.com/products/deep-mcp-agents?utm_source=badge-featured\u0026utm_medium=badge\u0026utm_source=badge-deep-mcp-agents\" target=\"_blank\"\u003e\n    \u003cimg src=\"https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=1011071\u0026theme=light\" alt=\"Deep MCP Agents on Product Hunt\" style=\"width: 250px; height: 54px;\" width=\"250\" height=\"54\" /\u003e\n  \u003c/a\u003e\n\u003c/p\u003e \n  \u003c/p\u003e\n\n  \u003cp\u003e\n    \u003cem\u003eDiscover MCP tools dynamically. Bring your own LangChain model. 
Build production-ready agents—fast.\u003c/em\u003e\n  \u003c/p\u003e\n\n  \u003cp\u003e\n    📚 \u003ca href=\"https://cryxnet.github.io/deepmcpagent/\"\u003eDocumentation\u003c/a\u003e • 🛠 \u003ca href=\"https://github.com/cryxnet/deepmcpagent/issues\"\u003eIssues\u003c/a\u003e\n  \u003c/p\u003e\n\u003c/div\u003e\n\n\u003chr/\u003e\n\n## ✨ Why DeepMCPAgent?\n\n- 🔌 **Zero manual tool wiring** — tools are discovered dynamically from MCP servers (HTTP/SSE)\n- 🌐 **External APIs welcome** — connect to remote MCP servers (with headers/auth)\n- 🧠 **Model-agnostic** — pass any LangChain chat model instance (OpenAI, Anthropic, Ollama, Groq, local, …)\n- ⚡ **DeepAgents (optional)** — if installed, you get a deep agent loop; otherwise robust LangGraph ReAct fallback\n- 🛠️ **Typed tool args** — JSON-Schema → Pydantic → LangChain `BaseTool` (typed, validated calls)\n- 🧪 **Quality bar** — mypy (strict), ruff, pytest, GitHub Actions, docs\n\n\u003e **MCP first.** Agents shouldn’t hardcode tools — they should **discover** and **call** them. 
DeepMCPAgent builds that bridge.\n\n---\n\n## 🚀 Installation\n\nInstall from [PyPI](https://pypi.org/project/deepmcpagent/):\n\n```bash\npip install \"deepmcpagent[deep]\"\n```\n\nThis installs DeepMCPAgent with **DeepAgents support (recommended)** for the best agent loop.\nOther optional extras:\n\n- `dev` → linting, typing, tests\n- `docs` → MkDocs + Material + mkdocstrings\n- `examples` → dependencies used by bundled examples\n\n```bash\n# install with deepagents + dev tooling\npip install \"deepmcpagent[deep,dev]\"\n```\n\n⚠️ If you’re using **zsh**, remember to quote extras:\n\n```bash\npip install \"deepmcpagent[deep,dev]\"\n```\n\n---\n\n## 🚀 Quickstart\n\n### 1) Start a sample MCP server (HTTP)\n\n```bash\npython examples/servers/math_server.py\n```\n\nThis serves an MCP endpoint at: **[http://127.0.0.1:8000/mcp](http://127.0.0.1:8000/mcp)**\n\n### 2) Run the example agent (with fancy console output)\n\n```bash\npython examples/use_agent.py\n```\n\n**What you’ll see:**\n\n![screenshot](/docs/images/screenshot_output.png)\n\n---\n\n## 🧑‍💻 Bring-Your-Own Model (BYOM)\n\nDeepMCPAgent lets you pass **any LangChain chat model instance** (or a provider id string if you prefer `init_chat_model`):\n\n```python\nimport asyncio\nfrom deepmcpagent import HTTPServerSpec, build_deep_agent\n\n# choose your model:\n# from langchain_openai import ChatOpenAI\n# model = ChatOpenAI(model=\"gpt-4.1\")\n\n# from langchain_anthropic import ChatAnthropic\n# model = ChatAnthropic(model=\"claude-3-5-sonnet-latest\")\n\n# from langchain_community.chat_models import ChatOllama\n# model = ChatOllama(model=\"llama3.1\")\n\n# or simply pass a provider id string (init_chat_model is called for you):\nmodel = \"openai:gpt-4.1\"\n\nasync def main():\n    servers = {\n        \"math\": HTTPServerSpec(\n            url=\"http://127.0.0.1:8000/mcp\",\n            transport=\"http\",    # or \"sse\"\n            # headers={\"Authorization\": \"Bearer \u003ctoken\u003e\"},\n        ),\n    }\n\n    graph, _ = await build_deep_agent(\n        servers=servers,\n        model=model,\n        
instructions=\"Use MCP tools precisely.\"\n    )\n\n    out = await graph.ainvoke({\"messages\":[{\"role\":\"user\",\"content\":\"add 21 and 21 with tools\"}]})\n    print(out)\n\nasyncio.run(main())\n```\n\n\u003e Tip: If you pass a **string** like `\"openai:gpt-4.1\"`, we’ll call LangChain’s `init_chat_model()` for you (and it will read env vars like `OPENAI_API_KEY`). Passing a **model instance** gives you full control.\n\n---\n\n## 🤝 Cross-Agent Communication\n\nDeepMCPAgent v0.5 introduces **Cross-Agent Communication** — agents that can _talk to each other_ without extra servers, message queues, or orchestration layers.\n\nYou can now attach one agent as a **peer** inside another, turning it into a callable tool.  \nEach peer appears automatically as `ask_agent_\u003cname\u003e` or can be reached via `broadcast_to_agents` for parallel reasoning across multiple agents.\n\nThis means your agents can **delegate**, **collaborate**, and **critique** each other — all through the same MCP tool interface.  \nIt’s lightweight, model-agnostic, and fully transparent: every peer call is traced like any other tool invocation.\n\n---\n\n### 💻 Example\n\n```python\nimport asyncio\nfrom deepmcpagent import HTTPServerSpec, build_deep_agent\nfrom deepmcpagent.cross_agent import CrossAgent\n\nasync def main():\n    # 1️⃣ Build a \"research\" peer agent\n    research_graph, _ = await build_deep_agent(\n        servers={\"web\": HTTPServerSpec(url=\"http://127.0.0.1:8000/mcp\")},\n        model=\"openai:gpt-4o-mini\",\n        instructions=\"You are a focused research assistant that finds and summarizes sources.\",\n    )\n\n    # 2️⃣ Build the main agent and attach the peer as a tool\n    main_graph, _ = await build_deep_agent(\n        servers={\"math\": HTTPServerSpec(url=\"http://127.0.0.1:9000/mcp\")},\n        model=\"openai:gpt-4.1\",\n        instructions=\"You are a lead analyst. 
Use peers when you need research or summarization.\",\n        cross_agents={\n            \"researcher\": CrossAgent(agent=research_graph, description=\"A web research peer.\")\n        },\n        trace_tools=True,  # see all tool calls + peer responses in console\n    )\n\n    # 3️⃣ Ask a question — the main agent can now call the researcher\n    result = await main_graph.ainvoke({\n        \"messages\": [{\"role\": \"user\", \"content\": \"Find recent research on AI ethics and summarize it.\"}]\n    })\n\n    print(result)\n\nasyncio.run(main())\n```\n\n🧩 **Result:**\nYour main agent automatically calls `ask_agent_researcher(...)` when it decides delegation makes sense, and the peer agent returns its best final answer — all transparently handled by the MCP layer.\n\n---\n\n### 💡 Use Cases\n\n- Researcher → Writer → Editor pipelines\n- Safety or reviewer peers that audit outputs\n- Retrieval or reasoning specialists\n- Multi-model ensembles combining small and large LLMs\n\nNo new infrastructure. 
No complex orchestration.\nJust **agents helping agents**, powered entirely by MCP over HTTP/SSE.\n\n\u003e 🧠 One framework, many minds — **DeepMCPAgent** turns individual LLMs into a cooperative system.\n\n---\n\n## 🖥️ CLI (no Python required)\n\n```bash\n# list tools from one or more HTTP servers\ndeepmcpagent list-tools \\\n  --http name=math url=http://127.0.0.1:8000/mcp transport=http \\\n  --model-id \"openai:gpt-4.1\"\n\n# interactive agent chat (HTTP/SSE servers only)\ndeepmcpagent run \\\n  --http name=math url=http://127.0.0.1:8000/mcp transport=http \\\n  --model-id \"openai:gpt-4.1\"\n```\n\n\u003e The CLI accepts **repeated** `--http` blocks; add `header.X=Y` pairs for auth:\n\u003e\n\u003e ```\n\u003e --http name=ext url=https://api.example.com/mcp transport=http header.Authorization=\"Bearer TOKEN\"\n\u003e ```\n\n---\n\n## Full Architecture \u0026 Agent Flow\n\n### 1) High-level Architecture (modules \u0026 data flow)\n\n```mermaid\nflowchart LR\n    %% Groupings\n    subgraph User[\"👤 User / App\"]\n      Q[\"Prompt / Task\"]\n      CLI[\"CLI (Typer)\"]\n      PY[\"Python API\"]\n    end\n\n    subgraph Agent[\"🤖 Agent Runtime\"]\n      DIR[\"build_deep_agent()\"]\n      PROMPT[\"prompt.py\\n(DEFAULT_SYSTEM_PROMPT)\"]\n      subgraph AGRT[\"Agent Graph\"]\n        DA[\"DeepAgents loop\\n(if installed)\"]\n        REACT[\"LangGraph ReAct\\n(fallback)\"]\n      end\n      LLM[\"LangChain Model\\n(instance or init_chat_model(provider-id))\"]\n      TOOLS[\"LangChain Tools\\n(BaseTool[])\"]\n    end\n\n    subgraph MCP[\"🧰 Tooling Layer (MCP)\"]\n      LOADER[\"MCPToolLoader\\n(JSON-Schema ➜ Pydantic ➜ BaseTool)\"]\n      TOOLWRAP[\"_FastMCPTool\\n(async _arun → client.call_tool)\"]\n    end\n\n    subgraph FMCP[\"🌐 FastMCP Client\"]\n      CFG[\"servers_to_mcp_config()\\n(mcpServers dict)\"]\n      MULTI[\"FastMCPMulti\\n(fastmcp.Client)\"]\n    end\n\n    subgraph SRV[\"🛠 MCP Servers (HTTP/SSE)\"]\n      S1[\"Server A\\n(e.g., math)\"]\n      
S2[\"Server B\\n(e.g., search)\"]\n      S3[\"Server C\\n(e.g., github)\"]\n    end\n\n    %% Edges\n    Q --\u003e|query| CLI\n    Q --\u003e|query| PY\n    CLI --\u003e DIR\n    PY --\u003e DIR\n\n    DIR --\u003e PROMPT\n    DIR --\u003e LLM\n    DIR --\u003e LOADER\n    DIR --\u003e AGRT\n\n    LOADER --\u003e MULTI\n    CFG --\u003e MULTI\n    MULTI --\u003e|list_tools| SRV\n    LOADER --\u003e TOOLS\n    TOOLS --\u003e AGRT\n\n    AGRT \u003c--\u003e|messages| LLM\n    AGRT --\u003e|tool calls| TOOLWRAP\n    TOOLWRAP --\u003e MULTI\n    MULTI --\u003e|call_tool| SRV\n\n    SRV --\u003e|tool result| MULTI --\u003e TOOLWRAP --\u003e AGRT --\u003e|final answer| CLI\n    AGRT --\u003e|final answer| PY\n```\n\n---\n\n### 2) Runtime Sequence (end-to-end tool call)\n\n```mermaid\nsequenceDiagram\n    autonumber\n    participant U as User\n    participant CLI as CLI/Python\n    participant Builder as build_deep_agent()\n    participant Loader as MCPToolLoader\n    participant Graph as Agent Graph (DeepAgents or ReAct)\n    participant LLM as LangChain Model\n    participant Tool as _FastMCPTool\n    participant FMCP as FastMCP Client\n    participant S as MCP Server (HTTP/SSE)\n\n    U-\u003e\u003eCLI: Enter prompt\n    CLI-\u003e\u003eBuilder: build_deep_agent(servers, model, instructions?)\n    Builder-\u003e\u003eLoader: get_all_tools()\n    Loader-\u003e\u003eFMCP: list_tools()\n    FMCP-\u003e\u003eS: HTTP(S)/SSE list_tools\n    S--\u003e\u003eFMCP: tools + JSON-Schema\n    FMCP--\u003e\u003eLoader: tool specs\n    Loader--\u003e\u003eBuilder: BaseTool[]\n    Builder--\u003e\u003eCLI: (Graph, Loader)\n\n    U-\u003e\u003eGraph: ainvoke({messages:[user prompt]})\n    Graph-\u003e\u003eLLM: Reason over system + messages + tool descriptions\n    LLM--\u003e\u003eGraph: Tool call (e.g., add(a=3,b=5))\n    Graph-\u003e\u003eTool: _arun(a=3,b=5)\n    Tool-\u003e\u003eFMCP: call_tool(\"add\", {a:3,b:5})\n    FMCP-\u003e\u003eS: POST /mcp tools.call(\"add\", {...})\n    
S--\u003e\u003eFMCP: result { data: 8 }\n    FMCP--\u003e\u003eTool: result\n    Tool--\u003e\u003eGraph: ToolMessage(content=8)\n\n    Graph-\u003e\u003eLLM: Continue with observations\n    LLM--\u003e\u003eGraph: Final response \"(3 + 5) * 7 = 56\"\n    Graph--\u003e\u003eCLI: messages (incl. final LLM answer)\n```\n\n---\n\n### 3) Agent Control Loop (planning \u0026 acting)\n\n```mermaid\nstateDiagram-v2\n    [*] --\u003e AcquireTools\n    AcquireTools: Discover MCP tools via FastMCP\\n(JSON-Schema ➜ Pydantic ➜ BaseTool)\n    AcquireTools --\u003e Plan\n\n    Plan: LLM plans next step\\n(uses system prompt + tool descriptions)\n    Plan --\u003e CallTool: if tool needed\n    Plan --\u003e Respond: if direct answer sufficient\n\n    CallTool: _FastMCPTool._arun\\n→ client.call_tool(name, args)\n    CallTool --\u003e Observe: receive tool result\n    Observe: Parse result payload (data/text/content)\n    Observe --\u003e Decide\n\n    Decide: More tools needed?\n    Decide --\u003e Plan: yes\n    Decide --\u003e Respond: no\n\n    Respond: LLM crafts final message\n    Respond --\u003e [*]\n```\n\n---\n\n### 4) Code Structure (types \u0026 relationships)\n\n```mermaid\nclassDiagram\n    class StdioServerSpec {\n      +command: str\n      +args: List[str]\n      +env: Dict[str,str]\n      +cwd: Optional[str]\n      +keep_alive: bool\n    }\n\n    class HTTPServerSpec {\n      +url: str\n      +transport: Literal[\"http\",\"streamable-http\",\"sse\"]\n      +headers: Dict[str,str]\n      +auth: Optional[str]\n    }\n\n    class FastMCPMulti {\n      -_client: fastmcp.Client\n      +client(): Client\n    }\n\n    class MCPToolLoader {\n      -_multi: FastMCPMulti\n      +get_all_tools(): List[BaseTool]\n      +list_tool_info(): List[ToolInfo]\n    }\n\n    class _FastMCPTool {\n      +name: str\n      +description: str\n      +args_schema: Type[BaseModel]\n      -_tool_name: str\n      -_client: Any\n      +_arun(**kwargs) async\n    }\n\n    class ToolInfo {\n      
+server_guess: str\n      +name: str\n      +description: str\n      +input_schema: Dict[str,Any]\n    }\n\n    class build_deep_agent {\n      +servers: Mapping[str,ServerSpec]\n      +model: ModelLike\n      +instructions?: str\n      +returns: (graph, loader)\n    }\n\n    ServerSpec \u003c|-- StdioServerSpec\n    ServerSpec \u003c|-- HTTPServerSpec\n    FastMCPMulti o--\u003e ServerSpec : uses servers_to_mcp_config()\n    MCPToolLoader o--\u003e FastMCPMulti\n    MCPToolLoader --\u003e _FastMCPTool : creates\n    _FastMCPTool ..\u003e BaseTool\n    build_deep_agent --\u003e MCPToolLoader : discovery\n    build_deep_agent --\u003e _FastMCPTool : tools for agent\n```\n\n---\n\n\u003e These diagrams reflect the current implementation:\n\u003e\n\u003e - **Model is required** (string provider-id or LangChain model instance).\n\u003e - **MCP tools only**, discovered at runtime via **FastMCP** (HTTP/SSE).\n\u003e - Agent loop prefers **DeepAgents** if installed; otherwise **LangGraph ReAct**.\n\u003e - Tools are typed via **JSON-Schema ➜ Pydantic ➜ LangChain BaseTool**.\n\u003e - Fancy console output shows **discovered tools**, **calls**, **results**, and **final answer**.\n\n---\n\n## 🧪 Development\n\n```bash\n# install dev tooling\npip install -e \".[dev]\"\n\n# lint \u0026 type-check\nruff check .\nmypy\n\n# run tests\npytest -q\n```\n\n---\n\n## 🛡️ Security \u0026 Privacy\n\n- **Your keys, your model** — we don’t enforce a provider; pass any LangChain model.\n- Use **HTTP headers** in `HTTPServerSpec` to deliver bearer/OAuth tokens to servers.\n\n---\n\n## 🧯 Troubleshooting\n\n- **PEP 668: externally managed environment (macOS + Homebrew)**\n  Use a virtualenv:\n\n  ```bash\n  python3 -m venv .venv\n  source .venv/bin/activate\n  ```\n\n- **404 Not Found when connecting**\n  Ensure your server uses a path (e.g., `/mcp`) and your client URL includes it.\n- **Tool calls failing / attribute errors**\n  Ensure you’re on the latest version; our tool wrapper uses 
`PrivateAttr` for client state.\n- **High token counts**\n  That’s normal with tool-calling models. Use smaller models for dev.\n\n---\n\n## 📄 License\n\nApache-2.0 — see [`LICENSE`](/LICENSE).\n\n---\n\n## ⭐ Stars\n\n\u003cpicture\u003e\n  \u003csource\n    media=\"(prefers-color-scheme: dark)\"\n    srcset=\"\n      https://api.star-history.com/svg?repos=cryxnet/DeepMCPAgent\u0026type=Date\u0026theme=dark\n    \"\n  /\u003e\n  \u003csource\n    media=\"(prefers-color-scheme: light)\"\n    srcset=\"\n      https://api.star-history.com/svg?repos=cryxnet/DeepMCPAgent\u0026type=Date\n    \"\n  /\u003e\n  \u003cimg\n    alt=\"Star History Chart\"\n    src=\"https://api.star-history.com/svg?repos=cryxnet/DeepMCPAgent\u0026type=Date\"\n  /\u003e\n\u003c/picture\u003e\n\n## 🙏 Acknowledgments\n\n- The [**MCP** community](https://modelcontextprotocol.io/) for a clean protocol.\n- [**LangChain**](https://www.langchain.com/) and [**LangGraph**](https://www.langchain.com/langgraph) for powerful agent runtimes.\n- [**FastMCP**](https://gofastmcp.com/getting-started/welcome) for solid client \u0026 server implementations.\n","funding_links":[],"categories":["🛠️ Developer Tools"],"sub_categories":["🟩 Development Tools 🛠️"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcryxnet%2Fdeepmcpagent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcryxnet%2Fdeepmcpagent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcryxnet%2Fdeepmcpagent/lists"}