{"id":32789761,"url":"https://github.com/runedgeai/agents-cpp-sdk","last_synced_at":"2026-01-27T03:00:55.166Z","repository":{"id":318412564,"uuid":"1069831464","full_name":"RunEdgeAI/agents-cpp-sdk","owner":"RunEdgeAI","description":"A high performance C++ SDK for AI Agents ","archived":false,"fork":false,"pushed_at":"2026-01-15T02:13:42.000Z","size":54133,"stargazers_count":27,"open_issues_count":1,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-01-15T08:39:08.841Z","etag":null,"topics":["agentic-ai","agents","agents-sdk","ai-agents","ai-agents-framework","anthropic","artificial-intelligence","bazel","cpp","edge-ai","edge-ai-agents","gemini","generative-ai","llama-cpp","llm","local-ai","ollama","openai","sdk"],"latest_commit_sha":null,"homepage":"https://www.runedge.ai/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/RunEdgeAI.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":"amikhail48","patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"buy_me_a_coffee":"amikhail","thanks_dev":null,"custom":null}},"created_at":"2025-10-04T17:53:42.000Z","updated_at":"2026-01-15T02:13:45.000Z","dependencies_parsed_at":"2025-10-07T04:21:51.665Z","dependency_job_id":null,"html_url":"https://github.com/RunEdgeAI/agents-cpp-sdk","commit_stats":null,"previous_names
":["runedgeai/agents-sdk"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/RunEdgeAI/agents-cpp-sdk","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RunEdgeAI%2Fagents-cpp-sdk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RunEdgeAI%2Fagents-cpp-sdk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RunEdgeAI%2Fagents-cpp-sdk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RunEdgeAI%2Fagents-cpp-sdk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/RunEdgeAI","download_url":"https://codeload.github.com/RunEdgeAI/agents-cpp-sdk/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/RunEdgeAI%2Fagents-cpp-sdk/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28798587,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-27T01:07:07.743Z","status":"online","status_checked_at":"2026-01-27T02:00:07.755Z","response_time":168,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agentic-ai","agents","agents-sdk","ai-agents","ai-agents-framework","anthropic","artificial-intelligence","bazel","cpp","edge-ai","edge-ai-agents","gemini","generative-ai","llama-cpp","llm","local-ai","ollama","openai","sdk"],"created_at":"2025-11-05T11:01:09.418Z","updated_at":"2026-01-27T03:00:55.108Z",
"avatar_url":"https://github.com/RunEdgeAI.png","language":"C++","readme":"[TOC]\n# Agents-SDK - A High Performance C++ Framework for AI Agents\n\n![Linux](https://img.shields.io/badge/Linux-FCC624?logo=linux\u0026logoColor=black)\n![macOS](https://img.shields.io/badge/macOS-000000?logo=apple\u0026logoColor=F0F0F0)\n![Windows](https://custom-icon-badges.demolab.com/badge/Windows-0078D6?logo=windows11\u0026logoColor=white)\n\n**Agents-SDK** is a **portable, high-performance C++ framework** for building **on-device, agentic AI systems** — think **LangChain for the edge**. This SDK is purpose-built for developers who want to create **local-first AI agents** that can reason, plan, and act without relying on the cloud.\n\n## Features\n\n- **Modular Architecture** — Compose agents from interchangeable components.\n- **Multi-Provider Support** — Connect to multiple LLM providers seamlessly:\n  - **OpenAI** (GPT-5, GPT-4o, GPT-4)\n  - **Anthropic** (Claude 3 family models: Opus, Sonnet, Haiku)\n  - **Google** (Gemini family models: Pro, Flash)\n  - **Ollama/llama-cpp** (local models like Llama, Mistral, etc.)\n- **Optimized for Speed and Memory** — Built in C++ with a focus on performance.\n- **Built-In Workflow Patterns**\n  - Prompt Chaining\n  - Routing\n  - Parallelization\n  - Orchestrator-Workers\n  - Evaluator-Optimizer\n- **Autonomous Agents** — Supports modern reasoning strategies:\n  - ReAct (Reason + Act)\n  - CoT (Chain-of-Thought) [In Development]\n  - Plan and Execute\n  - Zero-Shot [In Development]\n  - Reflexion [In Development]\n- **Extensible Tooling System** — Plug in your own tools or use built-in ones (Web Search, Wikipedia, Python Executor, etc.).\n\n## Requirements\n\n- C++20-compatible compiler (GCC 14+, Clang 17+, MSVC 2022+)\n- Bazel 8.3.1+\n- Dependencies (already provided for convenience)\n   - python3 (3.11+)\n   - nlohmann/json\n   - spdlog\n\n## Quick Start\n\n### Installation\n\n1. 
Clone the repository:\n   ```bash\n   git clone https://github.com/RunEdgeAI/agents-cpp-sdk.git\n   ```\n\n2. Navigate to the SDK directory:\n   ```bash\n   cd agents-cpp-sdk\n   ```\n\n3. Obtain API keys:\n   - For OpenAI models: Get an API key from [OpenAI's platform](https://platform.openai.com/api-keys)\n   - For Anthropic models: Get an API key from [Anthropic's console](https://console.anthropic.com/account/keys)\n   - For Google models: Get an API key from [Google AI Studio](https://ai.google.dev/)\n   - For the Web Search tool: Get an API key from [serpapi](https://serpapi.com)\n\n### Building\n\nBuild everything in the workspace:\n\n```bash\nbazel build ...\n```\n\n### Configuration\n\nYou can configure API keys and other settings in three ways:\n\n1. Using a `.env` file:\n   ```bash\n   # Copy the template\n   cp .env.template .env\n\n   # Edit the file with your API keys\n   vi .env  # or use any editor\n   ```\n\n2. Using environment variables:\n   ```bash\n   export OPENAI_API_KEY=your_api_key_here\n   export ANTHROPIC_API_KEY=your_api_key_here\n   export GEMINI_API_KEY=your_api_key_here\n   export WEBSEARCH_API_KEY=your_api_key_here\n   ```\n\n3. Passing API keys as command-line arguments (not recommended for production):\n   ```bash\n   bazel run examples:simple_agent -- your_api_key_here\n   ```\n\nThe framework will check for API keys in the following order:\n1. `.env` file\n2. Environment variables\n3. 
Command-line arguments\n\n#### Python Tool Setup\nTo use the Python Code Execution Tool, ensure your Python environment is correctly configured so that the SDK can locate your Python runtime and libraries.\n```bash\nexport PYTHONHOME=$(python3 -c \"import sys; print(sys.prefix)\")\nexport PYTHONPATH=$(python3 -c \"import sysconfig; print(sysconfig.get_path('stdlib'))\")\n```\n\n### Usage\n\nHere's a simple example of creating and running an autonomous agent:\n\n```cpp\n#include \u003cagents-cpp/context.h\u003e\n#include \u003cagents-cpp/agents/autonomous_agent.h\u003e\n#include \u003cagents-cpp/llm_interface.h\u003e\n#include \u003cagents-cpp/tools/tool_registry.h\u003e\n\n#include \u003ciostream\u003e\n\nusing namespace agents;\n\nint main() {\n    // Create LLM\n    auto llm = createLLM(\"anthropic\", \"\u003cyour_api_key_here\u003e\", \"claude-3-5-sonnet-20240620\");\n\n    // Create agent context\n    auto context = std::make_shared\u003cContext\u003e();\n    context-\u003esetLLM(llm);\n\n    // Register tools\n    context-\u003eregisterTool(tools::createWebSearchTool(llm));\n\n    // Create the agent\n    AutonomousAgent agent(context);\n    agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);\n\n    // Run the agent\n    JsonObject result = agent.run(\"Research the latest developments in quantum computing\");\n\n    // Access the result\n    std::cout \u003c\u003c result[\"answer\"].get\u003cstd::string\u003e() \u003c\u003c std::endl;\n\n    return 0;\n}\n```\n\n### Running Your First Example\n\nThe simplest way to start is with the `simple_agent` example, which creates a basic autonomous agent that can use tools to answer questions:\n\n1. Navigate to the repository directory:\n   ```bash\n   cd agents-cpp-sdk\n   ```\n\n1. 
From the repository directory, run the example:\n   ```bash\n   bazel run examples:simple_agent -- your_api_key_here\n   ```\n\n   Alternatively, you can set your API key as an environment variable:\n   ```bash\n   export OPENAI_API_KEY=your_api_key_here\n   bazel run examples:simple_agent\n   ```\n\n2. Once running, you'll be prompted to enter a question or task. For example:\n   ```\n   Enter a question or task for the agent (or 'exit' to quit):\n   \u003e What's the current status of quantum computing research?\n   ```\n\n3. The agent will:\n   - Break down the task into steps\n   - Use tools (like web search) to gather information\n   - Ask for your approval before proceeding with certain steps (if human-in-the-loop is enabled)\n   - Provide a comprehensive answer\n\n4. Example output:\n   ```\n   Step: Planning how to approach the question\n   Status: Completed\n   Result: {\n     \"plan\": \"1. Search for recent quantum computing research developments...\"\n   }\n   --------------------------------------\n   Step: Searching for information on quantum computing research\n   Status: Waiting for approval\n   Context: {\"search_query\": \"current status quantum computing research 2024\"}\n   Approve this step? 
(y/n): y\n   ...\n   ```\n\n### Configuring the Example\n\nYou can modify `examples/simple_agent.cpp` to explore different configurations:\n\n- Change the LLM provider:\n  ```cpp\n  // For Anthropic Claude\n  auto llm = createLLM(\"anthropic\", api_key, \"claude-3-5-sonnet-20240620\");\n\n  // For Google Gemini\n  auto llm = createLLM(\"google\", api_key, \"gemini-pro\");\n  ```\n\n- Add different tools:\n  ```cpp\n  // Add more built-in tools\n  context-\u003eregisterTool(tools::createCalculatorTool());\n  context-\u003eregisterTool(tools::createPythonCodeExecutionTool());\n  ```\n\n- Change the planning strategy:\n  ```cpp\n  // Use ReAct planning (reasoning + acting)\n  agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);\n\n  // Or use CoT planning (chain-of-thought)\n  agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::COT);\n  ```\n\n## Included Examples\n\nThe repository includes several examples demonstrating different workflow patterns:\n\n| Example                       | Description                           |\n| ----------------------------- | ------------------------------------- |\n| `simple_agent`                | Basic autonomous agent                |\n| `prompt_chain_example`        | Prompt chaining workflow              |\n| `routing_example`             | Multi-agent routing                   |\n| `parallel_example`            | Parallel task execution               |\n| `orchestrator_example`        | Orchestrator–worker pattern           |\n| `evaluator_optimizer_example` | Evaluator–optimizer feedback loop     |\n| `multimodal_example`          | Support for voice, audio, image, docs |\n| `autonomous_agent_example`    | Full-featured autonomous agent        |\n\nRun any of the available examples:\n\n```bash\nbazel run examples:\u003cexample_name\u003e -- your_api_key_here\n```\n\n## Project Structure\n\n- `lib/`: Public library for the SDK\n- `include/agents-cpp/`: Public headers\n  - `types.h`: Common type definitions\n  - 
`context.h`: Context for agent execution\n  - `llm_interface.h`: Interface for LLM providers\n  - `tool.h`: Tool interface\n  - `memory.h`: Agent memory interface\n  - `workflow.h`: Base workflow interface\n  - `agent.h`: Base agent interface\n  - `workflows/`: Workflow pattern implementations\n  - `agents/`: Agent implementations\n  - `tools/`: Tool implementations\n  - `llms/`: LLM provider implementations\n- `bin/examples/`: Example applications\n\n## Extending the SDK\n\n### Adding Custom Tools\n\n```cpp\nauto custom_tool = createTool(\n    \"calculator\",\n    \"Evaluates mathematical expressions\",\n    {\n        {\"expression\", \"The expression to evaluate\", \"string\", true}\n    },\n    [](const JsonObject\u0026 params) -\u003e ToolResult {\n        std::string expr = params[\"expression\"];\n        // Implement calculation logic here\n        double result = evaluate(expr);\n        return ToolResult{\n            true,\n            \"Result: \" + std::to_string(result),\n            {{\"result\", result}}\n        };\n    }\n);\n\ncontext-\u003eregisterTool(custom_tool);\n```\n\n### Creating Custom Workflows\n\nYou can create custom workflows by extending the `Workflow` base class or combining existing workflows:\n\n```cpp\nclass CustomWorkflow : public Workflow {\npublic:\n    CustomWorkflow(std::shared_ptr\u003cContext\u003e context)\n        : Workflow(context) {}\n\n    JsonObject run(const std::string\u0026 input) override {\n        // Implement your custom workflow logic here\n        return JsonObject{};\n    }\n};\n```\n\n## Running in Production?\n\nDon't let infrastructure slow you down. 
Our Pro version helps accelerate your roadmap with:\n\n* **MCP Support:** Enable your agent to utilize local and remote MCPs.\n* **Premium Tools:** Access the complete set of natively supported tools, including weather, research, wolfram-alpha, and more.\n* **Voice SDK:** Access to Edge AI's Speech-to-Text, Text-to-Speech, and Voice-Activity-Detection libraries and models.\n\n👉 **[Start a free Pro trial](https://runedge.ai/pricing)**\n\n## Support\n\n- Email: support@runedge.ai\n- Discord: https://discord.gg/D5unWmt8\n\n## Acknowledgements\n\nThis implementation is inspired by Anthropic's article [\"Building effective agents\"](https://www.anthropic.com/research/building-effective-agents) and re-engineered in C++ for real-time, low-overhead usage on edge devices.\n\n## License\n\nThis project is licensed under an Evaluation License; see the [LICENSE](./LICENSE.md) file for details.\n\n---\n\n\u003cp align=\"center\"\u003e\n  \u003cstrong\u003eThe future of AI is on-device\u003c/strong\u003e\u003cbr\u003e\n  Start with our samples and discover how we can empower the next generation of AI applications.\n\u003c/p\u003e\n","funding_links":["https://github.com/sponsors/amikhail48","https://buymeacoffee.com/amikhail"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frunedgeai%2Fagents-cpp-sdk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frunedgeai%2Fagents-cpp-sdk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frunedgeai%2Fagents-cpp-sdk/lists"}