{"id":28014145,"url":"https://github.com/orra-dev/orra","last_synced_at":"2025-05-10T03:01:34.000Z","repository":{"id":237205218,"uuid":"794028600","full_name":"orra-dev/orra","owner":"orra-dev","description":"Resilience for AI Agent workflows.","archived":false,"fork":false,"pushed_at":"2025-05-05T22:06:25.000Z","size":1990,"stargazers_count":194,"open_issues_count":27,"forks_count":9,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-05-05T23:19:56.406Z","etag":null,"topics":["agents","ai","ai-agents","ai-developer-tools","ai-in-production","durable-execution","go","golang","javascript-sdk","llm-apps","orchestrator","python-sdk","reasoning","reliability"],"latest_commit_sha":null,"homepage":"https://orra.dev","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/orra-dev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-04-30T10:17:14.000Z","updated_at":"2025-05-05T15:39:15.000Z","dependencies_parsed_at":"2024-05-07T15:30:49.672Z","dependency_job_id":"cbeeeae1-ec4a-40eb-9e2a-50e5b393d6c8","html_url":"https://github.com/orra-dev/orra","commit_stats":null,"previous_names":["grro-xyz/orra","ezodude/orra","orra-dev/orra"],"tags_count":11,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/orra-dev%2Forra","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/orra-dev%2Forra/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/orra-dev%2Forra/rele
ases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/orra-dev%2Forra/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/orra-dev","download_url":"https://codeload.github.com/orra-dev/orra/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253356253,"owners_count":21895670,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","ai-agents","ai-developer-tools","ai-in-production","durable-execution","go","golang","javascript-sdk","llm-apps","orchestrator","python-sdk","reasoning","reliability"],"created_at":"2025-05-10T03:01:02.000Z","updated_at":"2025-05-10T03:01:33.810Z","avatar_url":"https://github.com/orra-dev.png","language":"Go","funding_links":[],"categories":["Large Language Model"],"sub_categories":["DevTools"],"readme":"# 🪡 orra\n\nOrra is infrastructure for resilient AI agent workflows. It helps your agents recover from failures like API outages, failed evals, and more - keeping your workflows moving forward.\n\n![](images/orra-diagram.png)\n\nBy intelligently coordinating tasks across your agents, tools, and existing stack, orra ensures robust execution in any environment. 
It’s designed to work seamlessly with any language, agent framework, or deployment platform.\n\n* 🧠 Planning agent with automatic agent/service discovery\n* 🗿 Durable execution with state persistence\n* 🎯 Pre-validated execution plans\n* ↩️ Revert state to handle failures\n* 🕵 Audit logs for traceability\n* 🚀 Go fast and save cost with tools as services\n* ⛑️ Automatic health monitoring\n* 🔮 Real-time status tracking\n* 🏢 On-premises deployment\n* 🪝 Webhook notifications for completions and failures\n\n[Learn why we built orra →](https://tinyurl.com/orra-launch-blog-post)\n\n### Coming Soon\n\n* Integration adapters for popular agent frameworks\n* Scale your workflows with reliable coverage\n* Planning course correction for failed evals\n* Agent replay and multi-LLM consensus planning\n* End-to-end encryption\n* Granular workflow access controls\n* Continuous adjustment of Agent workflows during runtime\n* Additional language SDKs - Ruby, .NET and Go very soon!\n* MCP integration\n* SOC 2 and GDPR readiness to meet the needs of regulated industries\n\n## Table of Contents\n\n- [Installation](#installation)\n- [How The Plan Engine Works](#how-the-plan-engine-works)\n- [How orra compares](#how-orra-compares)\n- [Guides](#guides)\n- [Explore Examples](#explore-examples)\n- [Docs](#docs)\n- [Self Hosting \u0026 On-premises Deployment](#self-hosting--on-premises-deployment)\n- [Support](#support)\n- [Telemetry](#telemetry)\n- [License](#license)\n\n## Installation\n\n### Prerequisites\n\n- [Docker](https://docs.docker.com/desktop/) and [Docker Compose](https://docs.docker.com/compose/install/) - For running the Plan Engine\n- Set up Reasoning and Embedding Models to power task planning and execution plan caching/validation\n\n#### Setup Models for Plan Engine\n\nSelect from a variety of supported models:\n\n**Reasoning Models**:\n- OpenAI's `o1-mini` or `o3-mini` on cloud\n- `deepseek-r1` or `qwq-32b` on cloud or self-hosted (on-premises or locally)\n\n**Embedding 
Models**:\n- OpenAI's `text-embedding-3-small` on cloud\n- `jina-embeddings-v2-small-en` on cloud or self-hosted (on-premises or locally)\n\n\u003e **Note**: The Plan Engine requires all model endpoints to be **OpenAI API-compatible**. Most model serving solutions (like vLLM, LMStudio, Ollama, etc.) can be configured to expose this compatible API format.\n\n**Quick Cloud Setup Example**:\n\nUpdate the .env based on the [_env](planengine/_env) file with these settings:\n\n```shell\n# OpenAI Reasoning\nLLM_MODEL=o1-mini\nLLM_API_KEY=your_api_key\nLLM_API_BASE_URL=https://api.openai.com/v1\n\n# OpenAI Embeddings\nEMBEDDINGS_MODEL=text-embedding-3-small\nEMBEDDINGS_API_KEY=your_api_key\nEMBEDDINGS_API_BASE_URL=https://api.openai.com/v1\n```\n\n**Self-hosted/On-premises Example**:\n\nUpdate the .env based on the [_env](planengine/_env) file with these settings:\n\n```shell\n# Self-hosted QwQ model\nLLM_MODEL=qwq-32b-q8\nLLM_API_KEY=your_internal_key  # Optional depending on your setup\nLLM_API_BASE_URL=http://your-internal-server:8000/v1\n\n# Self-hosted Jina embeddings\nEMBEDDINGS_MODEL=jina-embeddings-v2-small-en\nEMBEDDINGS_API_KEY=your_internal_key  # Optional depending on your setup\nEMBEDDINGS_API_BASE_URL=http://your-internal-server:8001/v1\n```\n\n→ [Complete Model Configuration Documentation](docs/model-configuration.md)\n\n### 1. Install orra CLI\n\nDownload the latest CLI binary for your platform from our [releases page](https://github.com/orra-dev/orra/releases):\n\n```shell\n# macOS\ncurl -L https://github.com/orra-dev/orra/releases/download/v0.2.6/orra-darwin-arm64 -o /usr/local/bin/orra\nchmod +x /usr/local/bin/orra\n\n# Linux\ncurl -L https://github.com/orra-dev/orra/releases/download/v0.2.6/orra-linux-amd64 -o /usr/local/bin/orra\nchmod +x /usr/local/bin/orra\n\n# Verify installation\norra version\n```\n\n→ [Full CLI documentation](docs/cli.md)\n\n### 2. 
Get orra Plan Engine Running\n\nClone the repository and start the Plan Engine:\n\n```shell\ngit clone https://github.com/orra-dev/orra.git\ncd orra/planengine\n\n# Start the Plan Engine\ndocker compose up --build\n```\n\n## How The Plan Engine Works\n\nThe Plan Engine powers your multi-agent applications through intelligent planning and reliable execution:\n\n### Progressive Planning Levels\n\n#### 1. Base Planning\n\nYour agents stay clean and simple (wrapped in the orra SDK):\n\n**Python**\n```python\nfrom orra import OrraAgent, Task\nfrom pydantic import BaseModel\n\nclass ResearchInput(BaseModel):\n    topic: str\n    depth: str\n\nclass ResearchOutput(BaseModel):\n    summary: str\n\nagent = OrraAgent(\n    name=\"research-agent\",\n    description=\"Researches topics using web search and knowledge base\",\n    url=\"https://api.orra.dev\",\n    api_key=\"sk-orra-...\"\n)\n\n@agent.handler()\nasync def research(task: Task[ResearchInput]) -\u003e ResearchOutput:\n    results = await run_research(task.input.topic, task.input.depth)\n    return ResearchOutput(summary=results.summary)\n```\n\n**JavaScript**\n```javascript\nimport { initAgent } from '@orra.dev/sdk';\n\nconst agent = initAgent({\n  name: 'research-agent',\n  orraUrl: process.env.ORRA_URL,  \n  orraKey: process.env.ORRA_API_KEY\n});\n\nawait agent.register({\n  description: 'Researches topics using web search and knowledge base',\n  schema: {\n    input: {\n      type: 'object',\n      properties: {\n        topic: { type: 'string' },\n        depth: { type: 'string' }\n      }\n    },\n    output: {\n      type: 'object',\n      properties: {\n        summary: { type: 'string' }\n      }\n    }\n  }\n});\n\nagent.start(async (task) =\u003e {\n  const results = await runResearch(task.input.topic, task.input.depth);\n  return { summary: results.summary };\n});\n```\n\nFeatures:\n* AI analyzes intent and creates execution plans that target your components\n* Automatic service discovery and 
coordination\n* Parallel execution where possible\n\n#### 2. Production Planning with Domain Grounding\n\n```yaml\n# Define domain constraints\nname: research-workflow\ndomain: content-generation\nuse-cases:\n  - action: \"Research topic {topic}\"\n    capabilities: \n      - \"Web search access\"\n      - \"Knowledge synthesis\"\nconstraints:\n  - \"Verify sources before synthesis\"\n  - \"Maximum research time: 10 minutes\"\n```\n\nFeatures:\n* Full semantic validation of execution plans\n* Capability matching and verification\n* Safety constraints enforcement\n* State transition validation\n\n#### 3. Reliable Execution\n\n```bash\n# Execute an action with the Plan Engine\norra verify run \"Research and summarize AI trends\" \\\n  --data topic:\"AI in 2024\" \\\n  --data depth:\"comprehensive\"\n```\n\nThe Plan Engine ensures:\n* Automatic service health monitoring\n* Stateful execution tracking\n* Built-in retries and recovery\n* Real-time status updates\n* Webhook events for result delivery and monitoring\n\n## How orra compares\n\nOrra takes a unique approach to AI workflow orchestration. Here's how it compares to other solutions:\n\n|                       | **orra**                                                                                                     | **Agent Frameworks**\u003cbr/\u003e(e.g. Mastra, LangGraph)                             | **Workflow Engines**\u003cbr/\u003e(e.g. 
Temporal, Inngest)        |\n|-----------------------|--------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|----------------------------------------------------------|\n| **Purpose**           | Multi-agent coordination layer                                                                               | Build individual AI agents                                                    | Run pre-planned workflows                                |\n| **Planning Style**    | AI-driven plan generation                                                                                    | Hardcoded agent workflows                                                     | Manual workflow definition                               |\n| **Error Recovery**    | Auto-recovery without restart                                                                                | Try/catch manual handling                                                     | Config-based retry policies                              |\n| **Best For**          | Complex unpredictable workflows                                                                              | Single agent development                                                      | Repeatable business processes                            |\n| **Example Use**       | \"Deliver this product by Friday\" → dynamically coordinates research, inventory, delivery, and payment agents | \"Analyze this document\" → fixed steps of reading, extracting, and summarizing | \"Process new signup\" → predefined steps with retry logic |\n\nOrra is for building AI systems that need to adapt and recover when things go wrong, without brittle scripts or manual fixes.\n\n## Guides\n\n- [From Fragile to Production-Ready Multi-Agent App](https://github.com/orra-dev/agent-fragile-to-prod-guide)\n- [From Fragile to Production-Ready Multi-Agent 
App (with Cloudflare Agents)](https://github.com/orra-dev/agent-fragile-to-prod-guide-with-cf-agents)\n\n## Explore Examples\n\n- 🛒 [E-commerce AI Assistant (JavaScript)](examples/ecommerce-agent-app) - E-commerce customer service with a specialized delivery agent\n- 👻 [Ghostwriters (Python)](examples/crewai-ghostwriters) - Content generation example showcasing how to use orra with [CrewAI](https://www.crewai.com)\n- 📣 [Echo Tools as Service (JavaScript)](examples/echo-js) - Simple example showing core concepts using JS\n- 📣 [Echo Tools as Service (Python)](examples/echo-python) - Simple example showing core concepts using Python\n\n## Docs\n\n- [Rapid Multi-Agent App Development with orra](docs/rapid-agent-app-devlopment.md)\n- [What is an Agent in orra?](docs/what-is-agent.md)\n- [Orchestrating Actions with orra](docs/actions.md)\n- [Monitoring with Webhooks](docs/monitoring-with-webhooks.md)\n- [Domain Grounding Execution](docs/grounding.md)\n- [Execution Plan Caching](docs/plan-caching.md)\n- [Core Topics \u0026 Internals](docs/core.md)\n- [Model Configuration for the orra Plan Engine](docs/model-configuration.md)\n\n## Self Hosting \u0026 On-premises Deployment\n\n### Running Plan Engine\n\nThe orra Plan Engine is packaged with a [Dockerfile](planengine/Dockerfile) for easy deployment:\n\n- **Local Development**: [Run it as a single instance](#installation) using Docker or Docker Compose\n- **On-premises Deployment**: Deploy in your own infrastructure with your preferred orchestration system\n- **Cloud Service**: Run on managed container services like [Digital Ocean's App Platform](https://docs.digitalocean.com/products/app-platform/how-to/deploy-from-monorepo/) or any Kubernetes environment\n\n### Using Self-hosted Models (Remote or On-premises)\n\nThe Plan Engine fully supports self-hosted open-source models:\n\n- **Reasoning**: Deploy `deepseek-r1` or `qwq-32b` using your preferred model serving solution, including on-premises\n- **Embeddings**: Self-host 
`jina-embeddings-v2-small-en` for complete control\n\n\u003e **Important**: Your model serving solution must expose an **OpenAI-compatible API**. Solutions like vLLM, LMStudio, Ollama with OpenAI compatibility mode, or Replicate all work great.\n\n→ [Complete Model Configuration Guide](docs/model-configuration.md)\n\n### Data Storage\n\nThe Plan Engine uses the [BadgerDB](https://github.com/hypermodeinc/badger) embedded database to persist all state - operational information is queryable using the [orra CLI](docs/cli.md).\n\n[Book an office hours slot](https://cal.com/orra-dev/office-hours) to get help hosting or running orra's Plan Engine for production.\n\n## Support\n\nNeed help? We're here to support you:\n\n- Report a bug or request a feature by creating an [issue](https://github.com/orra-dev/orra/issues/new?template=bug-report-feature-request.yml)\n- Start a [discussion](https://github.com/orra-dev/orra/discussions) about your ideas or questions\n\n## Telemetry\n\nSee [telemetry.md](./docs/telemetry.md) for details on what is collected and how to opt out.\n\n## License\n\nOrra is MPL-2.0 licensed.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Forra-dev%2Forra","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Forra-dev%2Forra","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Forra-dev%2Forra/lists"}