{"id":29649874,"url":"https://github.com/contextlab/orchestrator","last_synced_at":"2025-07-22T04:35:46.585Z","repository":{"id":304200290,"uuid":"1017449767","full_name":"ContextLab/orchestrator","owner":"ContextLab","description":"A convenient wrapper for LangGraph, MCP, model spec, and other AI agent control systems","archived":false,"fork":false,"pushed_at":"2025-07-18T16:37:05.000Z","size":10437,"stargazers_count":1,"open_issues_count":15,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-07-18T19:47:22.599Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ContextLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-07-10T14:49:35.000Z","updated_at":"2025-07-18T16:37:09.000Z","dependencies_parsed_at":"2025-07-11T17:49:46.587Z","dependency_job_id":"9dfb5716-3d30-46dc-b47f-59f746f442f4","html_url":"https://github.com/ContextLab/orchestrator","commit_stats":null,"previous_names":["contextlab/orchestrator"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/ContextLab/orchestrator","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Forchestrator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Forchestrator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Forchestrator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Forchestrator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ContextLab","download_url":"https://codeload.github.com/ContextLab/orchestrator/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Forchestrator/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266428596,"owners_count":23927028,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-22T02:00:09.085Z","response_time":66,"last_error":null,"robots_txt_status":null,"robots_txt_updated_at":null,"robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-07-22T04:35:43.014Z","updated_at":"2025-07-22T04:35:46.537Z","avatar_url":"https://github.com/ContextLab.png","language":"Python","readme":"# Orchestrator Framework\n\n[![PyPI Version](https://img.shields.io/pypi/v/py-orc)](https://pypi.org/project/py-orc/)\n[![Python Versions](https://img.shields.io/pypi/pyversions/py-orc)](https://pypi.org/project/py-orc/)\n[![Downloads](https://img.shields.io/pypi/dm/py-orc)](https://pypi.org/project/py-orc/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/ContextLab/orchestrator/blob/main/LICENSE)\n[![Tests](https://github.com/ContextLab/orchestrator/actions/workflows/tests.yml/badge.svg)](https://github.com/ContextLab/orchestrator/actions/workflows/tests.yml)\n[![Coverage](https://github.com/ContextLab/orchestrator/actions/workflows/coverage.yml/badge.svg)](https://github.com/ContextLab/orchestrator/actions/workflows/coverage.yml)\n[![Documentation](https://readthedocs.org/projects/orc/badge/?version=latest)](https://orc.readthedocs.io/en/latest/?badge=latest)\n\n## Overview\n\nOrchestrator is a powerful, flexible AI pipeline orchestration framework that simplifies the creation and execution of complex AI workflows. By combining YAML-based configuration with intelligent model selection and automatic ambiguity resolution, Orchestrator makes it easy to build sophisticated AI applications without getting bogged down in implementation details.\n\n### Key Features\n\n- 🎯 **YAML-Based Pipelines**: Define complex workflows in simple, readable YAML with full template variable support\n- 🤖 **Multi-Model Support**: Seamlessly work with OpenAI, Anthropic, Google, Ollama, and HuggingFace models\n- 🧠 **Intelligent Model Selection**: Automatically choose the best model based on task requirements\n- 🔄 **Automatic Ambiguity Resolution**: Use `\u003cAUTO\u003e` tags to let AI resolve configuration ambiguities\n- 📦 **Modular Architecture**: Extend with custom models, tools, and control systems\n- 🛡️ **Production Ready**: Built-in error handling, retries, checkpointing, and monitoring\n- ⚡ **Parallel Execution**: Efficient resource management and parallel task execution\n- 🐳 **Sandboxed Execution**: Secure code execution in isolated environments\n- 💾 **Lazy Model Loading**: Models are downloaded only when needed, saving disk space\n- 🔧 **Reliable Tool Execution**: Guaranteed execution of file operations with LangChain structured outputs\n- 📝 **Advanced Templates**: Support for nested variables, filters, and Jinja2-style templates\n\n## Quick Start\n\n### Installation\n\n```bash\npip install py-orc\n```\n\nFor additional features:\n```bash\npip install py-orc[ollama]      # Ollama model support\npip install py-orc[cloud]        # Cloud model providers\npip install py-orc[dev]          # Development tools\npip install py-orc[all]          # Everything\n```\n\n### API Key Configuration\n\nOrchestrator supports multiple AI providers. Configure your API keys using the interactive setup:\n\n```bash\n# Interactive setup for all providers\norchestrator keys setup\n\n# Or add individual keys\norchestrator keys add openai\norchestrator keys add anthropic\norchestrator keys add google\norchestrator keys add huggingface\n\n# Check configured providers\norchestrator keys list\n\n# Validate your configuration\norchestrator keys validate\n```\n\nAPI keys are stored securely in `~/.orchestrator/.env` with file permissions set to 600 (owner read/write only).\n\n#### Required Environment Variables\n\nIf you prefer to set environment variables manually:\n\n- `OPENAI_API_KEY` - OpenAI API key (for GPT models)\n- `ANTHROPIC_API_KEY` - Anthropic API key (for Claude models)\n- `GOOGLE_AI_API_KEY` - Google AI API key (for Gemini models)\n- `HF_TOKEN` - Hugging Face token (for HuggingFace models)\n\n**Note**: Ollama models run locally and don't require API keys. They will be downloaded automatically on first use.\n\n### Basic Usage\n\n1. **Create a simple pipeline** (`hello_world.yaml`):\n\n```yaml\nid: hello_world\nname: Hello World Pipeline\ndescription: A simple example pipeline\n\nsteps:\n  - id: greet\n    action: generate_text\n    parameters:\n      prompt: \"Say hello to the world in a creative way!\"\n      \n  - id: translate\n    action: generate_text\n    parameters:\n      prompt: \"Translate this greeting to Spanish: {{ greet.result }}\"\n    dependencies: [greet]\n\noutputs:\n  greeting: \"{{ greet.result }}\"\n  spanish: \"{{ translate.result }}\"\n```\n\n2. **Run the pipeline**:\n\n```bash\n# Using the CLI script\npython scripts/run_pipeline.py hello_world.yaml\n\n# With inputs\npython scripts/run_pipeline.py hello_world.yaml -i name=World -i language=Spanish\n\n# From a JSON file\npython scripts/run_pipeline.py hello_world.yaml -f inputs.json -o output_dir/\n\n# Or programmatically\nimport orchestrator as orc\n\n# Initialize models (auto-detects available models)\norc.init_models()\n\n# Compile and run the pipeline\npipeline = orc.compile(\"hello_world.yaml\")\nresult = pipeline.run()\n\nprint(result)\n```\n\n### Using AUTO Tags\n\nOrchestrator's `\u003cAUTO\u003e` tags let AI decide configuration details:\n\n```yaml\nsteps:\n  - id: analyze_data\n    action: analyze\n    parameters:\n      data: \"{{ input_data }}\"\n      method: \u003cAUTO\u003eChoose the best analysis method for this data type\u003c/AUTO\u003e\n      visualization: \u003cAUTO\u003eDecide if we should create a chart\u003c/AUTO\u003e\n```\n\n## Model Configuration\n\nConfigure available models in `models.yaml`:\n\n```yaml\nmodels:\n  # Local models (via Ollama) - downloaded on first use\n  - source: ollama\n    name: llama3.1:8b\n    expertise: [general, reasoning, multilingual]\n    size: 8b\n    \n  - source: ollama\n    name: qwen2.5-coder:7b\n    expertise: [code, programming]\n    size: 7b\n\n  # Cloud models\n  - source: openai\n    name: gpt-4o\n    expertise: [general, reasoning, code, analysis, vision]\n    size: 1760b  # Estimated\n\ndefaults:\n  expertise_preferences:\n    code: qwen2.5-coder:7b\n    reasoning: deepseek-r1:8b\n    fast: llama3.2:1b\n```\n\nModels are downloaded only when first used, saving disk space and initialization time.\n\n## Advanced Example\n\nHere's a more complex example showing model requirements and parallel execution:\n\n```yaml\nid: research_pipeline\nname: AI Research Pipeline\ndescription: Research a topic and create a comprehensive report\n\ninputs:\n  - name: topic\n    type: string\n    description: Research topic\n    \n  - name: depth\n    type: string\n    default: \u003cAUTO\u003eDetermine appropriate research depth\u003c/AUTO\u003e\n\nsteps:\n  # Parallel research from multiple sources\n  - id: web_search\n    action: search_web\n    parameters:\n      query: \"{{ topic }} latest research 2025\"\n      count: \u003cAUTO\u003eDecide how many results to fetch\u003c/AUTO\u003e\n    requires_model:\n      expertise: [research, web]\n      \n  - id: academic_search\n    action: search_academic\n    parameters:\n      query: \"{{ topic }}\"\n      filters: \u003cAUTO\u003eSet appropriate academic filters\u003c/AUTO\u003e\n    requires_model:\n      expertise: [research, academic]\n      \n  # Analyze findings with specialized model\n  - id: analyze_findings\n    action: analyze\n    parameters:\n      web_results: \"{{ web_search.results }}\"\n      academic_results: \"{{ academic_search.results }}\"\n      analysis_focus: \u003cAUTO\u003eDetermine key aspects to analyze\u003c/AUTO\u003e\n    dependencies: [web_search, academic_search]\n    requires_model:\n      expertise: [analysis, reasoning]\n      min_size: 20b  # Require large model for complex analysis\n      \n  # Generate report\n  - id: write_report\n    action: generate_document\n    parameters:\n      topic: \"{{ topic }}\"\n      analysis: \"{{ analyze_findings.result }}\"\n      style: \u003cAUTO\u003eChoose appropriate writing style\u003c/AUTO\u003e\n      length: \u003cAUTO\u003eDetermine optimal report length\u003c/AUTO\u003e\n    dependencies: [analyze_findings]\n    requires_model:\n      expertise: [writing, general]\n\noutputs:\n  report: \"{{ write_report.document }}\"\n  summary: \"{{ analyze_findings.summary }}\"\n```\n\n## Complete Example: Research Report Generator\n\nHere's a fully functional pipeline that generates research reports:\n\n```yaml\n# research_report.yaml\nid: research_report\nname: Research Report Generator\ndescription: Generate comprehensive research reports with citations\n\ninputs:\n  - name: topic\n    type: string\n    description: Research topic\n  - name: instructions\n    type: string\n    description: Additional instructions for the report\n\noutputs:\n  - pdf: \u003cAUTO\u003eGenerate appropriate filename for the research report PDF\u003c/AUTO\u003e\n\nsteps:\n  - id: search\n    name: Web Search\n    action: search_web\n    parameters:\n      query: \u003cAUTO\u003eCreate effective search query for {topic} with {instructions}\u003c/AUTO\u003e\n      max_results: 10\n    requires_model:\n      expertise: fast\n      \n  - id: compile_notes\n    name: Compile Research Notes\n    action: generate_text\n    parameters:\n      prompt: |\n        Compile comprehensive research notes from these search results:\n        {{ search.results }}\n        \n        Topic: {{ topic }}\n        Instructions: {{ instructions }}\n        \n        Create detailed notes with:\n        - Key findings\n        - Important quotes\n        - Source citations\n        - Relevant statistics\n    dependencies: [search]\n    requires_model:\n      expertise: [analysis, reasoning]\n      min_size: 7b\n      \n  - id: write_report\n    name: Write Report\n    action: generate_document\n    parameters:\n      content: |\n        Write a comprehensive research report on \"{{ topic }}\"\n        \n        Research notes:\n        {{ compile_notes.result }}\n        \n        Requirements:\n        - Professional academic style\n        - Include introduction, body sections, and conclusion\n        - Cite sources properly\n        - {{ instructions }}\n      format: markdown\n    dependencies: [compile_notes]\n    requires_model:\n      expertise: [writing, general]\n      min_size: 20b\n      \n  - id: create_pdf\n    name: Create PDF\n    action: convert_to_pdf\n    parameters:\n      markdown: \"{{ write_report.document }}\"\n      filename: \"{{ outputs.pdf }}\"\n    dependencies: [write_report]\n```\n\nRun it with:\n\n```python\nimport orchestrator as orc\n\n# Initialize models\norc.init_models()\n\n# Compile pipeline\npipeline = orc.compile(\"research_report.yaml\")\n\n# Run with inputs\nresult = pipeline.run(\n    topic=\"quantum computing applications in medicine\",\n    instructions=\"Focus on recent breakthroughs and future potential\"\n)\n\nprint(f\"Report saved to: {result}\")\n```\n\n## Documentation\n\nComprehensive documentation is available at [orc.readthedocs.io](https://orc.readthedocs.io/), including:\n\n- [Getting Started Guide](https://orc.readthedocs.io/en/latest/getting_started/quickstart.html)\n- [YAML Configuration Reference](https://orc.readthedocs.io/en/latest/user_guide/yaml_configuration.html)\n- [Model Configuration](https://orc.readthedocs.io/en/latest/user_guide/model_configuration.html)\n- [API Reference](https://orc.readthedocs.io/en/latest/api/core.html)\n- [Examples and Tutorials](https://orc.readthedocs.io/en/latest/tutorials/examples.html)\n\n## Available Models\n\nOrchestrator supports a wide range of models:\n\n### Local Models (via Ollama)\n- **Gemma3 27B**: Google's powerful general-purpose model\n- **Llama 3.x**: General purpose, multilingual support\n- **DeepSeek-R1**: Advanced reasoning and coding\n- **Qwen2.5-Coder**: Specialized for code generation\n- **Mistral**: Fast and efficient general purpose\n\n### Cloud Models\n- **OpenAI**: GPT-4.1 (latest)\n- **Anthropic**: Claude Sonnet 4 (claude-sonnet-4-20250514)\n- **Google**: Gemini 2.5 Flash (gemini-2.5-flash)\n\n### HuggingFace Models\n- **Mistral 7B Instruct v0.3**: High-quality instruction-following model\n- Llama, Qwen, Phi, and many more\n- Automatically downloaded on first use\n\n## Requirements\n\n- Python 3.8+\n- Optional: Ollama for local model execution\n- Optional: API keys for cloud providers (OpenAI, Anthropic, Google)\n\n## Contributing\n\nWe welcome contributions! Please see our [Contributing Guide](https://github.com/ContextLab/orchestrator/blob/main/CONTRIBUTING.md) for details.\n\n## Support\n\n- 📚 [Documentation](https://orc.readthedocs.io/)\n- 🐛 [Issue Tracker](https://github.com/ContextLab/orchestrator/issues)\n- 💬 [Discussions](https://github.com/ContextLab/orchestrator/discussions)\n- 📧 Email: contextualdynamics@gmail.com\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](https://github.com/ContextLab/orchestrator/blob/main/LICENSE) file for details.\n\n## Citation\n\nIf you use Orchestrator in your research, please cite:\n\n```bibtex\n@software{orchestrator2025,\n  title = {Orchestrator: AI Pipeline Orchestration Framework},\n  author = {Manning, Jeremy R. and {Contextual Dynamics Lab}},\n  year = {2025},\n  url = {https://github.com/ContextLab/orchestrator},\n  organization = {Dartmouth College}\n}\n```\n\n## Acknowledgments\n\nOrchestrator is developed and maintained by the [Contextual Dynamics Lab](https://www.context-lab.com/) at Dartmouth College.\n\n---\n\n*Built with ❤️ by the Contextual Dynamics Lab*","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcontextlab%2Forchestrator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcontextlab%2Forchestrator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcontextlab%2Forchestrator/lists"}