{"id":28561110,"url":"https://github.com/codelion/openevolve","last_synced_at":"2025-06-10T10:14:19.259Z","repository":{"id":294141158,"uuid":"984096689","full_name":"codelion/openevolve","owner":"codelion","description":"Open-source implementation of AlphaEvolve","archived":false,"fork":false,"pushed_at":"2025-06-04T05:40:46.000Z","size":1555,"stargazers_count":2152,"open_issues_count":8,"forks_count":230,"subscribers_count":28,"default_branch":"main","last_synced_at":"2025-06-04T10:28:01.213Z","etag":null,"topics":["alpha-evolve","alphacode","alphaevolve","coding-agent","deepmind","deepmind-lab","discovery","distributed-evolutionary-algorithms","evolutionary-algorithms","evolutionary-computation","genetic-algorithm","genetic-algorithms","iterative-methods","iterative-refinement","llm-engineering","llm-ensemble","llm-inference","openevolve","optimize"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/codelion.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-05-15T11:46:52.000Z","updated_at":"2025-06-04T09:56:52.000Z","dependencies_parsed_at":"2025-06-04T04:57:01.182Z","dependency_job_id":"ee3f3e03-60cb-4e94-b2ab-470a904e2554","html_url":"https://github.com/codelion/openevolve","commit_stats":null,"previous_names":["codelion/openevolve"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codelion%2Fopenevolve","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codelion%2Fopenevolve/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codelion%2Fopenevolve/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codelion%2Fopenevolve/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/codelion","download_url":"https://codeload.github.com/codelion/openevolve/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/codelion%2Fopenevolve/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259054152,"owners_count":22798451,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["alpha-evolve","alphacode","alphaevolve","coding-agent","deepmind","deepmind-lab","discovery","distributed-evolutionary-algorithms","evolutionary-algorithms","evolutionary-computation","genetic-algorithm","genetic-algorithms","iterative-methods","iterative-refinement","llm-engineering","llm-ensemble","llm-inference","openevolve","optimize"],"created_at":"2025-06-10T10:14:18.568Z","updated_at":"2025-06-10T10:14:19.236Z","avatar_url":"https://github.c
om/codelion.png","language":"Python","readme":"# OpenEvolve\n\nAn open-source implementation of the AlphaEvolve system described in the Google DeepMind paper \"AlphaEvolve: A coding agent for scientific and algorithmic discovery\" (2025).\n\n![OpenEvolve Logo](openevolve-logo.png)\n\n## Overview\n\nOpenEvolve is an evolutionary coding agent that uses Large Language Models to optimize code through an iterative process. It orchestrates a pipeline of LLM-based code generation, evaluation, and selection to continuously improve programs for a variety of tasks.\n\nKey features:\n- Evolution of entire code files, not just single functions\n- Support for multiple programming languages\n- Supports OpenAI-compatible APIs for any LLM\n- Multi-objective optimization\n- Flexible prompt engineering\n- Distributed evaluation\n\n## How It Works\n\nOpenEvolve follows an evolutionary approach with the following components:\n\n![OpenEvolve Architecture](openevolve-architecture.png)\n\n1. **Prompt Sampler**: Creates context-rich prompts containing past programs, their scores, and problem descriptions\n2. **LLM Ensemble**: Generates code modifications via an ensemble of language models\n3. **Evaluator Pool**: Tests generated programs and assigns scores\n4. **Program Database**: Stores programs and their evaluation metrics, guiding future evolution\n\nThe controller orchestrates interactions between these components in an asynchronous pipeline, maximizing throughput to evaluate as many candidate solutions as possible.\n\n## Getting Started\n\n### Installation\n\nTo install natively, use:\n```bash\ngit clone https://github.com/codelion/openevolve.git\ncd openevolve\npip install -e .\n```\n\n### Quick Start\n\nWe use the OpenAI SDK, so you can use any LLM or provider that supports an OpenAI compatible API. Just set the `OPENAI_API_KEY` environment variable\nand update the `api_base` in config.yaml if you are using a provider other than OpenAI. For local models, you can use\nan inference server like [optillm](https://github.com/codelion/optillm).\n\n```python\nfrom openevolve import OpenEvolve\n\n# Initialize the system\nevolve = OpenEvolve(\n    initial_program_path=\"path/to/initial_program.py\",\n    evaluation_file=\"path/to/evaluator.py\",\n    config_path=\"path/to/config.yaml\"\n)\n\n# Run the evolution\nbest_program = await evolve.run(iterations=1000)\nprint(f\"Best program metrics:\")\nfor name, value in best_program.metrics.items():\n    print(f\"  {name}: {value:.4f}\")\n```\n\n### Command-Line Usage\n\nOpenEvolve can also be run from the command line:\n\n```bash\npython openevolve-run.py path/to/initial_program.py path/to/evaluator.py --config path/to/config.yaml --iterations 1000\n```\n\n### Resuming from Checkpoints\n\nOpenEvolve automatically saves checkpoints at intervals specified by the `checkpoint_interval` config parameter (default is 10 iterations). 
You can resume an evolution run from a saved checkpoint:\n\n```bash\npython openevolve-run.py path/to/initial_program.py path/to/evaluator.py \\\n  --config path/to/config.yaml \\\n  --checkpoint path/to/checkpoint_directory \\\n  --iterations 50\n```\n\nWhen resuming from a checkpoint:\n- The system loads all previously evolved programs and their metrics\n- Checkpoint numbering continues from where it left off (e.g., if loaded from checkpoint_50, the next checkpoint will be checkpoint_60)\n- All evolution state is preserved (best programs, feature maps, archives, etc.)\n- Each checkpoint directory contains a copy of the best program at that point in time\n\nExample workflow with checkpoints:\n\n```bash\n# Run for 50 iterations (creates checkpoints at iterations 10, 20, 30, 40, 50)\npython openevolve-run.py examples/function_minimization/initial_program.py \\\n  examples/function_minimization/evaluator.py \\\n  --iterations 50\n\n# Resume from checkpoint 50 for another 50 iterations (creates checkpoints at 60, 70, 80, 90, 100)\npython openevolve-run.py examples/function_minimization/initial_program.py \\\n  examples/function_minimization/evaluator.py \\\n  --checkpoint examples/function_minimization/openevolve_output/checkpoints/checkpoint_50 \\\n  --iterations 50\n```\n\n### Comparing Results Across Checkpoints\n\nEach checkpoint directory contains the best program found up to that point, making it easy to compare solutions over time:\n\n```\ncheckpoints/\n  checkpoint_10/\n    best_program.py         # Best program at iteration 10\n    best_program_info.json  # Metrics and details\n    programs/               # All programs evaluated so far\n    metadata.json           # Database state\n  checkpoint_20/\n    best_program.py         # Best program at iteration 20\n    ...\n```\n\nYou can compare the evolution of solutions by examining the best programs at different checkpoints:\n\n```bash\n# Compare best programs at different checkpoints\ndiff -u checkpoints/checkpoint_10/best_program.py checkpoints/checkpoint_20/best_program.py\n\n# Compare metrics\ncat checkpoints/checkpoint_*/best_program_info.json | grep -A 10 metrics\n```\n\n### Visualizing the evolution tree\n\nThe script `scripts/visualizer.py` allows you to visualize the evolution tree and display it in your web browser. The script watches live for the newest checkpoint directory in the examples/ folder structure and updates the graph. Alternatively, you can provide a specific checkpoint folder with the `--path` parameter.\n\n
```bash\n# Install requirements\npip install -r scripts/requirements.txt\n\n# Start the visualization web server and have it watch the examples/ folder\npython scripts/visualizer.py\n\n# Start the visualization web server with a specific checkpoint\npython scripts/visualizer.py --path examples/function_minimization/openevolve_output/checkpoints/checkpoint_100/\n```\n\nIn the visualization UI, you can\n- see the branching of your program evolution in a network visualization, with the node radius determined by the program fitness (i.e., the currently selected metric),\n- see the parent-child relationship of nodes and click through them in the sidebar (use the yellow locator icon in the sidebar to center the node in the graph),\n- select the metric of interest (with the available metric choices depending on your data set),\n- highlight nodes, for example the top score (for the chosen metric) or the MAP-elites members,\n- click nodes to see their code and prompts (if available from the checkpoint data) in a sidebar,\n- in the \"Performance\" tab, see each node's selected metric score vs. generation in a graph.\n\n![OpenEvolve Visualizer](openevolve-visualizer.png)\n\n### Docker\n\nYou can also install and execute via Docker:\n```bash\ndocker build -t openevolve .\ndocker run --rm -v $(pwd):/app --network=\"host\" openevolve examples/function_minimization/initial_program.py examples/function_minimization/evaluator.py --config examples/function_minimization/config.yaml --iterations 1000\n```\n\n## Configuration\n\nOpenEvolve is highly configurable. You can specify configuration options in a YAML file:\n\n```yaml\n# Example configuration\nmax_iterations: 1000\nllm:\n  primary_model: \"gemini-2.0-flash-lite\"\n  secondary_model: \"gemini-2.0-flash\"\n  temperature: 0.7\ndatabase:\n  population_size: 500\n  num_islands: 5\n```\n\nSample configuration files are available in the `configs/` directory:\n- `default_config.yaml`: Comprehensive configuration with all available options\n\nSee the [Configuration Guide](configs/default_config.yaml) for a full list of options.\n\n## Artifacts Channel\n\nOpenEvolve includes an **artifacts side-channel** that allows evaluators to capture build errors, profiling results, and other diagnostics to provide better feedback to the LLM in subsequent generations. This feature enhances the evolution process by giving the LLM context about what went wrong and how to fix it.\n\nThe artifacts channel operates alongside the traditional fitness metrics.\n\n
### Example: Compilation Failure Feedback\n\n```python\nfrom openevolve.evaluation_result import EvaluationResult\n\nreturn EvaluationResult(\n    metrics={\"compile_ok\": 0.0, \"score\": 0.0},\n    artifacts={\n        \"stderr\": \"SyntaxError: invalid syntax (line 15)\",\n        \"traceback\": \"...\",\n        \"failure_stage\": \"compilation\"\n    }\n)\n```\n\nThe next generation prompt will include:\n````\n## Last Execution Output\n### Stderr\n```\nSyntaxError: invalid syntax (line 15)\n```\n### Traceback\n```\n...\n```\n````\n\n### Configuration\n\nArtifacts can be controlled via configuration and environment variables:\n\n```yaml\n# config.yaml\nevaluator:\n  enable_artifacts: true\n\nprompt:\n  include_artifacts: true\n  max_artifact_bytes: 4096  # 4KB limit in prompts\n  artifact_security_filter: true\n```\n\n```bash\n# Environment variable to disable artifacts\nexport ENABLE_ARTIFACTS=false\n```\n\n### Benefits\n\n- **Faster convergence** - LLMs can see what went wrong and fix it directly\n- **Better error handling** - Compilation and runtime failures become learning opportunities\n- **Rich debugging context** - Full stack traces and error messages guide improvements\n- **Zero overhead** - When disabled, no performance impact on evaluation\n\n## Examples\n\nSee the `examples/` directory for complete examples of using OpenEvolve on various problems:\n\n### Symbolic Regression\n\nA comprehensive example demonstrating OpenEvolve's application to symbolic regression tasks using the LLM-SRBench benchmark. This example shows how OpenEvolve can evolve simple mathematical expressions (like linear models) into complex symbolic formulas that accurately fit scientific datasets.\n\n[Explore the Symbolic Regression Example](examples/symbolic_regression/)\n\nKey features:\n- Automatic generation of initial programs from benchmark tasks\n- Evolution from simple linear models to complex mathematical expressions\n- Evaluation on physics, chemistry, biology, and material science datasets\n- Competitive results compared to state-of-the-art symbolic regression methods\n\n### Circle Packing\n\nOur implementation of the circle packing problem from the AlphaEvolve paper. For the n=26 case, where one needs to pack 26 circles in a unit square, we also obtain SOTA results.\n\n[Explore the Circle Packing Example](examples/circle_packing/)\n\nWe have successfully replicated the results from the AlphaEvolve paper; below is the packing found by OpenEvolve after 800 iterations:\n\n![alpha-evolve-replication](https://github.com/user-attachments/assets/00100f9e-2ac3-445b-9266-0398b7174193)\n\nThis is exactly the packing reported by AlphaEvolve in their paper (Figure 14):\n\n![alpha-evolve-results](https://github.com/user-attachments/assets/0c9affa5-053d-404e-bb2d-11479ab248c9)\n\n### Function Minimization\n\nAn example showing how OpenEvolve can transform a simple random search algorithm into a sophisticated simulated annealing approach.\n\n[Explore the Function Minimization Example](examples/function_minimization/)\n\n## Preparing Your Own Problems\n\nTo use OpenEvolve for your own problems:\n\n1. **Mark code sections** to evolve with `# EVOLVE-BLOCK-START` and `# EVOLVE-BLOCK-END` comments\n2. **Create an evaluation function** that returns a dictionary of metrics (a minimal sketch follows this list)\n3. **Configure OpenEvolve** with appropriate parameters\n4. **Run the evolution** process\n\n
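As a rough illustration (the file layout, the `evaluate` entry point, and the toy scoring logic shown here are assumptions for this sketch; the `examples/` directory is the canonical reference), an initial program and its evaluator might look like this:\n\n```python\n# initial_program.py -- only the marked block is rewritten during evolution\n# EVOLVE-BLOCK-START\ndef solve(x):\n    # Naive starting point for the LLM to improve\n    return x * x\n# EVOLVE-BLOCK-END\n\n\n# evaluator.py -- hypothetical entry point returning a dictionary of metrics\ndef evaluate(program_path):\n    \"\"\"Load the candidate program and score it.\"\"\"\n    try:\n        namespace = {}\n        with open(program_path) as f:\n            exec(f.read(), namespace)  # run the evolved code\n        score = 1.0 if namespace[\"solve\"](2) == 4 else 0.5\n    except Exception:\n        score = 0.0\n    return {\"score\": score}\n```\n\nReturning several entries in the metrics dictionary is what feeds the multi-objective optimization mentioned above.\n\n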
## Citation\n\nIf you use OpenEvolve in your research, please cite:\n\n```\n@software{openevolve,\n  title = {OpenEvolve: Open-source implementation of AlphaEvolve},\n  author = {Asankhaya Sharma},\n  year = {2025},\n  publisher = {GitHub},\n  url = {https://github.com/codelion/openevolve}\n}\n```\n","funding_links":[],"categories":["Python","Other_Machine Learning \u0026 Deep Learning","Development Workflows \u0026 Agents","Orchestration","Frameworks \u0026 Platforms"],"sub_categories":["Other IDEs","Evolutionary Framework"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcodelion%2Fopenevolve","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcodelion%2Fopenevolve","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcodelion%2Fopenevolve/lists"}