https://github.com/codelion/openevolve
Open-source implementation of AlphaEvolve
- Host: GitHub
- URL: https://github.com/codelion/openevolve
- Owner: codelion
- License: apache-2.0
- Created: 2025-05-15T11:46:52.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-06-04T05:40:46.000Z (4 months ago)
- Last Synced: 2025-06-04T10:28:01.213Z (4 months ago)
- Topics: alpha-evolve, alphacode, alphaevolve, coding-agent, deepmind, deepmind-lab, discovery, distributed-evolutionary-algorithms, evolutionary-algorithms, evolutionary-computation, genetic-algorithm, genetic-algorithms, iterative-methods, iterative-refinement, llm-engineering, llm-ensemble, llm-inference, openevolve, optimize
- Language: Python
- Homepage:
- Size: 1.48 MB
- Stars: 2,152
- Watchers: 28
- Forks: 230
- Open Issues: 8
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome-AI-driven-development - OpenEvolve - Turn your LLMs into autonomous code optimizers that discover breakthrough algorithms (Development Workflows & Agents / Other IDEs)
README
# OpenEvolve
An open-source implementation of the AlphaEvolve system described in the Google DeepMind paper "AlphaEvolve: A coding agent for scientific and algorithmic discovery" (2025).

## Overview
OpenEvolve is an evolutionary coding agent that uses Large Language Models to optimize code through an iterative process. It orchestrates a pipeline of LLM-based code generation, evaluation, and selection to continuously improve programs for a variety of tasks.
Key features:
- Evolution of entire code files, not just single functions
- Support for multiple programming languages
- Supports OpenAI-compatible APIs for any LLM
- Multi-objective optimization
- Flexible prompt engineering
- Distributed evaluation

## How It Works
OpenEvolve follows an evolutionary approach with the following components:

1. **Prompt Sampler**: Creates context-rich prompts containing past programs, their scores, and problem descriptions
2. **LLM Ensemble**: Generates code modifications via an ensemble of language models
3. **Evaluator Pool**: Tests generated programs and assigns scores
4. **Program Database**: Stores programs and their evaluation metrics, guiding future evolution

The controller orchestrates interactions between these components in an asynchronous pipeline, maximizing throughput to evaluate as many candidate solutions as possible.
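As a rough illustration (not OpenEvolve's actual API; the component names and method signatures here are hypothetical), one iteration of such an asynchronous pipeline might look like this:

```python
import asyncio

# Hypothetical sketch of the loop described above; the real OpenEvolve
# controller and component interfaces differ.
async def evolve_once(database, prompt_sampler, llm_ensemble, evaluator_pool):
    parent = database.sample()                            # choose an inspiration program
    prompt = prompt_sampler.build(parent, database)       # 1. Prompt Sampler
    child_code = await llm_ensemble.generate(prompt)      # 2. LLM Ensemble
    metrics = await evaluator_pool.evaluate(child_code)   # 3. Evaluator Pool
    database.add(child_code, metrics)                     # 4. Program Database


async def run_pipeline(components, iterations=1000, concurrency=8):
    # Keep several candidates in flight at once to maximize evaluation throughput.
    sem = asyncio.Semaphore(concurrency)

    async def one_step():
        async with sem:
            await evolve_once(*components)

    await asyncio.gather(*(one_step() for _ in range(iterations)))
```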
## Getting Started
### Installation
To install natively, use:
```bash
git clone https://github.com/codelion/openevolve.git
cd openevolve
pip install -e .
```

### Quick Start
We use the OpenAI SDK, so you can use any LLM or provider that supports an OpenAI-compatible API. Just set the `OPENAI_API_KEY` environment variable
and update `api_base` in `config.yaml` if you are using a provider other than OpenAI. For local models, you can use
an inference server like [optillm](https://github.com/codelion/optillm).

```python
from openevolve import OpenEvolve

# Initialize the system
evolve = OpenEvolve(
    initial_program_path="path/to/initial_program.py",
    evaluation_file="path/to/evaluator.py",
    config_path="path/to/config.yaml",
)

# Run the evolution
best_program = await evolve.run(iterations=1000)
print("Best program metrics:")
for name, value in best_program.metrics.items():
    print(f"  {name}: {value:.4f}")
```
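Because `evolve.run` is awaited in the snippet above, it has to execute inside an event loop. A minimal way to run it as a standalone script (reusing the same placeholder paths) is:

```python
import asyncio

from openevolve import OpenEvolve


async def main():
    evolve = OpenEvolve(
        initial_program_path="path/to/initial_program.py",
        evaluation_file="path/to/evaluator.py",
        config_path="path/to/config.yaml",
    )
    best_program = await evolve.run(iterations=1000)
    print("Best program metrics:")
    for name, value in best_program.metrics.items():
        print(f"  {name}: {value:.4f}")


if __name__ == "__main__":
    asyncio.run(main())
```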
### Command-Line Usage
OpenEvolve can also be run from the command line:
```bash
python openevolve-run.py path/to/initial_program.py path/to/evaluator.py --config path/to/config.yaml --iterations 1000
```

### Resuming from Checkpoints
OpenEvolve automatically saves checkpoints at intervals specified by the `checkpoint_interval` config parameter (default is 10 iterations). You can resume an evolution run from a saved checkpoint:
```bash
python openevolve-run.py path/to/initial_program.py path/to/evaluator.py \
  --config path/to/config.yaml \
  --checkpoint path/to/checkpoint_directory \
  --iterations 50
```

When resuming from a checkpoint:
- The system loads all previously evolved programs and their metrics
- Checkpoint numbering continues from where it left off (e.g., if loaded from checkpoint_50, the next checkpoint will be checkpoint_60)
- All evolution state is preserved (best programs, feature maps, archives, etc.)
- Each checkpoint directory contains a copy of the best program at that point in time

Example workflow with checkpoints:
```bash
# Run for 50 iterations (creates checkpoints at iterations 10, 20, 30, 40, 50)
python openevolve-run.py examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --iterations 50

# Resume from checkpoint 50 for another 50 iterations (creates checkpoints at 60, 70, 80, 90, 100)
python openevolve-run.py examples/function_minimization/initial_program.py \
  examples/function_minimization/evaluator.py \
  --checkpoint examples/function_minimization/openevolve_output/checkpoints/checkpoint_50 \
  --iterations 50
```

### Comparing Results Across Checkpoints
Each checkpoint directory contains the best program found up to that point, making it easy to compare solutions over time:
```
checkpoints/
  checkpoint_10/
    best_program.py         # Best program at iteration 10
    best_program_info.json  # Metrics and details
    programs/               # All programs evaluated so far
    metadata.json           # Database state
  checkpoint_20/
    best_program.py         # Best program at iteration 20
    ...
```

You can compare the evolution of solutions by examining the best programs at different checkpoints:
```bash
# Compare best programs at different checkpoints
diff -u checkpoints/checkpoint_10/best_program.py checkpoints/checkpoint_20/best_program.py

# Compare metrics
cat checkpoints/checkpoint_*/best_program_info.json | grep -A 10 metrics
```
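If you would rather inspect metrics programmatically than with `grep`, a small helper along these lines works; it assumes each `best_program_info.json` stores its scores under a `metrics` key, as the layout above suggests:

```python
import json
from pathlib import Path

# Print the recorded metrics of the best program in every checkpoint,
# in checkpoint order (assumes a "metrics" mapping inside the JSON file).
root = Path("examples/function_minimization/openevolve_output/checkpoints")

for info_path in sorted(
    root.glob("checkpoint_*/best_program_info.json"),
    key=lambda p: int(p.parent.name.split("_")[-1]),
):
    metrics = json.loads(info_path.read_text()).get("metrics", {})
    summary = ", ".join(
        f"{name}={value:.4f}"
        for name, value in metrics.items()
        if isinstance(value, (int, float))
    )
    print(f"{info_path.parent.name}: {summary}")
```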
### Visualizing the evolution tree
The script in `scripts/visualizer.py` visualizes the evolution tree and displays it in your web browser. By default it watches for the newest checkpoint directory under the `examples/` folder structure and updates the graph live. Alternatively, you can point it at a specific checkpoint folder with the `--path` parameter.
```bash
# Install requirements
pip install -r scripts/requirements.txt

# Start the visualization web server and have it watch the examples/ folder
python scripts/visualizer.py

# Start the visualization web server with a specific checkpoint
python scripts/visualizer.py --path examples/function_minimization/openevolve_output/checkpoints/checkpoint_100/
```

In the visualization UI, you can:
- see the branching of your program evolution in a network visualization, with node radius determined by program fitness (the currently selected metric),
- see the parent-child relationships of nodes and click through them in the sidebar (use the yellow locator icon in the sidebar to center the node in the graph),
- select the metric of interest (the available choices depend on your data set),
- highlight nodes, for example the top-scoring node (for the chosen metric) or the MAP-Elites members,
- click nodes to see their code and prompts (if available from the checkpoint data) in a sidebar,
- in the "Performance" tab, see the selected metric score versus generation in a graph.
### Docker
You can also install and execute via Docker:
```bash
docker build -t openevolve .
docker run --rm -v $(pwd):/app --network="host" openevolve examples/function_minimization/initial_program.py examples/function_minimization/evaluator.py --config examples/function_minimization/config.yaml --iterations 1000
```

## Configuration
OpenEvolve is highly configurable. You can specify configuration options in a YAML file:
```yaml
# Example configuration
max_iterations: 1000
llm:
  primary_model: "gemini-2.0-flash-lite"
  secondary_model: "gemini-2.0-flash"
  temperature: 0.7
database:
  population_size: 500
  num_islands: 5
```

Sample configuration files are available in the `configs/` directory:
- `default_config.yaml`: Comprehensive configuration with all available options

See the [Configuration Guide](configs/default_config.yaml) for a full list of options.
## Artifacts Channel
OpenEvolve includes an **artifacts side-channel** that allows evaluators to capture build errors, profiling results, and other diagnostic output to provide better feedback to the LLM in subsequent generations. This feature enhances the evolution process by giving the LLM context about what went wrong and how to fix it.
The artifacts channel operates alongside the traditional fitness metrics.
### Example: Compilation Failure Feedback
```python
from openevolve.evaluation_result import EvaluationResult

return EvaluationResult(
    metrics={"compile_ok": 0.0, "score": 0.0},
    artifacts={
        "stderr": "SyntaxError: invalid syntax (line 15)",
        "traceback": "...",
        "failure_stage": "compilation",
    },
)
```

The next generation prompt will include:

````
## Last Execution Output
### Stderr
```
SyntaxError: invalid syntax (line 15)
```
### Traceback
```
...
```
````

### Configuration
Artifacts can be controlled via configuration and environment variables:
```yaml
# config.yaml
evaluator:
  enable_artifacts: true
prompt:
  include_artifacts: true
  max_artifact_bytes: 4096  # 4KB limit in prompts
  artifact_security_filter: true
```

```bash
# Environment variable to disable artifacts
export ENABLE_ARTIFACTS=false
```

### Benefits
- **Faster convergence** - LLMs can see what went wrong and fix it directly
- **Better error handling** - Compilation and runtime failures become learning opportunities
- **Rich debugging context** - Full stack traces and error messages guide improvements
- **Zero overhead** - When disabled, no performance impact on evaluation

## Examples
See the `examples/` directory for complete examples of using OpenEvolve on various problems:
### Symbolic Regression
A comprehensive example demonstrating OpenEvolve's application to symbolic regression tasks using the LLM-SRBench benchmark. This example shows how OpenEvolve can evolve simple mathematical expressions (like linear models) into complex symbolic formulas that accurately fit scientific datasets.
[Explore the Symbolic Regression Example](examples/symbolic_regression/)
Key features:
- Automatic generation of initial programs from benchmark tasks
- Evolution from simple linear models to complex mathematical expressions
- Evaluation on physics, chemistry, biology, and material science datasets
- Competitive results compared to state-of-the-art symbolic regression methods

### Circle Packing
Our implementation of the circle packing problem from the AlphaEvolve paper. For the n=26 case, where one needs to pack 26 circles in a unit square, we also obtain state-of-the-art results.
[Explore the Circle Packing Example](examples/circle_packing/)
We have successfully replicated the results from the AlphaEvolve paper; below is the packing found by OpenEvolve after 800 iterations:

This is exactly the packing reported by AlphaEvolve in their paper (Figure 14):

### Function Minimization
An example showing how OpenEvolve can transform a simple random search algorithm into a sophisticated simulated annealing approach.
[Explore the Function Minimization Example](examples/function_minimization/)
## Preparing Your Own Problems
To use OpenEvolve for your own problems:
1. **Mark code sections** to evolve with `# EVOLVE-BLOCK-START` and `# EVOLVE-BLOCK-END` comments
2. **Create an evaluation function** that returns a dictionary of metrics (see the sketch after this list)
3. **Configure OpenEvolve** with appropriate parameters
4. **Run the evolution** process
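As a rough sketch of steps 1 and 2 (the evaluator entry point `evaluate(program_path)` is an assumption modeled on the bundled examples; check the `examples/` directory for the exact interface your version expects):

```python
# initial_program.py -- the seed program; only the marked region is evolved
# EVOLVE-BLOCK-START
def solve(x: float) -> float:
    # Naive starting point for the LLM to improve
    return x * x
# EVOLVE-BLOCK-END
```

```python
# evaluator.py -- scores a candidate program and returns a dictionary of metrics
# NOTE: the function name `evaluate` and the `program_path` argument are
# assumptions based on the bundled examples.
def evaluate(program_path: str) -> dict:
    namespace: dict = {}
    with open(program_path) as f:
        exec(f.read(), namespace)  # load the evolved candidate
    error = abs(namespace["solve"](3.0) - 9.0)  # toy target: solve(3) == 9
    return {"accuracy": 1.0 / (1.0 + error)}
```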
## Citation
If you use OpenEvolve in your research, please cite:
```
@software{openevolve,
  title = {OpenEvolve: Open-source implementation of AlphaEvolve},
  author = {Asankhaya Sharma},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/codelion/openevolve}
}
```