https://github.com/simonblanke/gradient-free-optimizers
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
- Host: GitHub
- URL: https://github.com/simonblanke/gradient-free-optimizers
- Owner: SimonBlanke
- License: mit
- Created: 2020-04-04T06:35:55.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2025-03-31T16:47:29.000Z (26 days ago)
- Last Synced: 2025-04-03T11:45:19.391Z (23 days ago)
- Topics: bayesian-optimization, blackbox-optimization, constrained-optimization, evolution-strategies, gradient-free-optimization, hill-climbing, hyperactive, hyperparameter-optimization, machine-learning, meta-heuristic, nelder-mead, optimization, particle-swarm-optimization, random-search, simulated-annealing, tree-of-parzen-estimator
- Language: Python
- Homepage: https://simonblanke.github.io/gradient-free-optimizers-documentation
- Size: 38.3 MB
- Stars: 1,222
- Watchers: 21
- Forks: 87
- Open Issues: 15
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
---
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
## Introduction
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only requires an arbitrary score that gets maximized.
This makes gradient-free methods capable of solving various optimization problems, including:
- Optimizing arbitrary mathematical functions.
- Fitting multiple Gaussian distributions to data.
- Hyperparameter optimization of machine-learning methods.

Gradient-Free-Optimizers is the optimization backend of Hyperactive (in v3.0.0 and higher), but it can also be used on its own as a leaner and simpler optimization toolkit.
---
## Main features
- Easy to use:
Simple API-design
You can optimize anything that can be defined in a Python function. For example, a simple parabola function:
```python
def objective_function(para):
    score = para["x1"] * para["x1"]
    return -score
```

Define where to search via numpy ranges:
```python
search_space = {
    "x1": np.arange(0, 5, 0.1),
}
```

That's all the information the algorithm needs to search for the maximum of the objective function:
```python
from gradient_free_optimizers import RandomSearchOptimizer

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=100000)
```
Receive prepared information about ongoing and finished optimization runs
During the optimization you will receive ongoing information in a progress bar:
- current best score
- the position in the search space of the current best score
- the iteration when the current best score was found
- other information about the progress native to tqdm
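Once a run has finished, the collected results can be read directly from the optimizer object. A minimal sketch, assuming the attribute names `best_para`, `best_score` and `search_data` (the latter also appears in the constrained-optimization example further down):

```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def objective_function(para):
    return -para["x1"] * para["x1"]

search_space = {"x1": np.arange(0, 5, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=1000)

# result attributes (names assumed, see note above)
print(opt.best_para)    # parameters of the best position found
print(opt.best_score)   # score of the best position found
print(opt.search_data)  # table of all evaluated positions and their scores
```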
- High performance:
Modern optimization techniques
Gradient-Free-Optimizers provides not just meta-heuristic optimization methods but also sequential model-based optimizers like Bayesian optimization, which deliver good results for expensive objective functions like deep-learning models.
Lightweight backend
Even for the very simple parabola function, the optimization time is about 60% of the entire iteration time when optimizing with random search. This shows that, despite all its features, Gradient-Free-Optimizers has an efficient optimization backend without any unnecessary slowdown.
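A rough way to reproduce such a comparison is to time a full random-search run against bare objective-function calls. This is only a sketch using the API shown above; exact ratios depend on hardware and progress-bar overhead:

```python
import time

import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def parabola_function(para):
    return -para["x"] * para["x"]

search_space = {"x": np.arange(-10, 10, 0.1)}
n_iter = 100000

# time a full random-search run (optimizer backend + objective evaluations)
opt = RandomSearchOptimizer(search_space)
t0 = time.perf_counter()
opt.search(parabola_function, n_iter=n_iter)
t_search = time.perf_counter() - t0

# time the bare objective-function calls for comparison
t0 = time.perf_counter()
for _ in range(n_iter):
    parabola_function({"x": 1.0})
t_objective = time.perf_counter() - t0

print(f"full search: {t_search:.2f}s  objective only: {t_objective:.2f}s")
```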
Save time with memory dictionary
By default, Gradient-Free-Optimizers looks up the current position in a memory dictionary before evaluating the objective function.
- If the position is not in the dictionary, the objective function is evaluated and the position and score are saved in the dictionary.
- If a position is already saved in the dictionary, Gradient-Free-Optimizers just extracts the score from it instead of evaluating the objective function. This avoids re-evaluating computationally expensive objective functions (machine- or deep-learning models) and therefore saves time.
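A minimal sketch of how the memory could be reused across runs. The `memory` and `memory_warm_start` parameters of `search()` are assumptions based on the library's documented features; check the official documentation for the exact signature:

```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer

def expensive_objective(para):
    # stand-in for a costly model evaluation
    return -(para["x"] ** 2)

search_space = {"x": np.arange(-10, 10, 0.1)}

# first run: the memory dictionary is enabled by default
opt = RandomSearchOptimizer(search_space)
opt.search(expensive_objective, n_iter=1000, memory=True)  # 'memory' parameter assumed

# second run: warm-start the memory with the previously collected search data
opt2 = RandomSearchOptimizer(search_space)
opt2.search(
    expensive_objective,
    n_iter=1000,
    memory_warm_start=opt.search_data,  # 'memory_warm_start' parameter assumed
)
```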
- High reliability:
Extensive testing
Gradient-Free-Optimizers is extensively tested with more than 400 tests in 2500 lines of test code. This includes the testing of:
- Each optimization algorithm
- Each optimization parameter
- All attributes that are part of the public API
Performance test for each optimizer
Each optimization algorithm must perform above a certain threshold to be included. Poorly performing algorithms are reworked or scrapped.
## Optimization algorithms:
Gradient-Free-Optimizers supports a variety of optimization algorithms, which can make choosing the right one a tedious endeavor. The short descriptions below summarize how each algorithm explores the search space and exploits the information collected about it. More detailed explanations of all optimization algorithms can be found in the [official documentation](https://simonblanke.github.io/gradient-free-optimizers-documentation).
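All optimizer classes follow the same basic interface as the examples above (the constructor takes the search space, `search()` takes the objective function and `n_iter`), so switching algorithms is typically a one-line change. A minimal sketch; class names other than `RandomSearchOptimizer` and `HillClimbingOptimizer` are assumptions based on the algorithm list below:

```python
import numpy as np
from gradient_free_optimizers import (
    RandomSearchOptimizer,
    HillClimbingOptimizer,
    SimulatedAnnealingOptimizer,  # assumed class name
    ParticleSwarmOptimizer,       # assumed class name
    BayesianOptimizer,            # assumed class name
)

def objective_function(para):
    return -(para["x1"] * para["x1"])

search_space = {"x1": np.arange(-5, 5, 0.1)}

# the same objective function and search space work with every optimizer class
for Optimizer in (
    RandomSearchOptimizer,
    HillClimbingOptimizer,
    SimulatedAnnealingOptimizer,
    ParticleSwarmOptimizer,
    BayesianOptimizer,
):
    opt = Optimizer(search_space)
    opt.search(objective_function, n_iter=100)
    print(Optimizer.__name__, opt.best_score)  # 'best_score' attribute assumed
```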
### Local Optimization
Hill Climbing
Evaluates the score of n neighbours in an epsilon environment and moves to the best one.
Stochastic Hill Climbing
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima.
Repulsing Hill Climbing
Hill climbing algorithm with the addition of increasing epsilon by a factor if no better neighbour was found.
Simulated Annealing
Adds a probability to the hill climbing to move to a worse position in the search-space to escape local optima with decreasing probability over time.
Downhill Simplex Optimization
Constructs a simplex from multiple positions that moves through the search-space by reflecting, expanding, contracting or shrinking.
### Global Optimization
Random Search
Moves to random positions in each iteration.
Grid Search
Grid search that moves diagonally through the search space (with step-size = 1), starting from a corner.
Random Restart Hill Climbing
Hill climbing that moves to a random position after n iterations.
Random Annealing
Hill climbing with a large epsilon at the start of the search that decreases over time.
Pattern Search
Creates a cross-shaped collection of positions that moves through the search space as a whole towards optima or shrinks the cross.
Powell's Method
Optimizes one search-space dimension at a time with a hill-climbing algorithm.
### Population-Based Optimization
Parallel Tempering
Population of n simulated annealers, which occasionally swap transition probabilities.
Particle Swarm Optimization
Population of n particles attracting each other and moving towards the best particle.
Spiral Optimization
Population of n particles moving in a spiral pattern around the best position.
Genetic Algorithm
Evolutionary algorithm selecting the best individuals in the population, mixing their parameters to get new solutions.
Evolution Strategy
Population of n hill climbers occasionally mixing positional information and removing worst positions from population.
Differential Evolution
Improves a population of candidate solutions by creating trial vectors through the differential mutation of three randomly selected individuals.
### Sequential Model-Based Optimization
Bayesian Optimization
Gaussian process fitting to explored positions and predicting promising new positions.
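Because a surrogate model picks the next position, Bayesian optimization is usually run with far fewer iterations than random search, which pays off when every evaluation is expensive. A minimal sketch; the class name `BayesianOptimizer` is an assumption, and `time.sleep` stands in for an expensive model training:

```python
import time

import numpy as np
from gradient_free_optimizers import BayesianOptimizer  # assumed class name

def expensive_objective(para):
    time.sleep(1)  # stand-in for an expensive model training or simulation
    return -(para["x1"] * para["x1"])

search_space = {"x1": np.arange(-5, 5, 0.1)}

opt = BayesianOptimizer(search_space)
opt.search(expensive_objective, n_iter=30)  # few, carefully chosen evaluations
```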
Lipschitz Optimization
Calculates an upper bound from the distances of the previously explored positions to find new promising positions.
DIRECT algorithm
Separates the search space into subspaces. It evaluates the center position of each subspace to decide which subspace to separate further.
Tree of Parzen Estimators
Kernel density estimators fitting to good and bad explored positions and predicting promising new positions.
Forest Optimizer
Ensemble of decision trees fitting to explored positions and predicting promising new positions.
## Sideprojects and Tools
The following packages are designed to support Gradient-Free-Optimizers and expand its use cases.
| Package | Description |
|-------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| [Search-Data-Collector](https://github.com/SimonBlanke/search-data-collector) | Simple tool to save search-data during or after the optimization run into csv-files. |
| [Search-Data-Explorer](https://github.com/SimonBlanke/search-data-explorer) | Visualize search-data with plotly inside a streamlit dashboard. |

If you want news about Gradient-Free-Optimizers and related projects, you can follow me on [twitter](https://twitter.com/blanke_simon).
## Installation
The most recent version of Gradient-Free-Optimizers is available on PyPI:
```console
pip install gradient-free-optimizers
```
## Examples
Convex function
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def parabola_function(para):
    loss = para["x"] * para["x"]
    return -loss


search_space = {"x": np.arange(-10, 10, 0.1)}

opt = RandomSearchOptimizer(search_space)
opt.search(parabola_function, n_iter=100000)
```

Non-convex function
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def ackley_function(pos_new):
    x = pos_new["x1"]
    y = pos_new["x2"]

    a1 = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x * x + y * y)))
    a2 = -np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
    score = a1 + a2 + 20
    return -score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}

opt = RandomSearchOptimizer(search_space)
opt.search(ackley_function, n_iter=30000)
```

Machine learning example
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import load_wine

from gradient_free_optimizers import HillClimbingOptimizer


data = load_wine()
X, y = data.data, data.target


def model(para):
    gbc = GradientBoostingClassifier(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
        min_samples_leaf=para["min_samples_leaf"],
    )
    scores = cross_val_score(gbc, X, y, cv=3)

    return scores.mean()


search_space = {
    "n_estimators": np.arange(20, 120, 1),
    "max_depth": np.arange(2, 12, 1),
    "min_samples_split": np.arange(2, 12, 1),
    "min_samples_leaf": np.arange(1, 12, 1),
}

opt = HillClimbingOptimizer(search_space)
opt.search(model, n_iter=50)
```

Constrained Optimization example
```python
import numpy as np
from gradient_free_optimizers import RandomSearchOptimizer


def convex_function(pos_new):
    score = -(pos_new["x1"] * pos_new["x1"] + pos_new["x2"] * pos_new["x2"])
    return score


search_space = {
    "x1": np.arange(-100, 101, 0.1),
    "x2": np.arange(-100, 101, 0.1),
}


def constraint_1(para):
    # only values in 'x1' higher than -5 are valid
    return para["x1"] > -5


# put one or more constraints inside a list
constraints_list = [constraint_1]

# pass the list of constraints to the optimizer
opt = RandomSearchOptimizer(search_space, constraints=constraints_list)
opt.search(convex_function, n_iter=50)

search_data = opt.search_data

# the search-data does not contain any samples where x1 is equal to or below -5
print("\n search_data \n", search_data, "\n")
```
## Gradient Free Optimizers <=> Hyperactive
Gradient-Free-Optimizers was created as the optimization backend of the [Hyperactive package](https://github.com/SimonBlanke/Hyperactive). Therefore, the algorithms are exactly the same in both packages and deliver the same results.
However, you can still use Gradient-Free-Optimizers as a standalone package.
The separation of Gradient-Free-Optimizers from Hyperactive enables multiple advantages:
- Even easier to use than Hyperactive
- Separate and more thorough testing
- Other developers can easily use GFO as an optimization backend if desired
- Better isolation from the complex information flow in Hyperactive. GFO only uses positions and scores in an N-dimensional search space. It returns only the new position after each iteration.
- A smaller and cleaner code base, if you want to explore my implementation of these optimization techniques.

While Gradient-Free-Optimizers is relatively simple, Hyperactive is a more complex project with additional features to make the optimization of computationally expensive models (like engineering simulations or machine-/deep-learning models) more convenient.
## Citation
@Misc{gfo2020,
author = {{Simon Blanke}},
title = {{Gradient-Free-Optimizers}: Simple and reliable optimization with local, global, population-based and sequential techniques in numerical search spaces.},
howpublished = {\url{https://github.com/SimonBlanke}},
year = {since 2020}
}
## License
Gradient-Free-Optimizers is licensed under the [MIT License](https://github.com/SimonBlanke/Gradient-Free-Optimizers/blob/master/LICENSE).