{"id":20040766,"url":"https://github.com/simonblanke/hyperactive","last_synced_at":"2025-05-14T19:07:13.474Z","repository":{"id":34945696,"uuid":"155687643","full_name":"SimonBlanke/Hyperactive","owner":"SimonBlanke","description":"An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.","archived":false,"fork":false,"pushed_at":"2025-05-05T16:32:05.000Z","size":31965,"stargazers_count":518,"open_issues_count":21,"forks_count":48,"subscribers_count":10,"default_branch":"master","last_synced_at":"2025-05-14T19:07:10.776Z","etag":null,"topics":["automated-machine-learning","bayesian-optimization","data-science","deep-learning","feature-engineering","hyperactive","hyperparameter-optimization","keras","machine-learning","model-selection","neural-architecture-search","optimization","parallel-computing","parameter-tuning","python","pytorch","scikit-learn","xgboost"],"latest_commit_sha":null,"homepage":"https://simonblanke.github.io/hyperactive-documentation","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SimonBlanke.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2018-11-01T08:53:30.000Z","updated_at":"2025-05-09T09:12:06.000Z","dependencies_parsed_at":"2023-10-14T20:50:50.348Z","dependency_job_id":"747d7d5c-1cda-4b83-be63-3d01efecb2cc","html_url":"https://github.com/SimonBlanke/Hyperactive","commit_stats":{"total_commits":2217,"total_committers":9,"mean_commits":"246.33333333333334","dds":0.006765899864682012,"las
t_synced_commit":"5be519d6a44dbb190d2f238627b848658d0a81e3"},"previous_names":[],"tags_count":34,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SimonBlanke%2FHyperactive","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SimonBlanke%2FHyperactive/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SimonBlanke%2FHyperactive/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SimonBlanke%2FHyperactive/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SimonBlanke","download_url":"https://codeload.github.com/SimonBlanke/Hyperactive/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254209859,"owners_count":22032897,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automated-machine-learning","bayesian-optimization","data-science","deep-learning","feature-engineering","hyperactive","hyperparameter-optimization","keras","machine-learning","model-selection","neural-architecture-search","optimization","parallel-computing","parameter-tuning","python","pytorch","scikit-learn","xgboost"],"created_at":"2024-11-13T10:43:44.031Z","updated_at":"2025-05-14T19:07:12.163Z","avatar_url":"https://github.com/SimonBlanke.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/SimonBlanke/Hyperactive\"\u003e\u003cimg src=\"./docs/images/logo.png\" 
height=\"250\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003cbr\u003e\n\n---\n\n\u003ch2 align=\"center\"\u003eAn optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.\u003c/h2\u003e\n\n\u003cbr\u003e\n\n\n\n\u003ctable\u003e\n  \u003ctbody\u003e\n    \u003ctr align=\"left\" valign=\"center\"\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eMaster status:\u003c/strong\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_ubuntu.yml/badge.svg?branch=master\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_windows.yml/badge.svg?branch=master\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_macos.yml/badge.svg?branch=master\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://app.codecov.io/gh/SimonBlanke/Hyperactive\"\u003e\n          \u003cimg src=\"https://img.shields.io/codecov/c/github/SimonBlanke/Hyperactive/master\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr align=\"left\" valign=\"center\"\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eDev status:\u003c/strong\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_ubuntu.yml/badge.svg?branch=dev\" 
alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_windows.yml/badge.svg?branch=dev\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://github.com/SimonBlanke/Hyperactive/actions\"\u003e\n          \u003cimg src=\"https://github.com/SimonBlanke/Hyperactive/actions/workflows/tests_macos.yml/badge.svg?branch=dev\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://app.codecov.io/gh/SimonBlanke/Hyperactive\"\u003e\n          \u003cimg src=\"https://img.shields.io/codecov/c/github/SimonBlanke/Hyperactive/dev\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr align=\"left\" valign=\"center\"\u003e\n      \u003ctd\u003e\n         \u003cstrong\u003eCode quality:\u003c/strong\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://codeclimate.com/github/SimonBlanke/Hyperactive\"\u003e\n        \u003cimg src=\"https://img.shields.io/codeclimate/maintainability/SimonBlanke/Hyperactive?style=flat-square\u0026logo=code-climate\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n        \u003ca href=\"https://scrutinizer-ci.com/g/SimonBlanke/Hyperactive/\"\u003e\n        \u003cimg src=\"https://img.shields.io/scrutinizer/quality/g/SimonBlanke/Hyperactive?style=flat-square\u0026logo=scrutinizer-ci\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr align=\"left\" valign=\"center\"\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eLatest versions:\u003c/strong\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://pypi.org/project/gradient_free_optimizers/\"\u003e\n          \u003cimg 
src=\"https://img.shields.io/pypi/v/Hyperactive?style=flat-square\u0026logo=PyPi\u0026logoColor=white\u0026color=blue\" alt=\"img not loaded: try F5 :)\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003cbr\u003e\n\n\n\n\n\u003cimg src=\"./docs/images/bayes_convex.gif\" align=\"right\" width=\"500\"\u003e\n\n## Hyperactive:\n\n- is [very easy](#hyperactive-is-very-easy-to-use) to learn but [extremely versatile](./examples/optimization_applications/search_space_example.py)\n\n- provides intelligent [optimization algorithms](#overview), support for all major [machine-learning frameworks](#overview) and many interesting [applications](#overview)\n\n- makes optimization [data collection](./examples/optimization_applications/meta_data_collection.py) simple\n\n- saves your [computation time](./examples/optimization_applications/memory.py)\n\n- supports [parallel computing](./examples/tested_and_supported_packages/multiprocessing_example.py)\n\n\n\n\n\u003cbr\u003e\n\u003cbr\u003e\n\u003cbr\u003e\n\n\nAs its name suggests, Hyperactive started as a hyperparameter optimization package, but it has been generalized to solve expensive gradient-free optimization problems. 
It uses the [Gradient-Free-Optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers) package as an optimization-backend and expands on it with additional features and tools.\n\n\n\u003cbr\u003e\n\n---\n\n\u003cdiv align=\"center\"\u003e\u003ca name=\"menu\"\u003e\u003c/a\u003e\n  \u003ch3\u003e\n    \u003ca href=\"https://github.com/SimonBlanke/Hyperactive#overview\"\u003eOverview\u003c/a\u003e •\n    \u003ca href=\"https://github.com/SimonBlanke/Hyperactive#installation\"\u003eInstallation\u003c/a\u003e •\n    \u003ca href=\"https://simonblanke.github.io/hyperactive-documentation/4.5/\"\u003eAPI reference\u003c/a\u003e •\n    \u003ca href=\"https://github.com/SimonBlanke/Hyperactive#roadmap\"\u003eRoadmap\u003c/a\u003e •\n    \u003ca href=\"https://github.com/SimonBlanke/Hyperactive#citing-hyperactive\"\u003eCitation\u003c/a\u003e •\n    \u003ca href=\"https://github.com/SimonBlanke/Hyperactive#license\"\u003eLicense\u003c/a\u003e\n  \u003c/h3\u003e\n\u003c/div\u003e\n\n---\n\n\u003cbr\u003e\n\n\n## Overview\n\n\u003ch3 align=\"center\"\u003e\nHyperactive features a collection of optimization algorithms that can be used for a variety of optimization problems. 
The following table shows examples of its capabilities:\n\u003c/h3\u003e\n\n\n\u003cbr\u003e\n\n\u003ctable\u003e\n  \u003ctbody\u003e\n    \u003ctr align=\"center\" valign=\"center\"\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eOptimization Techniques\u003c/strong\u003e\n        \u003cimg src=\"./docs/images/blue.jpg\"/\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eTested and Supported Packages\u003c/strong\u003e\n        \u003cimg src=\"./docs/images/blue.jpg\"/\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cstrong\u003eOptimization Applications\u003c/strong\u003e\n        \u003cimg src=\"./docs/images/blue.jpg\"/\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr/\u003e\n    \u003ctr valign=\"top\"\u003e\n      \u003ctd\u003e\n        \u003ca\u003e\u003cb\u003eLocal Search:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/hill_climbing.py\"\u003eHill Climbing\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/repulsing_hill_climbing.py\"\u003eRepulsing Hill Climbing\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/simulated_annealing.py\"\u003eSimulated Annealing\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/downhill_simplex.py\"\u003eDownhill Simplex Optimizer\u003c/a\u003e\u003c/li\u003e\n         \u003c/ul\u003e\u003cbr\u003e\n        \u003ca\u003e\u003cb\u003eGlobal Search:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/random_search.py\"\u003eRandom Search\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/grid_search.py\"\u003eGrid Search\u003c/a\u003e\u003c/li\u003e\n            
\u003cli\u003e\u003ca href=\"./examples/optimization_techniques/rand_rest_hill_climbing.py\"\u003eRandom Restart Hill Climbing\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/random_annealing.py\"\u003eRandom Annealing\u003c/a\u003e [\u003ca href=\"#/./overview#experimental-algorithms\"\u003e*\u003c/a\u003e] \u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/powells_method.py\"\u003ePowell's Method\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/pattern_search.py\"\u003ePattern Search\u003c/a\u003e\u003c/li\u003e\n         \u003c/ul\u003e\u003cbr\u003e\n        \u003ca\u003e\u003cb\u003ePopulation Methods:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/parallel_tempering.py\"\u003eParallel Tempering\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/particle_swarm_optimization.py\"\u003eParticle Swarm Optimizer\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/spiral_optimization.py\"\u003eSpiral Optimization\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003eGenetic Algorithm\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/evolution_strategy.py\"\u003eEvolution Strategy\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003eDifferential Evolution\u003c/li\u003e\n          \u003c/ul\u003e\u003cbr\u003e\n        \u003ca\u003e\u003cb\u003eSequential Methods:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/bayesian_optimization.py\"\u003eBayesian Optimization\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca 
href=\"./examples/optimization_techniques/lipschitz_optimization.py\"\u003eLipschitz Optimization\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/direct_algorithm.py\"\u003eDirect Algorithm\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/tpe.py\"\u003eTree of Parzen Estimators\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_techniques/forest_optimization.py\"\u003eForest Optimizer\u003c/a\u003e\n            [\u003ca href=\"#/./overview#references\"\u003edto\u003c/a\u003e] \u003c/li\u003e\n          \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca\u003e\u003cb\u003eMachine Learning:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/sklearn_example.py\"\u003eScikit-learn\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/xgboost_example.py\"\u003eXGBoost\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/lightgbm_example.py\"\u003eLightGBM\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/catboost_example.py\"\u003eCatBoost\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/rgf_example.py\"\u003eRGF\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/mlxtend_example.py\"\u003eMlxtend\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\u003cbr\u003e\n        \u003ca\u003e\u003cb\u003eDeep Learning:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n              \u003cli\u003e\u003ca 
href=\"./examples/tested_and_supported_packages/tensorflow_example.py\"\u003eTensorflow\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/keras_example.py\"\u003eKeras\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/pytorch_example.py\"\u003ePytorch\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\u003cbr\u003e\n        \u003ca\u003e\u003cb\u003eParallel Computing:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/multiprocessing_example.py\"\u003eMultiprocessing\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003e\u003ca href=\"./examples/tested_and_supported_packages/joblib_example.py\"\u003eJoblib\u003c/a\u003e\u003c/li\u003e\n              \u003cli\u003ePathos\u003c/li\u003e\n          \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca\u003e\u003cb\u003eFeature Engineering:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/feature_transformation.py\"\u003eFeature Transformation\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/feature_selection.py\"\u003eFeature Selection\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\n        \u003ca\u003e\u003cb\u003eMachine Learning:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/hyperpara_optimize.py\"\u003eHyperparameter Tuning\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/model_selection.py\"\u003eModel Selection\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/sklearn_pipeline_example.py\"\u003eSklearn 
Pipelines\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/ensemble_learning_example.py\"\u003eEnsemble Learning\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\n        \u003ca\u003e\u003cb\u003eDeep Learning:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/neural_architecture_search.py\"\u003eNeural Architecture Search\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/pretrained_nas.py\"\u003ePretrained Neural Architecture Search\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/transfer_learning.py\"\u003eTransfer Learning\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\n        \u003ca\u003e\u003cb\u003eData Collection:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/meta_data_collection.py\"\u003eSearch Data Collection\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/meta_optimization.py\"\u003eMeta Optimization\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/meta_learning.py\"\u003eMeta Learning\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\n        \u003ca\u003e\u003cb\u003eMiscellaneous:\u003c/b\u003e\u003c/a\u003e\n          \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/test_function.py\"\u003eTest Functions\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003eFit Gaussian Curves\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/multiple_scores.py\"\u003eManaging multiple objectives\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca 
href=\"./examples/optimization_applications/search_space_example.py\"\u003eManaging objects in search space\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/constrained_optimization.py\"\u003eConstrained Optimization\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"./examples/optimization_applications/memory.py\"\u003eMemorize evaluations\u003c/a\u003e\u003c/li\u003e\n          \u003c/ul\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\nThe examples above do not necessarily use realistic datasets or training procedures. \nTheir purpose is to execute quickly and to give the user ideas for interesting use cases.\n\n\n\u003cbr\u003e\n\n## Side Projects and Tools\n\nThe following packages are designed to support Hyperactive and expand its use cases. \n\n| Package                                                                       | Description                                                                          |\n|-------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|\n| [Search-Data-Collector](https://github.com/SimonBlanke/search-data-collector) | Simple tool to save search-data to CSV files during or after the optimization run. 
|\n| [Search-Data-Explorer](https://github.com/SimonBlanke/search-data-explorer)   | Visualize search-data with Plotly inside a Streamlit dashboard. |\n\nIf you want news about Hyperactive and related projects, you can follow me on [Twitter](https://twitter.com/blanke_simon).\n\n\n\u003cbr\u003e\n\n## Notebooks and Tutorials\n\n- [Introduction to Hyperactive](https://nbviewer.org/github/SimonBlanke/hyperactive-tutorial/blob/main/notebooks/hyperactive_tutorial.ipynb)\n\n\n\u003cbr\u003e\n\n## Installation\n\nThe most recent version of Hyperactive is available on PyPI:\n\n[![pyversions](https://img.shields.io/pypi/pyversions/hyperactive.svg?style=for-the-badge\u0026logo=python\u0026color=blue\u0026logoColor=white)](https://pypi.org/project/hyperactive)\n[![PyPI version](https://img.shields.io/pypi/v/hyperactive?style=for-the-badge\u0026logo=pypi\u0026color=green\u0026logoColor=white)](https://pypi.org/project/hyperactive/)\n[![PyPI downloads](https://img.shields.io/pypi/dm/hyperactive?style=for-the-badge\u0026color=red)](https://pypi.org/project/hyperactive/)\n\n```console\npip install hyperactive\n```\n\n\n\u003cbr\u003e\n\n## Example\n\n```python\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn.datasets import load_diabetes\nfrom hyperactive import Hyperactive\n\ndata = load_diabetes()\nX, y = data.data, data.target\n\n# define the model in a function\ndef model(opt):\n    # pass the suggested parameters to the machine learning model\n    gbr = GradientBoostingRegressor(\n        n_estimators=opt[\"n_estimators\"], max_depth=opt[\"max_depth\"]\n    )\n    scores = cross_val_score(gbr, X, y, cv=4)\n\n    # return a single numerical value\n    return scores.mean()\n\n# the search space defines the ranges of parameters you want the optimizer to search through\nsearch_space = {\n    \"n_estimators\": list(range(10, 150, 5)),\n    \"max_depth\": list(range(2, 12)),\n}\n\n# start the optimization 
run\nhyper = Hyperactive()\nhyper.add_search(model, search_space, n_iter=50)\nhyper.run()\n\n```\n\n\u003cbr\u003e\n\n## Hyperactive API reference\n\n\n\u003cbr\u003e\n\n### Basic Usage\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003e Hyperactive(verbosity, distribution, n_processes)\u003c/b\u003e\u003c/summary\u003e\n\n- verbosity = [\"progress_bar\", \"print_results\", \"print_times\"]\n  - Possible parameter types: (list, False)\n  - The verbosity list determines which parts of the optimization information will be printed in the command line.\n\n- distribution = \"multiprocessing\"\n  - Possible parameter types: (\"multiprocessing\", \"joblib\", \"pathos\")\n  - Determines which distribution service you want to use. Each library uses a different package to pickle objects:\n    - multiprocessing uses pickle\n    - joblib uses cloudpickle\n    - pathos uses dill\n  \n      \n- n_processes = \"auto\"\n  - Possible parameter types: (str, int)\n  - The maximum number of processes that are allowed to run simultaneously. If n_processes is an int, only that many jobs run simultaneously instead of all at once. So if n_processes=10 and n_jobs_total=35, the schedule would look like this: 10 - 10 - 10 - 5. This saves computational resources if n_jobs is large. If \"auto\", n_processes is the sum of all n_jobs (from .add_search(...)).\n\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003e .add_search(objective_function, search_space, n_iter, optimizer, n_jobs, initialize, pass_through, callbacks, catch, max_score, early_stopping, random_state, memory, memory_warm_start)\u003c/b\u003e\u003c/summary\u003e\n\n\n- objective_function\n  - Possible parameter types: (callable)\n  - The objective function defines the optimization problem. 
The optimization algorithm will try to maximize the numerical value that is returned by the objective function by trying out different parameters from the search space.\n\n\n- search_space\n  - Possible parameter types: (dict)\n  - Defines the space where the optimization algorithm can search for the best parameters for the given objective function.\n\n\n- n_iter\n  - Possible parameter types: (int)\n  - The number of iterations that will be performed during the optimization run. Each iteration consists of the optimization-step, which decides the next parameter that will be evaluated, and the evaluation-step, which runs the objective function with the chosen parameter and returns the score.\n\n\n- optimizer = \"default\"\n  - Possible parameter types: (\"default\", initialized optimizer object)\n  - Instance of an optimization class that can be imported from Hyperactive. \"default\" corresponds to the random search optimizer. The optimization classes imported from Hyperactive are different from those in Gradient-Free-Optimizers. They only accept optimizer-specific parameters. 
The following classes can be imported and used:\n  \n    - HillClimbingOptimizer\n    - StochasticHillClimbingOptimizer\n    - RepulsingHillClimbingOptimizer\n    - SimulatedAnnealingOptimizer\n    - DownhillSimplexOptimizer\n    - RandomSearchOptimizer\n    - GridSearchOptimizer\n    - RandomRestartHillClimbingOptimizer\n    - RandomAnnealingOptimizer\n    - PowellsMethod\n    - PatternSearch\n    - ParallelTemperingOptimizer\n    - ParticleSwarmOptimizer\n    - SpiralOptimization\n    - GeneticAlgorithmOptimizer\n    - EvolutionStrategyOptimizer\n    - DifferentialEvolutionOptimizer\n    - BayesianOptimizer\n    - LipschitzOptimizer\n    - DirectAlgorithm\n    - TreeStructuredParzenEstimators\n    - ForestOptimizer\n    \n  - Example:\n    ```python\n    ...\n    \n    opt_hco = HillClimbingOptimizer(epsilon=0.08)\n    hyper = Hyperactive()\n    hyper.add_search(..., optimizer=opt_hco)\n    hyper.run()\n    \n    ...\n    ```\n\n\n- n_jobs = 1\n  - Possible parameter types: (int)\n  - Number of jobs to run in parallel. Those jobs are optimization runs that work independently of one another (no information sharing). If n_jobs == -1, the maximum available number of CPU cores is used.\n\n\n- initialize = {\"grid\": 4, \"random\": 2, \"vertices\": 4}\n  - Possible parameter types: (dict)\n  - The initialization dictionary automatically determines a number of parameters that will be evaluated in the first n iterations (n is the sum of the values in initialize). The initialize keywords are the following:\n    - grid\n      - Initializes positions in a grid-like pattern. Positions that cannot be put into a grid are randomly positioned. For very high dimensional search spaces (\u003e30) this pattern becomes random.\n    - vertices\n      - Initializes positions at the vertices of the search space. 
Positions that cannot be put into a new vertex are randomly positioned.\n\n    - random\n      - Number of randomly initialized positions\n\n    - warm_start\n      - List of parameter dictionaries that mark additional start points for the optimization run.\n  \n    Example:\n    ```python\n    ... \n    search_space = {\n        \"x1\": list(range(10, 150, 5)),\n        \"x2\": list(range(2, 12)),\n    }\n\n    ws1 = {\"x1\": 10, \"x2\": 2}\n    ws2 = {\"x1\": 15, \"x2\": 10}\n\n    hyper = Hyperactive()\n    hyper.add_search(\n        model,\n        search_space,\n        n_iter=30,\n        initialize={\"grid\": 4, \"random\": 10, \"vertices\": 4, \"warm_start\": [ws1, ws2]},\n    )\n    hyper.run()\n    ```\n\n\n- pass_through = {}\n  - Possible parameter types: (dict)\n  - The pass_through parameter accepts a dictionary that contains information that will be passed to the objective-function argument. This information will not change during the optimization run, unless the user changes it (within the objective-function).\n  \n    Example:\n    ```python\n    ... \n    def objective_function(para):\n        para.pass_through[\"stuff1\"] # \u003c--- this variable is 1\n        para.pass_through[\"stuff2\"] # \u003c--- this variable is 2\n\n        score = -para[\"x1\"] * para[\"x1\"]\n        return score\n\n    pass_through = {\n      \"stuff1\": 1,\n      \"stuff2\": 2,\n    }\n\n    hyper = Hyperactive()\n    hyper.add_search(\n        objective_function,\n        search_space,\n        n_iter=30,\n        pass_through=pass_through,\n    )\n    hyper.run()\n    ```\n\n\n- callbacks = {}\n  - Possible parameter types: (dict)\n  - The callbacks parameter enables you to pass functions to Hyperactive that are called every iteration during the optimization run. These functions have access to the same argument as the objective-function. You can decide whether the functions are called before or after the objective-function is evaluated via the keys of the callbacks-dictionary. 
The values of the dictionary are lists of the callback-functions. The following example shows the way to use callbacks: \n\n\n    Example:\n    ```python\n    ...\n\n    def callback_1(access):\n      pass  # do some stuff\n\n    def callback_2(access):\n      pass  # do some stuff\n\n    def callback_3(access):\n      pass  # do some stuff\n\n    hyper = Hyperactive()\n    hyper.add_search(\n        objective_function,\n        search_space,\n        n_iter=100,\n        callbacks={\n          \"after\": [callback_1, callback_2],\n          \"before\": [callback_3]\n          },\n    )\n    hyper.run()\n    ```\n\n\n- catch = {}\n  - Possible parameter types: (dict)\n  - The catch parameter provides a way to handle exceptions that occur during the evaluation of the objective-function or the callbacks. It is a dictionary that accepts the exception class as a key and the score that is returned instead as the value. This way you can handle multiple types of exceptions and return different scores for each. \n  In the case of an exception it often makes sense to return `np.nan` as a score. You can see an example of this in the following code-snippet:\n\n    Example:\n    ```python\n    ...\n    \n    hyper = Hyperactive()\n    hyper.add_search(\n        objective_function,\n        search_space,\n        n_iter=100,\n        catch={\n          ValueError: np.nan,\n          },\n    )\n    hyper.run()\n    ```\n\n\n- max_score = None\n  - Possible parameter types: (float, None)\n  - Maximum score until the optimization stops. The score will be checked after each completed iteration.\n\n\n- early_stopping = None\n  - Possible parameter types: (dict, None)\n  - Stops the optimization run early if it did not achieve any score-improvement within the last iterations. The early_stopping-parameter lets you set three parameters:\n    - `n_iter_no_change`: Non-optional int-parameter. This marks the last n iterations to look for an improvement over the iterations that came before n. 
If the best score of the entire run is within those last n iterations the run will continue (until other stopping criteria are met), otherwise the run will stop.
    - `tol_abs`: Optional float-parameter. The score must have improved by at least this absolute tolerance in the last n iterations over the best score in the iterations before n. This is an absolute value, so 0.1 means an improvement of 0.8 -> 0.9 is acceptable but 0.81 -> 0.9 would stop the run.
    - `tol_rel`: Optional float-parameter. The score must have improved by at least this relative tolerance (in percent) in the last n iterations over the best score in the iterations before n. This is a relative value, so 10 means an improvement of 0.8 -> 0.88 is acceptable but 0.8 -> 0.87 would stop the run.

- random_state = None
  - Possible parameter types: (int, None)
  - Random state for random processes in the random, numpy and scipy modules.


- memory = "share"
  - Possible parameter types: (bool, "share")
  - Whether or not to use the "memory"-feature. The memory is a dictionary that gets filled with parameters and scores during the optimization run. If the optimizer encounters a parameter set that is already in the dictionary, it just extracts the score instead of reevaluating the objective function (which can take a long time). 
If memory is set to "share" and there are multiple jobs for the same objective function, the memory dictionary is automatically shared between the different processes.

- memory_warm_start = None
  - Possible parameter types: (pandas dataframe, None)
  - Pandas dataframe that contains score and parameter information that will be automatically loaded into the memory-dictionary.

      example:

      <table class="table">
        <thead class="table-head">
          <tr class="row">
            <td class="cell">score</td>
            <td class="cell">x1</td>
            <td class="cell">x2</td>
            <td class="cell">x...</td>
          </tr>
        </thead>
        <tbody class="table-body">
          <tr class="row">
            <td class="cell">0.756</td>
            <td class="cell">0.1</td>
            <td class="cell">0.2</td>
            <td class="cell">...</td>
          </tr>
          <tr class="row">
            <td class="cell">0.823</td>
            <td class="cell">0.3</td>
            <td class="cell">0.1</td>
            <td class="cell">...</td>
          </tr>
          <tr class="row">
            <td class="cell">...</td>
            <td class="cell">...</td>
            <td class="cell">...</td>
            <td class="cell">...</td>
          </tr>
          <tr class="row">
            <td class="cell">...</td>
            <td class="cell">...</td>
            <td class="cell">...</td>
            <td class="cell">...</td>
          </tr>
        </tbody>
      </table>
  
  
</details>



<details>
<summary><b> .run(max_time)</b></summary>

- max_time = None
  - Possible parameter types: (float, None)
  - Maximum number of seconds until the optimization stops. The time is checked after each completed iteration.

</details>



<br>

### Special Parameters

<details>
<summary><b> Objective Function</b></summary>

Each iteration consists of two steps:
 - The optimization step: decides what position in the search space (parameter set) to evaluate next 
 - The evaluation step: calls the objective function, which returns the score for the given position in the search space
  
The objective function has one argument that is often called "para", "params", "opt" or "access".
This argument is your access to the parameter set that the optimizer has selected in the
corresponding iteration. 

```python
def objective_function(opt):
    # get x1 and x2 from the argument "opt"
    x1 = opt["x1"]
    x2 = opt["x2"]

    # calculate the score with the parameter set
    score = -(x1 * x1 + x2 * x2)

    # return the score
    return score
```

The objective function must always return a score, which shows how "good" or "bad" the current parameter set is. 
But you can also return some additional information with a dictionary:

```python
def objective_function(opt):
    x1 = opt["x1"]
    x2 = opt["x2"]

    score = -(x1 * x1 + x2 * x2)

    other_info = {
      "x1 squared" : x1**2,
      "x2 squared" : x2**2,
    }

    return score, other_info
```

When you take a look at the results (a pandas dataframe with all iteration information) after the run has ended, you will see the additional information in it. The reason we need a dictionary for this is that Hyperactive needs to know the names of the additional parameters. The score does not need that, because it is always called "score" in the results. You can run [this example script](https://github.com/SimonBlanke/Hyperactive/blob/master/examples/optimization_applications/multiple_scores.py) if you want to give it a try.

</details>


<details>
<summary><b> Search Space Dictionary</b></summary>

The search space defines what values the optimizer can select during the search. These selected values will be inside the objective function argument and can be accessed like in a dictionary. The values in each search space dimension should always be in a list. If you use np.arange you should wrap the result in a list:

```python
search_space = {
    "x1": list(np.arange(-100, 101, 1)),
    "x2": list(np.arange(-100, 101, 1)),
}
```

A special feature of Hyperactive is shown in the next example. You can put not just numeric values into the search space dimensions, but also strings and functions. 
This enables a very high flexibility in how you can create your studies.

```python
def func1():
  # do stuff
  return stuff
  

def func2():
  # do stuff
  return stuff


search_space = {
    "x": list(np.arange(-100, 101, 1)),
    "str": ["a string", "another string"],
    "function" : [func1, func2],
}
```

If you want to put other types of variables (like numpy arrays, pandas dataframes, lists, ...) into the search space you can do that via functions:

```python
def array1():
  return np.array([1, 2, 3])
  

def array2():
  return np.array([3, 2, 1])


search_space = {
    "x": list(np.arange(-100, 101, 1)),
    "str": ["a string", "another string"],
    "numpy_array" : [array1, array2],
}
```

The functions return the numpy arrays, so you can use them inside the objective function. 


</details>


<details>
<summary><b> Optimizer Classes</b></summary>

Each of the following optimizer classes can be initialized and passed to the "add_search"-method via the "optimizer"-argument. During this initialization the optimizer class accepts **only optimizer-specific parameters** (no random_state, initialize, ... ):
  
  ```python
  optimizer = HillClimbingOptimizer(epsilon=0.1, distribution="laplace", n_neighbours=4)
  ```
  
  for the default parameters you can just write:
  
  ```python
  optimizer = HillClimbingOptimizer()
  ```
  
  and pass it to Hyperactive:
  
  ```python
  hyper = Hyperactive()
  hyper.add_search(model, search_space, optimizer=optimizer, n_iter=100)
  hyper.run()
  ```
  
  So the optimizer-classes are **different** from Gradient-Free-Optimizers. 
A more detailed explanation of the optimization-algorithms and the optimizer-specific parameters can be found in the [Optimization Tutorial](https://github.com/SimonBlanke/optimization-tutorial).

- HillClimbingOptimizer
- RepulsingHillClimbingOptimizer
- SimulatedAnnealingOptimizer
- DownhillSimplexOptimizer
- RandomSearchOptimizer
- GridSearchOptimizer
- RandomRestartHillClimbingOptimizer
- RandomAnnealingOptimizer
- PowellsMethod
- PatternSearch
- ParallelTemperingOptimizer
- ParticleSwarmOptimizer
- GeneticAlgorithmOptimizer
- EvolutionStrategyOptimizer
- DifferentialEvolutionOptimizer
- BayesianOptimizer
- TreeStructuredParzenEstimators
- ForestOptimizer

</details>



<br>

### Result Attributes


<details>
<summary><b> .best_para(objective_function)</b></summary>

- objective_function
  - (callable)
- returns: dictionary
- Parameter dictionary of the best score of the given objective_function found in the previous optimization run.

  example:
  ```python
  {
    'x1': 0.2, 
    'x2': 0.3,
  }
  ```
  
</details>


<details>
<summary><b> .best_score(objective_function)</b></summary>

- objective_function
  - (callable)
- returns: int or float
- Numerical value of the best score of the given objective_function found in the previous optimization run.

</details>


<details>
<summary><b> .search_data(objective_function, times=False)</b></summary>

- objective_function
  - (callable)
- returns: Pandas dataframe 
- The dataframe contains score and parameter information of the given objective_function found in the optimization run. If the parameter `times` is set to True, the evaluation- and iteration-times are added to the dataframe. 
    example:

    <table class="table">
      <thead class="table-head">
        <tr class="row">
          <td class="cell">score</td>
          <td class="cell">x1</td>
          <td class="cell">x2</td>
          <td class="cell">x...</td>
        </tr>
      </thead>
      <tbody class="table-body">
        <tr class="row">
          <td class="cell">0.756</td>
          <td class="cell">0.1</td>
          <td class="cell">0.2</td>
          <td class="cell">...</td>
        </tr>
        <tr class="row">
          <td class="cell">0.823</td>
          <td class="cell">0.3</td>
          <td class="cell">0.1</td>
          <td class="cell">...</td>
        </tr>
        <tr class="row">
          <td class="cell">...</td>
          <td class="cell">...</td>
          <td class="cell">...</td>
          <td class="cell">...</td>
        </tr>
        <tr class="row">
          <td class="cell">...</td>
          <td class="cell">...</td>
          <td class="cell">...</td>
          <td class="cell">...</td>
        </tr>
      </tbody>
    </table>

</details>




<br>

## Roadmap


<details>
<summary><b>v2.0.0</b> :heavy_check_mark:</summary>

  - [x] Change 
API
</details>


<details>
<summary><b>v2.1.0</b> :heavy_check_mark:</summary>

  - [x] Save memory of evaluations for later runs (long term memory)
  - [x] Warm start sequence based optimizers with long term memory
  - [x] Gaussian process regressors from various packages (gpy, sklearn, GPflow, ...) via wrapper
</details>


<details>
<summary><b>v2.2.0</b> :heavy_check_mark:</summary>

  - [x] Add basic dataset meta-features to long term memory
  - [x] Add helper-functions for memory
      - [x] connect two different model/dataset hashes
      - [x] split two different model/dataset hashes
      - [x] delete memory of model/dataset
      - [x] return best known model for dataset
      - [x] return search space for best model
      - [x] return best parameter for best model
</details>


<details>
<summary><b>v2.3.0</b> :heavy_check_mark:</summary>

  - [x] Tree-structured Parzen Estimator
  - [x] Decision Tree Optimizer
  - [x] add "max_sample_size" and "skip_retrain" parameter for SMBO to decrease optimization time
</details>


<details>
<summary><b>v3.0.0</b> :heavy_check_mark:</summary>

  - [x] New API
      - [x] expand usage of objective-function
      - [x] No passing of training data into Hyperactive
      - [x] Removing "long term memory"-support (better to do in separate package)
      - [x] More intuitive selection of optimization strategies and parameters
      - [x] Separate optimization algorithms into other package
      - [x] expand api so that optimizer parameter can be changed at runtime
      - [x] add extensive testing procedure (similar to Gradient-Free-Optimizers)

</details>


<details>
<summary><b>v3.1.0</b> :heavy_check_mark:</summary>

  - [x] Decouple number of runs from active processes (Thanks to [PartiallyTyped](https://github.com/PartiallyTyped))

</details>


<details>
<summary><b>v3.2.0</b> :heavy_check_mark:</summary>

  - [x] Dashboard for visualization of search-data at runtime via streamlit (Progress-Board)

</details>


<details>
<summary><b>v3.3.0</b> :heavy_check_mark:</summary>

  - [x] Early stopping 
  - [x] Shared memory dictionary between processes with the same objective function

</details>


<details>
<summary><b>v4.0.0</b> :heavy_check_mark:</summary>

  - [x] small adjustments to API
  - [x] move optimization strategies into sub-module "optimizers"
  - [x] preparation for future add-ons (long-term-memory, meta-learn, ...) from separate repositories
  - [x] separate progress board into separate repository

</details>


<details>
<summary><b>v4.1.0</b> :heavy_check_mark:</summary>

  - [x] add python 3.9 to testing
  - [x] add pass_through-parameter
  - [x] add v1 GFO optimization algorithms

</details>


<details>
<summary><b>v4.2.0</b> :heavy_check_mark:</summary>

  - [x] add callbacks-parameter
  - [x] add catch-parameter
  - [x] add option to add eval- and iter-times to search-data

</details>


<details>
<summary><b>v4.3.0</b> :heavy_check_mark:</summary>

  - [x] add new features from GFO
    - [x] add Spiral Optimization
    - [x] add Lipschitz Optimizer
    - [x] add DIRECT Optimizer
    - [x] print the random seed for reproducibility

</details>


<details>
<summary><b>v4.4.0</b> :heavy_check_mark: </summary>

  - [x] add Optimization-Strategies
  - [x] redesign progress-bar

</details>


<details>
<summary><b>v4.5.0</b> :heavy_check_mark: </summary>

  - [x] add early stopping feature to custom optimization strategies
  - [x] display additional outputs from objective-function in results in command-line
  - [x] add type hints to hyperactive-api
  
</details>


<details>
<summary><b>v4.6.0</b> :heavy_check_mark: </summary>

  - [x] add support for constrained optimization
  
</details>


<details>
<summary><b>v4.7.0</b> :heavy_check_mark: </summary>

  - [x] add Genetic algorithm optimizer
  - [x] add Differential evolution 
optimizer

</details>


<details>
<summary><b>v4.8.0</b> :heavy_check_mark:</summary>

  - [x] add support for numpy v2
  - [x] add support for pandas v2
  - [x] add support for python 3.12
  - [x] transfer setup.py to pyproject.toml
  - [x] change project structure to src-layout

</details>


<details>
<summary><b>v4.9.0</b> </summary>

  - [ ] add sklearn integration

</details>




<details>
<summary><b>Future releases</b> </summary>

  - [ ] new optimization algorithms from [Gradient-Free-Optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers) will always be added to Hyperactive
  - [ ] add "prune_search_space"-method to custom optimization strategy class
  
</details>



<br>

## FAQ

#### Known Errors + Solutions

<details>
<summary><b> Read this before opening a bug-issue </b></summary>

<br>
  
- <b>Are you sure the bug is located in Hyperactive? </b>

  The error might be located in the optimization-backend. 
  Look at the error message from the command line. <b>If</b> one of the last messages looks like this:
     - File "/.../gradient_free_optimizers/...", line ...

  <b>Then</b> you should post the bug report in: 
     - https://github.com/SimonBlanke/Gradient-Free-Optimizers

  <br><b>Otherwise</b> you can post the bug report in Hyperactive.
  
- <b>Do you have the correct Hyperactive version? </b>
  
  With every major version update (e.g. v2.2 -> v3.0) the API of Hyperactive changes.
  Check which version of Hyperactive you have. 
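  One way to check the installed version is to query the package metadata with the standard library (a minimal sketch, no Hyperactive-specific API assumed):

  ```python
  from importlib.metadata import version, PackageNotFoundError

  try:
      # prints the version string of the installed distribution
      print(version("hyperactive"))
  except PackageNotFoundError:
      print("hyperactive is not installed")
  ```
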
If your major version is older you have two options:
  
  <b>Recommended:</b> You could just update your Hyperactive version with:
  ```bash
  pip install hyperactive --upgrade
  ```
  This way you can use all the new documentation and examples from the current repository.
    
  Or you could continue using the old version and use an old repository branch as documentation.
  You can do that by selecting the corresponding branch. (top right of the repository. The default is "master" or "main")
  So if your major version is older (e.g. v2.1.0) you can select the 2.x.x branch to get the old repository for that version.
  
- <b>Provide example code for error reproduction </b>

  To understand and fix the issue I need example code that reproduces the error.
  I must be able to just copy the code into a py-file and execute it to reproduce the error.
  
</details>


<details>
<summary> MemoryError: Unable to allocate ... for an array with shape (...) </summary>

<br>

This is expected behavior of the current implementation of SMBO-optimizers. For all sequential model-based algorithms you have to keep an eye on the search-space size:
```python
search_space_size = 1
for value_ in search_space.values():
    search_space_size *= len(value_)
    
print("search_space_size", search_space_size)
```
Reduce the search-space size to resolve this error.

</details>


<details>
<summary> TypeError: cannot pickle '_thread.RLock' object </summary>

<br>

This occurs because you have classes and/or non-top-level objects in the search space. Pickle (used by multiprocessing) cannot serialize them. 
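The underlying limitation can be reproduced in plain Python (a minimal sketch; the lambda stands in for any non-top-level object in a search space):

```python
import pickle

try:
    # lambdas, like other non-top-level objects, cannot be pickled
    pickle.dumps(lambda x: x)
except Exception as exc:
    print(type(exc).__name__)
```
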
Setting distribution to "joblib" or "pathos" may fix this problem:
```python
hyper = Hyperactive(distribution="joblib")
```

</details>


<details>
<summary> Command line full of warnings </summary>

<br>

These are very often warnings from sklearn or numpy. They do not correlate with bad performance from Hyperactive. Your code will most likely run fine. Those warnings are very difficult to silence.

It should help to put this at the very top of your script:
```python
def warn(*args, **kwargs):
    pass


import warnings

warnings.warn = warn
```

</details>


<details>
<summary> Warning: Not enough initial positions for population size </summary>

<br>
  
This warning occurs because Hyperactive needs more initial positions to choose from to generate a population for the optimization algorithm.
The number of initial positions is determined by the `initialize`-parameter in the `add_search`-method.
```python
# This is how it looks per default
initialize = {"grid": 4, "random": 2, "vertices": 4}
  
# You could set it to this for a maximum population of 20
initialize = {"grid": 4, "random": 12, "vertices": 4}
```
  
</details>



<br>

## References

#### [dto] [Scikit-Optimize](https://github.com/scikit-optimize/scikit-optimize/blob/master/skopt/learning/forest.py)

<br>

## Citing Hyperactive

    @Misc{hyperactive2021,
      author =   {{Simon Blanke}},
      title =    {{Hyperactive}: An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.},
      howpublished = {\url{https://github.com/SimonBlanke}},
      year = {since 2019}
    }


<br>

## License

[![LICENSE](https://img.shields.io/github/license/SimonBlanke/Hyperactive?style=for-the-badge)](https://github.com/SimonBlanke/Hyperactive/blob/master/LICENSE)