{"id":48038971,"url":"https://github.com/auto-flow/ultraopt","last_synced_at":"2026-04-04T14:05:03.210Z","repository":{"id":54636413,"uuid":"321271757","full_name":"auto-flow/ultraopt","owner":"auto-flow","description":"Distributed Asynchronous Hyperparameter Optimization better than HyperOpt. 比HyperOpt更强的分布式异步超参优化库。","archived":false,"fork":false,"pushed_at":"2022-01-22T09:06:59.000Z","size":10779,"stargazers_count":108,"open_issues_count":3,"forks_count":15,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-12-01T20:55:03.110Z","etag":null,"topics":["automl","bayesian-optimization","blackbox-optimization","hyperopt","hyperparameter-optimization","machine-learning","multi-fidelity","optimization","python"],"latest_commit_sha":null,"homepage":"https://auto-flow.github.io/ultraopt/zh/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/auto-flow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-12-14T07:48:49.000Z","updated_at":"2025-06-02T03:46:37.000Z","dependencies_parsed_at":"2022-08-13T22:20:27.028Z","dependency_job_id":null,"html_url":"https://github.com/auto-flow/ultraopt","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/auto-flow/ultraopt","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/auto-flow%2Fultraopt","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/auto-flow%2Fultraopt/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/auto-flow%2Fultraopt/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repo
sitories/auto-flow%2Fultraopt/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/auto-flow","download_url":"https://codeload.github.com/auto-flow/ultraopt/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/auto-flow%2Fultraopt/sbom","scorecard":{"id":217166,"data":{"date":"2025-08-11","repo":{"name":"github.com/auto-flow/ultraopt","commit":"6ff221027c4b1b022499d0b7d46b65f18815ada8"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":2.5,"checks":[{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Code-Review","score":0,"reason":"Found 0/30 
approved changesets -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"SAST","score":0,"reason":"no SAST tool detected","details":["Warn: no pull requests merged into dev branch"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Pinned-Dependencies","score":-1,"reason":"no dependencies found","details":null,"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security 
policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: BSD 3-Clause \"New\" or \"Revised\" License: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'main'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Vulnerabilities","score":6,"reason":"4 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: PYSEC-2020-107 / GHSA-jjw5-xxj6-pcv5","Warn: Project is vulnerable to: PYSEC-2024-110 / GHSA-jw8x-6495-233v","Warn: Project is vulnerable to: PYSEC-2020-108","Warn: Project is vulnerable to: PYSEC-2017-74"],"documentation":{"short":"Determines if the project has open, known unfixed 
vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}}]},"last_synced_at":"2025-08-17T01:52:17.853Z","repository_id":54636413,"created_at":"2025-08-17T01:52:17.853Z","updated_at":"2025-08-17T01:52:17.853Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31402277,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automl","bayesian-optimization","blackbox-optimization","hyperopt","hyperparameter-optimization","machine-learning","multi-fidelity","optimization","python"],"created_at":"2026-04-04T14:04:58.795Z","updated_at":"2026-04-04T14:05:03.182Z","avatar_url":"https://github.com/auto-flow.png","language":"Python","readme":"\n\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://img-blog.csdnimg.cn/20210110141724960.png\"\u003e\u003c/img\u003e\u003c/p\u003e\n\n\n\n[![Build Status](https://travis-ci.org/auto-flow/ultraopt.svg?branch=main)](https://travis-ci.org/auto-flow/ultraopt) \n[![PyPI 
version](https://badge.fury.io/py/ultraopt.svg?maxAge=2592000)](https://badge.fury.io/py/ultraopt)\n[![Download](https://img.shields.io/pypi/dm/ultraopt.svg)](https://pypi.python.org/pypi/ultraopt)\n![](https://img.shields.io/badge/license-BSD-green)\n![PythonVersion](https://img.shields.io/badge/python-3.6+-blue)\n[![GitHub Star](https://img.shields.io/github/stars/auto-flow/ultraopt.svg)](https://github.com/auto-flow/ultraopt/stargazers) [![GitHub forks](https://img.shields.io/github/forks/auto-flow/ultraopt.svg)](https://github.com/auto-flow/ultraopt/network) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4430148.svg)](https://zenodo.org/record/4430148)\n\n`UltraOpt` : **Distributed Asynchronous Hyperparameter Optimization better than HyperOpt**.\n\n---\n\n`UltraOpt` is a simple and efficient library for minimizing expensive and noisy black-box functions. It can be used in many fields, such as Hyperparameter Optimization (**HPO**) and \nAutomated Machine Learning (**AutoML**). \n\nAfter absorbing the advantages of existing optimization libraries such as \n[HyperOpt](https://github.com/hyperopt/hyperopt)[\u003csup\u003e[5]\u003c/sup\u003e](#refer-5), [SMAC3](https://github.com/automl/SMAC3)[\u003csup\u003e[3]\u003c/sup\u003e](#refer-3), \n[scikit-optimize](https://github.com/scikit-optimize/scikit-optimize)[\u003csup\u003e[4]\u003c/sup\u003e](#refer-4) and [HpBandSter](https://github.com/automl/HpBandSter)[\u003csup\u003e[2]\u003c/sup\u003e](#refer-2), we developed \n`UltraOpt`, which implements a new Bayesian optimization algorithm: Embedding-Tree-Parzen-Estimator (**ETPE**), which proved better than HyperOpt's TPE algorithm in our experiments.\nIn addition, the optimizer of `UltraOpt` is redesigned to support **HyperBand \u0026 SuccessiveHalving evaluation strategies**[\u003csup\u003e[6]\u003c/sup\u003e](#refer-6)[\u003csup\u003e[7]\u003c/sup\u003e](#refer-7) and **MapReduce \u0026 asynchronous communication** settings.\nFinally, you can visualize the `Config Space` and the 
`optimization process \u0026 results` with `UltraOpt`'s tool functions. Enjoy it!\n\nOther languages: [Chinese README](README.zh_CN.md)\n\n- **Documentation**\n\n    + English documentation is not yet available.\n\n    + [Chinese documentation](https://auto-flow.github.io/ultraopt/zh/)\n\n- **Tutorials**\n\n    + English tutorials are not yet available.\n\n    + [Chinese tutorials](https://github.com/auto-flow/ultraopt/tree/main/tutorials_zh)\n\n**Table of Contents**\n\n- [Installation](#Installation)\n- [Quick Start](#Quick-Start)\n    + [Using UltraOpt in HPO](#Using-UltraOpt-in-HPO)\n    + [Using UltraOpt in AutoML](#Using-UltraOpt-in-AutoML)\n- [Our Advantages](#Our-Advantages)\n    + [Advantage One: ETPE optimizer is more competitive](#Advantage-One-ETPE-optimizer-is-more-competitive)\n    + [Advantage Two: UltraOpt is more adaptable to distributed computing](#Advantage-Two-UltraOpt-is-more-adaptable-to-distributed-computing)\n    + [Advantage Three: UltraOpt is more function comlete and user friendly](#advantage-three-ultraopt-is-more-function-comlete-and-user-friendly)\n- [Citation](#Citation)\n- [Reference](#referance)\n\n# Installation\n\nUltraOpt requires Python 3.6 or higher.\n\nYou can install the latest release with `pip`:\n\n```bash\npip install ultraopt\n```\n\nOr you can clone the repository and install manually:\n\n```bash\ngit clone https://github.com/auto-flow/ultraopt.git \u0026\u0026 cd ultraopt\npython setup.py install\n```\n\n# Quick Start\n\n## Using UltraOpt in HPO\n\nLet's see what `UltraOpt` can do through several examples (you can try them in a `Jupyter Notebook`). 
\n\nYou can learn the basic tutorial [here](https://auto-flow.github.io/ultraopt/zh/_tutorials/01._Basic_Tutorial.html), and `HDL`'s definition [here](https://auto-flow.github.io/ultraopt/zh/_tutorials/02._Multiple_Parameters.html).\n\nBefore starting a black-box optimization task, you need to provide two things:\n\n- the parameter domain, or **Config Space**\n- the objective function, which accepts a `config` (sampled from the **Config Space**) and returns a `loss`\n\nLet's define a Random Forest HPO **Config Space** with `UltraOpt`'s `HDL` (Hyperparameter Description Language):\n\n```python\nHDL = {\n    \"n_estimators\": {\"_type\": \"int_quniform\",\"_value\": [10, 200, 10], \"_default\": 100},\n    \"criterion\": {\"_type\": \"choice\",\"_value\": [\"gini\", \"entropy\"],\"_default\": \"gini\"},\n    \"max_features\": {\"_type\": \"choice\",\"_value\": [\"sqrt\",\"log2\"],\"_default\": \"sqrt\"},\n    \"min_samples_split\": {\"_type\": \"int_uniform\", \"_value\": [2, 20],\"_default\": 2},\n    \"min_samples_leaf\": {\"_type\": \"int_uniform\", \"_value\": [1, 20],\"_default\": 1},\n    \"bootstrap\": {\"_type\": \"choice\",\"_value\": [True, False],\"_default\": True},\n    \"random_state\": 42\n}\n```\n\nAnd then define an objective function:\n\n```python\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import load_digits\nfrom sklearn.model_selection import cross_val_score, StratifiedKFold\nfrom ultraopt.hdl import layering_config\nX, y = load_digits(return_X_y=True)\ncv = StratifiedKFold(5, shuffle=True, random_state=0)\ndef evaluate(config: dict) -\u003e float:\n    model = RandomForestClassifier(**layering_config(config))\n    return 1 - float(cross_val_score(model, X, y, cv=cv).mean())\n```\n\nNow, we can start an optimization process:\n\n```python\nfrom ultraopt import fmin\nresult = fmin(eval_func=evaluate, config_space=HDL, optimizer=\"ETPE\", n_iterations=30)\nresult\n```\n\n```\n100%|██████████| 30/30 [00:36\u003c00:00,  1.23s/trial, best loss: 
0.023]\n\n+-----------------------------------+\n| HyperParameters   | Optimal Value |\n+-------------------+---------------+\n| bootstrap         | True:bool     |\n| criterion         | gini          |\n| max_features      | log2          |\n| min_samples_leaf  | 1             |\n| min_samples_split | 2             |\n| n_estimators      | 200           |\n+-------------------+---------------+\n| Optimal Loss      | 0.0228        |\n+-------------------+---------------+\n| Num Configs       | 30            |\n+-------------------+---------------+\n```\n\nFinally, make a simple visualization:\n\n```python\nresult.plot_convergence()\n```\n\n![quickstart1](https://img-blog.csdnimg.cn/20210110141723520.png)\n\nYou can visualize high-dimensional interactions with Facebook's HiPlot:\n\n```python\n!pip install hiplot\nresult.plot_hi(target_name=\"accuracy\", loss2target_func=lambda x:1-x)\n```\n\n![hiplot](https://img-blog.csdnimg.cn/20210110130444272.png)\n\n## Using UltraOpt in AutoML\n\nLet's try a more complex example: solving AutoML's **CASH problem** [\u003csup\u003e[1]\u003c/sup\u003e](#refer-1) (the combined problem of algorithm selection and hyperparameter optimization) \nwith the BOHB algorithm[\u003csup\u003e[2]\u003c/sup\u003e](#refer-2) (combining the **HyperBand**[\u003csup\u003e[6]\u003c/sup\u003e](#refer-6) evaluation strategy with `UltraOpt`'s **ETPE** optimizer).\n\nYou can learn about conditional parameters and complex `HDL` definitions [here](https://auto-flow.github.io/ultraopt/zh/_tutorials/03._Conditional_Parameter.html), follow an AutoML implementation tutorial [here](https://auto-flow.github.io/ultraopt/zh/_tutorials/05._Implement_a_Simple_AutoML_System.html), and read about multi-fidelity optimization [here](https://auto-flow.github.io/ultraopt/zh/_tutorials/06._Combine_Multi-Fidelity_Optimization.html).\n\nFirst of all, let's define a **CASH** `HDL`:\n\n```python\nHDL = {\n    'classifier(choice)':{\n        \"RandomForestClassifier\": {\n          \"n_estimators\": 
{\"_type\": \"int_quniform\",\"_value\": [10, 200, 10], \"_default\": 100},\n          \"criterion\": {\"_type\": \"choice\",\"_value\": [\"gini\", \"entropy\"],\"_default\": \"gini\"},\n          \"max_features\": {\"_type\": \"choice\",\"_value\": [\"sqrt\",\"log2\"],\"_default\": \"sqrt\"},\n          \"min_samples_split\": {\"_type\": \"int_uniform\", \"_value\": [2, 20],\"_default\": 2},\n          \"min_samples_leaf\": {\"_type\": \"int_uniform\", \"_value\": [1, 20],\"_default\": 1},\n          \"bootstrap\": {\"_type\": \"choice\",\"_value\": [True, False],\"_default\": True},\n          \"random_state\": 42\n        },\n        \"KNeighborsClassifier\": {\n          \"n_neighbors\": {\"_type\": \"int_loguniform\", \"_value\": [1,100],\"_default\": 3},\n          \"weights\" : {\"_type\": \"choice\", \"_value\": [\"uniform\", \"distance\"],\"_default\": \"uniform\"},\n          \"p\": {\"_type\": \"choice\", \"_value\": [1, 2],\"_default\": 2},\n        },\n    }\n}\n```\n\nAnd then, define a objective function with an additional parameter `budget` to adapt to **HyperBand**[\u003csup\u003e[6]\u003c/sup\u003e](#refer-6) evaluation strategy:\n\n\n\n ```python\nfrom sklearn.neighbors import KNeighborsClassifier\nimport numpy as np\ndef evaluate(config: dict, budget: float) -\u003e float:\n    layered_dict = layering_config(config)\n    AS_HP = layered_dict['classifier'].copy()\n    AS, HP = AS_HP.popitem()\n    ML_model = eval(AS)(**HP)\n    scores = []\n    for i, (train_ix, valid_ix) in enumerate(cv.split(X, y)):\n        rng = np.random.RandomState(i)\n        size = int(train_ix.size * budget)\n        train_ix = rng.choice(train_ix, size, replace=False)\n        X_train,y_train = X[train_ix, :],y[train_ix]\n        X_valid,y_valid = X[valid_ix, :],y[valid_ix]\n        ML_model.fit(X_train, y_train)\n        scores.append(ML_model.score(X_valid, y_valid))\n    score = np.mean(scores)\n    return 1 - score\n```\n\nYou should instance a 
`multi_fidelity_iter_generator` object in order to use the **HyperBand**[\u003csup\u003e[6]\u003c/sup\u003e](#refer-6) evaluation strategy:\n\n```python\nfrom ultraopt.multi_fidelity import HyperBandIterGenerator\nhb = HyperBandIterGenerator(min_budget=1/4, max_budget=1, eta=2)\nhb.get_table()\n```\n\n\n\n\u003ctable border=\"1\" class=\"dataframe\"\u003e\n  \u003cthead\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth colspan=\"3\" halign=\"left\"\u003eiter 0\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003eiter 1\u003c/th\u003e\n      \u003cth\u003eiter 2\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth\u003estage 0\u003c/th\u003e\n      \u003cth\u003estage 1\u003c/th\u003e\n      \u003cth\u003estage 2\u003c/th\u003e\n      \u003cth\u003estage 0\u003c/th\u003e\n      \u003cth\u003estage 1\u003c/th\u003e\n      \u003cth\u003estage 0\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003cth\u003enum_config\u003c/th\u003e\n      \u003ctd\u003e4\u003c/td\u003e\n      \u003ctd\u003e2\u003c/td\u003e\n      \u003ctd\u003e1\u003c/td\u003e\n      \u003ctd\u003e2\u003c/td\u003e\n      \u003ctd\u003e1\u003c/td\u003e\n      \u003ctd\u003e3\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003ebudget\u003c/th\u003e\n      \u003ctd\u003e1/4\u003c/td\u003e\n      \u003ctd\u003e1/2\u003c/td\u003e\n      \u003ctd\u003e1\u003c/td\u003e\n      \u003ctd\u003e1/2\u003c/td\u003e\n      \u003ctd\u003e1\u003c/td\u003e\n      \u003ctd\u003e1\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\nLet's combine the **HyperBand** evaluation strategy with `UltraOpt`'s **ETPE** optimizer, and then start an optimization process:\n\n\n```python\nresult = fmin(eval_func=evaluate, config_space=HDL, \n              optimizer=\"ETPE\", # using Bayesian optimizer: ETPE\n              
multi_fidelity_iter_generator=hb, # using HyperBand\n              n_jobs=3,         # 3 threads\n              n_iterations=20)\nresult\n```\n\n```\n100%|██████████| 88/88 [00:11\u003c00:00,  7.48trial/s, max budget: 1.0, best loss: 0.012]\n\n+--------------------------------------------------------------------------------------------------------------------------+\n| HyperParameters                                     | Optimal Value                                                      |\n+-----------------------------------------------------+----------------------+----------------------+----------------------+\n| classifier:__choice__                               | KNeighborsClassifier | KNeighborsClassifier | KNeighborsClassifier |\n| classifier:KNeighborsClassifier:n_neighbors         | 4                    | 1                    | 3                    |\n| classifier:KNeighborsClassifier:p                   | 2:int                | 2:int                | 2:int                |\n| classifier:KNeighborsClassifier:weights             | distance             | uniform              | uniform              |\n| classifier:RandomForestClassifier:bootstrap         | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:criterion         | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:max_features      | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:min_samples_leaf  | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:min_samples_split | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:n_estimators      | -                    | -                    | -                    |\n| classifier:RandomForestClassifier:random_state      | -                    | -                    | -               
     |\n+-----------------------------------------------------+----------------------+----------------------+----------------------+\n| Budgets                                             | 1/4                  | 1/2                  | 1 (max)              |\n+-----------------------------------------------------+----------------------+----------------------+----------------------+\n| Optimal Loss                                        | 0.0328               | 0.0178               | 0.0122               |\n+-----------------------------------------------------+----------------------+----------------------+----------------------+\n| Num Configs                                         | 28                   | 28                   | 32                   |\n+-----------------------------------------------------+----------------------+----------------------+----------------------+\n```\n\nYou can visualize the optimization process in `multi-fidelity` scenarios:\n\n```python\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (16, 12)\nplt.subplot(2, 2, 1)\nresult.plot_convergence_over_time();\nplt.subplot(2, 2, 2)\nresult.plot_concurrent_over_time(num_points=200);\nplt.subplot(2, 2, 3)\nresult.plot_finished_over_time();\nplt.subplot(2, 2, 4)\nresult.plot_correlation_across_budgets();\n```\n\n\n![quickstart2](https://img-blog.csdnimg.cn/20210110141724946.png)\n\n# Our Advantages\n\n## Advantage One: ETPE optimizer is more competitive\n\nWe implement 4 kinds of optimizers (listed in the table below). The `ETPE` optimizer is our original creation, which proved better than other TPE-based optimizers such as `HyperOpt`'s TPE and `HpBandSter`'s BOHB in our experiments.\n\nOur experimental code is publicly available [here](https://github.com/auto-flow/ultraopt/tree/main/experiments); the experimental documentation can be found [here](https://auto-flow.github.io/ultraopt/zh/experiments.html).\n\n|Optimizer|Description|\n|-----|---|\n|ETPE| Embedding-Tree-Parzen-Estimator, our original creation: it converts high-cardinality categorical variables to low-dimensional continuous variables based on the TPE algorithm, improves several other aspects, and proved better than `HyperOpt`'s TPE in our experiments. |\n|Forest| Bayesian optimization based on Random Forest. The surrogate model imports `scikit-optimize`'s `skopt.learning.forest` model and integrates the local search methods of `SMAC3`. |\n|GBRT| Bayesian optimization based on Gradient Boosted Regression Trees. The surrogate model imports `scikit-optimize`'s `skopt.learning.gbrt` model. |\n|Random| Random search, as a baseline or dummy model. |\n\n\nKey result figure from our experiments (see details in the [experimental documentation](https://auto-flow.github.io/ultraopt/zh/experiments.html)):\n\n![experiment](https://img-blog.csdnimg.cn/20210110141724952.png)\n\n## Advantage Two: UltraOpt is more adaptable to distributed computing\n\nYou can see this section in the documentation:\n\n- [Asynchronous Communication Parallel Strategy](https://auto-flow.github.io/ultraopt/zh/_tutorials/08._Asynchronous_Communication_Parallel_Strategy.html)\n\n- [MapReduce Parallel Strategy](https://auto-flow.github.io/ultraopt/zh/_tutorials/09._MapReduce_Parallel_Strategy.html)\n\n## Advantage Three: UltraOpt is more function comlete and user friendly\n\nUltraOpt is more feature-complete and user-friendly than other optimization libraries:\n\n\n|                                          | UltraOpt    | HyperOpt    |Scikit-Optimize|SMAC3        |HpBandSter   |\n|------------------------------------------|-------------|-------------|---------------|-------------|-------------|\n|Simple Usage like `fmin` function          |✓ |✓ |✓   |✓ |×|\n|Simple `Config Space` Definition           |✓ |✓ |✓   |×|×|\n|Support Conditional `Config Space`        |✓ |✓ |×  |✓ |✓ |\n|Support Serializable `Config Space`        |✓ |×|×  |×|×|\n|Support Visualizing `Config Space`         |✓ |✓ |×  
|×|×|\n|Can Analyse Optimization Process \u0026 Result |✓ |×|✓   |×|✓ |\n|Distributed in Cluster                    |✓ |✓ |×  |×|✓ |\n|Support HyperBand[\u003csup\u003e[6]\u003c/sup\u003e](#refer-6) \u0026 SuccessiveHalving[\u003csup\u003e[7]\u003c/sup\u003e](#refer-7)     |✓ |×|×  |✓ |✓ |\n\n\n\n\n# Citation\n\n```bibtex\n@misc{Tang_UltraOpt,\n    author       = {Qichun Tang},\n    title        = {UltraOpt : Distributed Asynchronous Hyperparameter Optimization better than HyperOpt},\n    month        = jan,\n    year         = 2021,\n    doi          = {10.5281/zenodo.4430148},\n    version      = {v0.1.0},\n    publisher    = {Zenodo},\n    url          = {https://doi.org/10.5281/zenodo.4430148}\n}\n```\n\n-----\n\n\u003cb id=\"referance\"\u003eReference\u003c/b\u003e\n\n\n\u003cdiv id=\"refer-1\"\u003e\u003c/div\u003e\n\n[1] [Thornton, Chris et al. “Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms.” Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2013).](https://arxiv.org/abs/1208.3719)\n\n\u003cdiv id=\"refer-2\"\u003e\u003c/div\u003e\n\n[2] [Falkner, Stefan et al. “BOHB: Robust and Efficient Hyperparameter Optimization at Scale.” ICML (2018).](https://arxiv.org/abs/1807.01774)\n\n\u003cdiv id=\"refer-3\"\u003e\u003c/div\u003e\n\n[3] [Hutter F., Hoos H.H., Leyton-Brown K. (2011) Sequential Model-Based Optimization for General Algorithm Configuration. In: Coello C.A.C. (eds) Learning and Intelligent Optimization. LION 2011. Lecture Notes in Computer Science, vol 6683. Springer, Berlin, Heidelberg.](https://link.springer.com/chapter/10.1007/978-3-642-25566-3_40)\n\n\u003cdiv id=\"refer-4\"\u003e\u003c/div\u003e\n\n[4] https://github.com/scikit-optimize/scikit-optimize\n\n\u003cdiv id=\"refer-5\"\u003e\u003c/div\u003e\n\n[5] [James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. 
In Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS'11). Curran Associates Inc., Red Hook, NY, USA, 2546–2554.](https://dl.acm.org/doi/10.5555/2986459.2986743)\n\n\u003cdiv id=\"refer-6\"\u003e\u003c/div\u003e\n\n[6] [Li, L. et al. “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” J. Mach. Learn. Res. 18 (2017): 185:1-185:52.](https://arxiv.org/abs/1603.06560)\n\n\u003cdiv id=\"refer-7\"\u003e\u003c/div\u003e\n\n[7] [Jamieson, K. and Ameet Talwalkar. “Non-stochastic Best Arm Identification and Hyperparameter Optimization.” AISTATS (2016).](https://arxiv.org/abs/1502.07943)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fauto-flow%2Fultraopt","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fauto-flow%2Fultraopt","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fauto-flow%2Fultraopt/lists"}