# BBopt

[![Join the chat at https://gitter.im/evhub/bbopt](https://badges.gitter.im/evhub/bbopt.svg)](https://gitter.im/evhub/bbopt?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![DOI](https://zenodo.org/badge/102327504.svg)](https://zenodo.org/badge/latestdoi/102327504)

BBopt aims to provide the easiest hyperparameter optimization you'll ever do.
Think of BBopt like [Keras](https://keras.io/) (back when Theano was still a thing) for black box optimization: one universal interface for working with any black box optimization backend.

BBopt's features include:
- a universal API for defining your tunable parameters based on the standard library [`random`](https://docs.python.org/3.6/library/random.html) module (so you don't even have to learn anything new!),
- tons of state-of-the-art black box optimization algorithms such as Gaussian Processes from [`scikit-optimize`](https://scikit-optimize.github.io/) or Tree-Structured Parzen Estimation from [`hyperopt`](http://hyperopt.github.io/hyperopt/) for tuning parameters,
- the ability to switch algorithms while retaining all previous trials and even dynamically choose the best algorithm for your use case,
- multiprocessing-safe data saving to enable running multiple trials in parallel,
- lots of data visualization methods, including support for everything in [`skopt.plots`](https://scikit-optimize.github.io/plots.m.html),
- support for optimizing over conditional parameters that only appear during some runs,
- support for all major Python versions (`2.7` or `3.6+`), and
- a straightforward interface for [extending BBopt with your own custom algorithms](#writing-your-own-backend).

Once you've defined your parameters, training a black box optimization model on those parameters is as simple as
```
bbopt your_file.py
```
and serving your file with optimized parameters is as easy as
```python
import your_file
```

_Questions? Head over to [BBopt's Gitter](https://gitter.im/evhub/bbopt) if you have any questions/comments/etc.
regarding BBopt._

## Installation

To get going with BBopt, simply install it with
```
pip install bbopt
```
or, to also install the extra dependencies necessary for running BBopt's examples, run `pip install bbopt[examples]`.

## Basic Usage

To use BBopt, just add
```python
# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(file=__file__)
if __name__ == "__main__":
    bb.run()
```
to the top of your file, then call a [`random`](https://docs.python.org/3.6/library/random.html) method like
```python
x = bb.uniform("x", 0, 1)
```
for each of the tunable parameters in your model, and finally add
```python
bb.maximize(y)  # or: bb.minimize(y)
```
to set the value being optimized. Then, run
```
bbopt <your file here> -n <number of trials> -j <number of processes>
```
to train your model, and just
```
import <your module here>
```
to serve it!

_Note: Neither `__file__` nor `__name__` is available in Jupyter notebooks.
In that case, just set up BBopt with:_
```python
import os

# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(data_dir=os.getcwd(), data_name="my_project_name")
```

## Examples

Some examples of BBopt in action:

- [`random_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/random_example.py): Extremely basic example using the `random` backend.
- [`skopt_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/skopt_example.py): Slightly more complex example making use of the `gaussian_process` algorithm from the `scikit-optimize` backend.
- [`hyperopt_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/hyperopt_example.py): Example showcasing the `tree_structured_parzen_estimator` algorithm from the `hyperopt` backend.
- [`meta_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/meta_example.py): Example of using **run_meta** to dynamically choose an algorithm.
- [`numpy_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/numpy_example.py): Example showcasing how to use numpy array parameters.
- [`conditional_skopt_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/conditional_skopt_example.py): Example of having black box parameters that are dependent on other black box parameters using the `gaussian_process` algorithm from the `scikit-optimize` backend.
- [`conditional_hyperopt_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/conditional_hyperopt_example.py): Example of doing conditional parameters with the `tree_structured_parzen_estimator` algorithm from the `hyperopt` backend.
- [`bask_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/bask_example.py): Example of using conditional parameters with a semi-random target using the `bask_gp` algorithm from the `bayes-skopt` backend.
- [`pysot_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/pysot_example.py): Example of using the full API to implement an optimization loop and avoid the overhead of running the entire file multiple times while making use of the `pySOT` backend.
- [`keras_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py): Complete example of using BBopt to optimize a neural network built with [Keras](https://keras.io/). Uses the full API to implement its own optimization loop and thus avoid the overhead of running the entire file multiple times.
- [`any_fast_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/any_fast_example.py): Example of using the default algorithm `"any_fast"` to dynamically select a good backend.
- [`mixture_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/mixture_example.py): Example of using the `mixture` backend to randomly switch between different algorithms.
- [`json_example.py`](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/json_example.py): Example of using `json` instead of `pickle` to save parameters.

## Full API

<!-- MarkdownTOC -->

- [BBopt](#bbopt)
  - [Installation](#installation)
  - [Basic Usage](#basic-usage)
  - [Examples](#examples)
  - [Full API](#full-api)
    - [Command-Line Interface](#command-line-interface)
    - [Black Box Optimization Methods](#black-box-optimization-methods)
      - [Constructor](#constructor)
      - [`run`](#run)
      - [`algs`](#algs)
      - [`run_meta`](#run_meta)
      - [`run_backend`](#run_backend)
      - [`minimize`](#minimize)
      - [`maximize`](#maximize)
      - [`remember`](#remember)
      - [`plot_convergence`](#plot_convergence)
      - [`plot_history`](#plot_history)
      - [`partial_dependence`](#partial_dependence)
      - [`plot_partial_dependence_1D`](#plot_partial_dependence_1d)
      - [`plot_evaluations`](#plot_evaluations)
      - [`plot_objective`](#plot_objective)
      - [`plot_regret`](#plot_regret)
      - [`get_skopt_result`](#get_skopt_result)
      - [`get_current_run`](#get_current_run)
      - [`get_best_run`](#get_best_run)
      - [`get_data`](#get_data)
      - [`data_file`](#data_file)
      - [`is_serving`](#is_serving)
      - [`tell_examples`](#tell_examples)
      - [`backend`](#backend)
      - [`run_id`](#run_id)
    - [Parameter Definition Methods](#parameter-definition-methods)
      - [`randrange`](#randrange)
      - [`randint`](#randint)
      - [`getrandbits`](#getrandbits)
      - [`choice`](#choice)
      - [`randbool`](#randbool)
      - [`sample`](#sample)
      - [`shuffle`](#shuffle)
      - [`random`](#random)
      - [`uniform`](#uniform)
      - [`loguniform`](#loguniform)
      - [`normalvariate`](#normalvariate)
      - [`lognormvariate`](#lognormvariate)
      - [`rand`](#rand)
      - [`randn`](#randn)
      - [`param`](#param)
    - [Writing Your Own Backend](#writing-your-own-backend)

<!-- /MarkdownTOC -->

### Command-Line Interface

The `bbopt` command is extremely simple in terms of what it actually does. For the command `bbopt <file> -n <trials> -j <processes>`, BBopt simply runs `python <file>` a number of times equal to `<trials>`, split across `<processes>` different processes.

Why does this work? If you're using the basic boilerplate, then running `python <file>` will trigger the `if __name__ == "__main__":` clause, which will run a training episode. But when you `import` your file, the `if __name__ == "__main__":` clause won't be triggered, and you'll just get served the best parameters found so far. Since the command-line interface is so simple, advanced users who want to use the full API instead of the boilerplate need not use the `bbopt` command at all.
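As a rough sketch of the behavior described above (and not BBopt's actual implementation), the sequential part of the command could look like the following, where `run_trials` is a hypothetical helper and the `-j` process-level parallelism is omitted:

```python
import subprocess
import sys

def run_trials(file, trials):
    # Roughly what `bbopt <file> -n <trials>` does: run `python <file>`
    # once per trial; each run triggers the `if __name__ == "__main__":`
    # boilerplate and thus records one training episode.
    for _ in range(trials):
        subprocess.run([sys.executable, file], check=True)
```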
If you want more information on the `bbopt` command, just run `bbopt -h`.

### Black Box Optimization Methods

#### Constructor

**BlackBoxOptimizer**(_file_, *, _tag_=`None`, _protocol_=`None`)

**BlackBoxOptimizer**(_data\_dir_, _data\_name_, *, _tag_=`None`, _protocol_=`None`)

Create a new `bb` object; this should be done at the beginning of your program, since all the other functions are methods of this object.

_file_ is used by BBopt to figure out where to load and save data, and should usually just be set to `__file__`. _tag_ allows additional customization of the BBopt data file for when multiple BBopt instances might be desired for the same file. Specifically, BBopt will save data to `os.path.splitext(file)[0] + "_" + tag + extension`.

Alternatively, _data\_dir_ and _data\_name_ can be used to specify where to save and load data. In that case, BBopt will save data to `os.path.join(data_dir, data_name + extension)` if no _tag_ is passed, or `os.path.join(data_dir, data_name + "_" + tag + extension)` if a _tag_ is given.

_protocol_ determines how BBopt serializes data. If `None` (the default), BBopt will use pickle protocol 2, which is the highest version that works on both Python 2 and Python 3 (unless a `json` file is present, in which case BBopt will use `json`). To use the newest protocol instead, pass `protocol=-1`. If `protocol="json"`, BBopt will use `json` instead of `pickle`, which is occasionally useful if you want to access your data outside of Python.

#### `run`

BlackBoxOptimizer.**run**(_alg_=`"any_fast"`)

Start optimizing using the given black box optimization algorithm. Use **algs** to get the valid values for _alg_.

If this method is never called, or called with `alg="serving"`, BBopt will just serve the best parameters found so far, which is how the basic boilerplate works.
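The constructor's file-naming rules above can be sketched with a small stdlib helper. This is a hypothetical illustration: `bbopt_data_path` is not a real BBopt function, the no-tag behavior for _file_ is inferred from the tagged rule, and the actual extension depends on the serialization protocol, so it is left as a parameter:

```python
import os

def bbopt_data_path(file=None, data_dir=None, data_name=None,
                    tag=None, extension=".pickle"):
    # Either `file` or (`data_dir`, `data_name`) determines the base path;
    # a `tag`, if given, is appended with an underscore before the extension.
    if file is not None:
        base = os.path.splitext(file)[0]
    else:
        base = os.path.join(data_dir, data_name)
    if tag is not None:
        base += "_" + tag
    return base + extension
```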
Note that, if no saved parameter data is found and a _guess_ is present, BBopt will use that, which is a good way of distributing your parameter values without including all your saved parameter data.

#### `algs`

BlackBoxOptimizer.**algs**

A dictionary mapping each valid algorithm for use in **run** to the pair `(backend, kwargs)` of the backend and the arguments to that backend that the algorithm corresponds to.

Supported algorithms are:
- `"serving"` (`serving` backend) (used if **run** is never called)
- `"random"` (`random` backend)
- `"tree_structured_parzen_estimator"` (`hyperopt` backend)
- `"adaptive_tpe"` (`hyperopt` backend; Python 3+ only)
- `"annealing"` (`hyperopt` backend)
- `"gaussian_process"` (`scikit-optimize` backend)
- `"random_forest"` (`scikit-optimize` backend)
- `"extra_trees"` (`scikit-optimize` backend)
- `"gradient_boosted_regression_trees"` (`scikit-optimize` backend)
- `"bask_gaussian_process"` (`bayes-skopt` backend)
- `"stochastic_radial_basis_function"` (`pySOT` backend)
- `"expected_improvement"` (`pySOT` backend)
- `"DYCORS"` (`pySOT` backend)
- `"lower_confidence_bound"` (`pySOT` backend)
- `"latin_hypercube"` (`pySOT` backend)
- `"symmetric_latin_hypercube"` (`pySOT` backend)
- `"two_factorial"` (`pySOT` backend)
- `"epsilon_max_greedy"` (`mixture` backend)
- `"epsilon_greedy"` (`bandit` backend)
- `"boltzmann_exploration"` (`bandit` backend)
- `"boltzmann_gumbel_exploration"` (`bandit` backend) (the default _meta\_alg_ in **run_meta**)
- `"openai"` (`openai` backend)

Additionally, there are some algorithms of the form `safe_<other_alg>` which use `mixture` to defer to `<other_alg>` if `<other_alg>` supports the parameter definition functions you're using, and otherwise default to a suitable replacement.

**algs** also includes the following pseudo-algorithms which defer to **run_meta**:
- `"any_fast"` (same as calling **run_meta** with a suite of algorithms selected for their speed, except that some algorithms are ignored if unsupported parameter definition functions are used, e.g. `normalvariate` for `scikit-optimize`) (used if **run** is called with no args)
- `"any_hyperopt"` (equivalent to calling **run_meta** with all `hyperopt` algorithms)
- `"any_skopt"` (equivalent to calling **run_meta** with all `scikit-optimize` algorithms)
- `"any_pysot"` (equivalent to calling **run_meta** with all `pySOT` algorithms)

_Note: The `bayes-skopt` backend is only available on Python 3.7+, and the `pySOT` and `openai` backends are only available on Python 3+._

#### `run_meta`

BlackBoxOptimizer.**run_meta**(_algs_, _meta\_alg_=`"boltzmann_gumbel_exploration"`)

**run_meta** is a special version of **run** that uses the _meta\_alg_ algorithm to dynamically pick an algorithm from among the given _algs_. Both _algs_ and _meta\_alg_ can use any algorithms in **algs**.

#### `run_backend`

BlackBoxOptimizer.**run_backend**(_backend_, *_args_, **_kwargs_)

The base function behind **run**. Instead of specifying an algorithm, **run_backend** lets you specify the specific backend you want to call and the parameters you want to call it with.
Different backends do different things with the remaining arguments:

- `scikit-optimize` passes the arguments to [`skopt.Optimizer`](https://scikit-optimize.github.io/#skopt.Optimizer),
- `hyperopt` passes the arguments to [`fmin`](https://github.com/hyperopt/hyperopt/wiki/FMin),
- `mixture` expects a `distribution` argument to specify the mixture of different algorithms to use, specifically a list of `(alg, weight)` tuples (and also admits a `remove_erroring_algs` bool to automatically remove erroring algorithms),
- `bayes-skopt` passes the arguments to [`bask.Optimizer`](https://github.com/kiudee/bayes-skopt/blob/master/bask/optimizer.py#L35),
- `pySOT` expects a `strategy` (either a strategy class or one of `"SRBF", "EI", "DYCORS", "LCB"`), a `surrogate` (either a surrogate class or one of `"RBF", "GP"`), and a `design` (either an experimental design class or one of `None, "latin_hypercube", "symmetric_latin_hypercube", "two_factorial"`), and
- `openai` expects an `engine` (the name of the model to use), `temperature`, `max_retries`, and `api_key` (otherwise uses the `OPENAI_API_KEY` env var).

_Note: The `bayes-skopt` backend is only available on Python 3.7+, and the `pySOT` and `openai` backends are only available on Python 3+._

#### `minimize`

BlackBoxOptimizer.**minimize**(_value_)

Finish optimizing and set the loss for this run to _value_. To start another run, call **run** again.

#### `maximize`

BlackBoxOptimizer.**maximize**(_value_)

Same as **minimize** but sets the gain instead of the loss.

#### `remember`

BlackBoxOptimizer.**remember**(_info_)

Update the current run's `"memo"` field with the given _info_ dictionary.
Useful for saving information about a run that shouldn't actually impact optimization but that you would like to have access to later (using **get_best_run**, for example).

#### `plot_convergence`

BlackBoxOptimizer.**plot_convergence**(_ax_=`None`, _yscale_=`None`)

Plot the running best gain/loss over the course of all previous trials. If passed, `ax` should be the [matplotlib axis](https://matplotlib.org/api/axes_api.html) to plot on and `yscale` should be the scale for the y axis.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `plot_history`

BlackBoxOptimizer.**plot_history**(_ax_=`None`, _yscale_=`None`)

Plot the gain/loss at each point over the course of all previous trials. If passed, `ax` should be the [matplotlib axis](https://matplotlib.org/api/axes_api.html) to plot on and `yscale` should be the scale for the y axis.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `partial_dependence`

BlackBoxOptimizer.**partial_dependence**(_i\_name_, _j\_name_=`None`, _sample\_points_=`None`, _n\_samples_=`250`, _n\_points_=`40`)

Calls [`skopt.plots.partial_dependence`](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.partial_dependence.html) using previous trial data. The parameters _i\_name_ and _j\_name_ should be set to names of the parameters you want for the _i_ and _j_ arguments to `skopt.plots.partial_dependence`.

#### `plot_partial_dependence_1D`

BlackBoxOptimizer.**plot_partial_dependence_1D**(_i\_name_, _ax_=`None`, _yscale_=`None`, _sample\_points_=`None`, _n\_samples_=`250`, _n\_points_=`40`)

Plot the partial dependence of _i\_name_ on the given [matplotlib axis](https://matplotlib.org/api/axes_api.html) `ax` and with the given y axis scale `yscale`.
See **partial_dependence** for the meaning of the other parameters.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `plot_evaluations`

BlackBoxOptimizer.**plot_evaluations**(_bins_=`20`)

Calls [`skopt.plots.plot_evaluations`](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.plot_evaluations.html) using previous trial data.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `plot_objective`

BlackBoxOptimizer.**plot_objective**(_levels_=`10`, _n\_points_=`40`, _n\_samples_=`250`, _size_=`2`, _zscale_=`"linear"`)

Calls [`skopt.plots.plot_objective`](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.plot_objective.html) using previous trial data.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `plot_regret`

BlackBoxOptimizer.**plot_regret**([_ax_, [_true\_minimum_, [_yscale_]]])

Calls [`skopt.plots.plot_regret`](https://scikit-optimize.github.io/stable/modules/generated/skopt.plots.plot_regret.html) using previous trial data.

Run BBopt's [`keras` example](https://github.com/evhub/bbopt/blob/master/bbopt-source/examples/keras_example.py) to generate an example plot.

#### `get_skopt_result`

BlackBoxOptimizer.**get_skopt_result**()

Gets an `OptimizeResult` object usable by [`skopt.plots`](https://scikit-optimize.github.io/stable/modules/classes.html#module-skopt.plots) functions.
Allows for arbitrary manipulation of BBopt optimization results in `scikit-optimize`, including any plotting functions not natively supported by BBopt.

#### `get_current_run`

BlackBoxOptimizer.**get_current_run**()

Get information on the current run, including the values of all parameters encountered so far and the loss/gain of the run, if already set.

#### `get_best_run`

BlackBoxOptimizer.**get_best_run**()

Get information on the best run so far. These are the parameters that will be used if **run** is not called.

#### `get_data`

BlackBoxOptimizer.**get_data**(_print\_data_=`False`)

Dump a dictionary containing `"params"`—the parameters BBopt knows about and the random function and arguments they were initialized with—and `"examples"`—all the previous data BBopt has collected. If _print\_data_ is true, pretty-prints the data in addition to returning it.

#### `data_file`

BlackBoxOptimizer.**data_file**

The path of the file where BBopt saves data.

#### `is_serving`

BlackBoxOptimizer.**is_serving**

Whether BBopt is currently using the `"serving"` algorithm.

#### `tell_examples`

BlackBoxOptimizer.**tell_examples**(_examples_)

Add the given _examples_ (in the same format as **get_data**) to memory, writing the new data to **data_file**.
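For illustration only, the `"params"`/`"examples"` layout described for **get_data** plausibly looks something like the following; the exact field names inside each entry are guesses based on the description above, not BBopt's real schema:

```python
# Hypothetical shape of the dictionary described above, for illustration:
data = {
    "params": {
        # parameter name -> how it was defined (random function, args)
        "x": ("uniform", (0, 1)),
    },
    "examples": [
        # one entry per previous trial: the chosen values and the resulting loss
        {"values": {"x": 0.42}, "loss": 1.7},
    ],
}
```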
Must come before **run** if you want the new data to be included in the model for that run.

#### `backend`

BlackBoxOptimizer.**backend**

The backend object being used by the current BlackBoxOptimizer instance.

#### `run_id`

BlackBoxOptimizer.**run_id**

The id of the current run, if started by the BBopt command-line interface.

### Parameter Definition Methods

Every BBopt parameter definition method has the form
```
bb.<random function>(<name>, <args>, **kwargs)
```
where

- the method itself specifies what distribution is being modeled,
- the first argument is always _name_, a unique string identifying that parameter,
- following _name_ are whatever arguments are needed to specify the distribution's parameters, and
- at the end are keyword arguments, which are the same for all the different methods. The supported _kwargs_ are:
    + _guess_, which specifies the initial value for the parameter, and
    + _placeholder\_when\_missing_, which specifies what placeholder value a conditional parameter should be given if missing.

_Important note: Once you bind a name to a parameter, you cannot change that parameter's options.
Thus, if the options defining your parameters can vary from run to run, you must use a different name for each possible combination._

#### `randrange`

BlackBoxOptimizer.**randrange**(_name_, _stop_, **_kwargs_)

BlackBoxOptimizer.**randrange**(_name_, _start_, _stop_, _step_=`1`, **_kwargs_)

Create a new parameter modeled by [`random.randrange(start, stop, step)`](https://docs.python.org/3/library/random.html#random.randrange).

_Backends which support **randrange**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `randint`

BlackBoxOptimizer.**randint**(_name_, _a_, _b_, **_kwargs_)

Create a new parameter modeled by [`random.randint(a, b)`](https://docs.python.org/3/library/random.html#random.randint), which is equivalent to `random.randrange(a, b + 1)` (both endpoints are inclusive).

_Backends which support **randint**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `getrandbits`

BlackBoxOptimizer.**getrandbits**(_name_, _k_, **_kwargs_)

Create a new parameter modeled by [`random.getrandbits(k)`](https://docs.python.org/3/library/random.html#random.getrandbits), which is equivalent to `random.randrange(0, 2**k)`.

_Backends which support **getrandbits**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `choice`

BlackBoxOptimizer.**choice**(_name_, _seq_, **_kwargs_)

Create a new parameter modeled by [`random.choice(seq)`](https://docs.python.org/3/library/random.html#random.choice), which chooses an element from _seq_.

_Backends which support **choice**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `random`._

#### `randbool`

BlackBoxOptimizer.**randbool**(_name_, **_kwargs_)

Create a new boolean parameter, modeled by the equivalent of `random.choice([False, True])`.

_Backends which support **randbool**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `random`._

#### `sample`

BlackBoxOptimizer.**sample**(_name_, _population_, _k_, **_kwargs_)

Create a new parameter modeled by [`random.sample(population, k)`](https://docs.python.org/3/library/random.html#random.sample), which chooses _k_ elements from _population_.

By default, the ordering of elements in the result is random. If random ordering is not important and you're happy to have the same ordering as in _population_, `BlackBoxOptimizer.unshuffled_sample` is recommended instead.

_Backends which support **sample**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `random`._

#### `shuffle`

BlackBoxOptimizer.**shuffle**(_name_, _population_, **_kwargs_)

Create a new parameter modeled by [`random.shuffle(population)`](https://docs.python.org/3/library/random.html#random.shuffle). A version that returns the shuffled list instead of shuffling it in place is also supported as `BlackBoxOptimizer.shuffled`.

_Backends which support **shuffle**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `random`._

#### `random`

BlackBoxOptimizer.**random**(_name_, **_kwargs_)

Create a new parameter modeled by [`random.random()`](https://docs.python.org/3/library/random.html#random.random), which is equivalent to `random.uniform(0, 1)` except that `1` is disallowed.

_Backends which support **random**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `uniform`

BlackBoxOptimizer.**uniform**(_name_, _a_, _b_, **_kwargs_)

Create a new parameter modeled by [`random.uniform(a, b)`](https://docs.python.org/3/library/random.html#random.uniform), which uniformly selects a float in the range \[_a_, _b_\].

_Backends which support **uniform**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `loguniform`

BlackBoxOptimizer.**loguniform**(_name_, _min\_val_, _max\_val_, **_kwargs_)

Create a new parameter modeled by
```python
math.exp(random.uniform(math.log(min_val), math.log(max_val)))
```
which logarithmically
selects a float between _min\_val_ and _max\_val_.

_Backends which support **loguniform**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `normalvariate`

BlackBoxOptimizer.**normalvariate**(_name_, _mu_, _sigma_, **_kwargs_)

Create a new parameter modeled by [`random.normalvariate(mu, sigma)`](https://docs.python.org/3/library/random.html#random.normalvariate).

A shortcut for the standard normal distribution is also available via `BlackBoxOptimizer.stdnormal`.

_Backends which support **normalvariate**: `hyperopt`, `openai`, `random`._

#### `lognormvariate`

BlackBoxOptimizer.**lognormvariate**(_name_, _mu_, _sigma_, **_kwargs_)

Create a new parameter modeled by [`random.lognormvariate(mu, sigma)`](https://docs.python.org/3/library/random.html#random.lognormvariate) such that the natural log is a normal distribution with mean _mu_ and standard deviation _sigma_.

_Backends which support **lognormvariate**: `hyperopt`, `openai`, `random`._

#### `rand`

BlackBoxOptimizer.**rand**(_name_, *_shape_, **_kwargs_)

Create a new parameter modeled by [`numpy.random.rand(*shape)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.rand.html#numpy.random.rand), which creates a `numpy` array of the given shape with entries generated uniformly in `[0, 1)`.

_Backends which support **rand**: `scikit-optimize`, `hyperopt`, `bayes-skopt`, `pySOT`, `openai`, `random`._

#### `randn`

BlackBoxOptimizer.**randn**(_name_, *_shape_, **_kwargs_)

Create a new parameter modeled by [`numpy.random.randn(*shape)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html#numpy-random-randn), which creates a `numpy` array of the given shape with entries generated according to a standard normal distribution.

_Backends which support **randn**: `hyperopt`, `openai`, `random`._

#### `param`

BlackBoxOptimizer.**param**(_name_, _func_, *_args_, **_kwargs_)

Create a new
parameter modeled by the parameter definition function _func_ with the given arguments. This function is mostly useful if you want to use a custom backend that implements parameter definition functions not included in BBopt by default.

### Writing Your Own Backend

BBopt's backend system is built to be extremely extensible, allowing anyone to write and register their own BBopt backends. The basic template for writing a BBopt backend is as follows:
```python
from bbopt.backends.util import StandardBackend

class MyBackend(StandardBackend):
    backend_name = "my-backend"
    implemented_funcs = [
        # list the random functions you support here
        #  (you don't need to include all random functions,
        #  only base random functions, primarily randrange,
        #  choice, uniform, and normalvariate)
        ...,
    ]

    def setup_backend(self, params, **options):
        # initialize your backend; you can use params
        #  to get the args for each param
        ...

    def tell_data(self, new_data, new_losses):
        # load new data points into your backend; new_data is
        #  a list of dictionaries containing data and new_losses
        #  is a list of losses for each of those data points
        ...

    def get_next_values(self):
        # return the values you want to use for this run as a dict
        ...

MyBackend.register()
MyBackend.register_alg("my_alg")
```

Once you've written a BBopt backend as above, you simply need to import it to trigger the `register` calls and enable it to be used in BBopt.
For some example BBopt backends, see BBopt's default backends (written in [Coconut](http://coconut-lang.org/)):

- [`random.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/random.coco)
- [`skopt.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/skopt.coco)
- [`bask.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/bask.coco)
- [`hyperopt.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/hyperopt.coco)
- [`pysot.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/pysot.coco)
- [`serving.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/serving.coco)
- [`mixture.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/mixture.coco)
- [`bandit.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/bandit.coco)
- [`openai.coco`](https://github.com/evhub/bbopt/blob/master/bbopt-source/backends/openai.coco)