{"id":34094121,"url":"https://github.com/metaevo/metabox","last_synced_at":"2026-04-02T01:01:29.307Z","repository":{"id":151825457,"uuid":"619144746","full_name":"MetaEvo/MetaBox","owner":"MetaEvo","description":"MetaBox: Benchmarking Platform for Meta-Black-Box Optimization ","archived":false,"fork":false,"pushed_at":"2025-10-10T13:13:12.000Z","size":459134,"stargazers_count":155,"open_issues_count":1,"forks_count":16,"subscribers_count":2,"default_branch":"v2.0.0","last_synced_at":"2025-12-16T18:40:04.501Z","etag":null,"topics":["benchmark-platform","black-box-optimization","deep-reinforcement-learning","evolutionary-algorithms","hyperparameter-optimization","learning-to-optimize","meta-black-box-optimization","protein-protein-docking","real-parameter-optimization"],"latest_commit_sha":null,"homepage":"https://metaboxdoc.readthedocs.io/en/stable/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MetaEvo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-03-26T12:04:26.000Z","updated_at":"2025-12-16T04:10:18.000Z","dependencies_parsed_at":"2023-10-02T12:59:33.929Z","dependency_job_id":"1e9472be-69f1-4ea7-800d-05ed835bd823","html_url":"https://github.com/MetaEvo/MetaBox","commit_stats":null,"previous_names":["metaevo/metabox","gmc-drl/metabox"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/MetaEvo/MetaBox","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/MetaEvo%2FMetaBox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MetaEvo%2FMetaBox/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MetaEvo%2FMetaBox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MetaEvo%2FMetaBox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MetaEvo","download_url":"https://codeload.github.com/MetaEvo/MetaBox/tar.gz/refs/heads/v2.0.0","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MetaEvo%2FMetaBox/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31293631,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T21:15:39.731Z","status":"ssl_error","status_checked_at":"2026-04-01T21:15:34.046Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark-platform","black-box-optimization","deep-reinforcement-learning","evolutionary-algorithms","hyperparameter-optimization","learning-to-optimize","meta-black-box-optimization","protein-protein-docking","real-parameter-optimization"],"created_at":"2025-12-14T15:00:41.863Z","updated_at":"2026-04-02T01:01:29.275Z","avatar_url":"https://github.com/MetaEvo.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: 
light)\" srcset=\"https://github.com/MetaEvo/MetaBox/blob/v2.0.0/docs/source/_static/MetaBOX-title.png\"\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://github.com/MetaEvo/MetaBox/blob/v2.0.0/docs/source/_static/MetaBOX-title.png\"\u003e\n    \u003cimg alt=\"MetaBox Logo\" src=\"https://github.com/MetaEvo/MetaBox/blob/v2.0.0/docs/source/_static/MetaBOX-title.png\" width=\"600px\"\u003e\n  \u003c/picture\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n    \u003ca href=\"https://nips.cc/virtual/2023/poster/73497\"\u003e\u003cimg src=\"https://img.shields.io/badge/NeurIPS-2023-b31b1b?logo=files\u0026logoColor=white\" alt=\"NeurIPS 2023\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://arxiv.org/abs/2505.17745\"\u003e\u003cimg src=\"https://img.shields.io/badge/arXiv-2311.02708-b31b1b?logo=arxiv\u0026logoColor=white\" alt=\"arXiv\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://metaboxdoc.readthedocs.io/en/stable/index.html\"\u003e\u003cimg src=\"https://img.shields.io/badge/docs-passing-brightgreen?logo=read-the-docs\u0026logoColor=white\" alt=\"Documentation\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/metaevo/metabox\"\u003e\u003cimg src=\"https://visitor-badge.laobi.icu/badge?page_id=metaevo.metabox\" alt=\"visitors\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/metaevobox/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/v/metaevobox?logo=pypi\u0026label=PyPI\" alt=\"PyPI Version\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pepy.tech/projects/metaevobox?timeRange=threeMonths\u0026category=version\u0026includeCIDownloads=true\u0026granularity=daily\u0026viewType=line\u0026versions=2.0.1%2C2.0.0.5%2C2.0.0.4\"\u003e\u003cimg src=\"https://static.pepy.tech/badge/metaevobox?logo=pypi\" alt=\"PyPI Downloads\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://qm.qq.com/q/NKwtJ21qyA\"\u003e\u003cimg 
src=\"https://img.shields.io/badge/QQ%20Group-952185139-07C160?logo=tencent-qq\u0026logoColor=white\" alt=\"QQ Group\"\u003e\u003c/a\u003e\n\u003c/div\u003e\n\u003ch2 align=\"center\" style=\"font-size: 1.2em; font-weight: normal; margin-top: 20px;\"\u003e\n  Benchmarking Meta-Black-Box Optimization under\u003cbr\u003e\n  Diverse Optimization Scenarios with Efficiency and Flexibility\n\u003c/h2\u003e\n\n**MetaBox-v1 has been accepted as an oral presentation at NeurIPS 2023!**\n\n**MetaBox-v2 has been accepted as a poster at NeurIPS 2025!**\n\n😀The [Online Documentation](https://metaboxdoc.readthedocs.io/en/stable/index.html) is here; you can get started quickly!😀\n\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://github.com/MetaEvo/MetaBox/blob/v2.0.0/docs/source/_static/MetaBOX-features.png\" width=\"99%\"\u003e\n\u003c/div\u003e\n\nWe propose MetaBox 2.0 (MetaBox-v2) as a major upgrade of [MetaBox-v1](https://github.com/MetaEvo/MetaBox/tree/v1.0.0). MetaBox-v2 now supports a wide range of optimization scenarios, embracing users from single-objective, multi-objective, multi-modal, multi-task optimization, and more. Correspondingly, **18 optimization problem sets** (synthetic + realistic), **1900+ problem instances** and **36 baseline methods** (traditional optimizers + up-to-date MetaBBOs) are reproduced within MetaBox-v2 to support various research ideas and comprehensive comparisons. To address MetaBBO's inherent efficiency issue, we have optimized the low-level implementation of MetaBox-v2 to support parallel meta-training and evaluation, which reduces the running cost from days to hours. More importantly, we have restructured MetaBox-v2's source code for **sufficient development flexibility**, with clear and sound tutorials to match. Enjoy your journey of learning and using MetaBBO from here!   
\n\n\n## Quick Start\n### Installation\n\n\u003e [!Important]\n\u003e Below we install a CPU-only build of torch for you; if you need any other version, \\\n\u003e see [torch](https://pytorch.org/get-started) and replace the corresponding installation command below.\n\n```bash\n## create a venv\nconda create -n metaevobox_env python=3.11.5 -y\nconda activate metaevobox_env\n## install pytorch\npip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu\n## install metabox\npip install metaevobox\n```\n### Common Usage\n\n\u003e [!Important]\n\u003e The following code is specific to Linux.\n\u003e If you are using Windows, please guard the entry point with ```if __name__ == \"__main__\":```\n\n#### Train a MetaBBO baseline\nCreate your_dir, create a your_train.py file inside it, and write the following code into your_train.py.\n```python\nfrom metaevobox import Config, Trainer\n# import the meta-level agent of the MetaBBO you want to meta-train\nfrom metaevobox.baseline.metabbo import GLEET\n# import the low-level BBO optimizer of the MetaBBO you want to meta-train\nfrom metaevobox.environment.optimizer import GLEET_Optimizer\nfrom metaevobox.environment.problem.utils import construct_problem_set\n\n# user-specific configuration\nconfig = {'train_problem': 'bbob-10D', # specify the problem set to train your MetaBBO on\n          'train_batch_size': 16,\n          'train_parallel_mode':'subproc', # choose parallel training mode\n          }\nconfig = Config(config)\n# construct dataset\nconfig, datasets = construct_problem_set(config)\n# initialize your MetaBBO's meta-level agent \u0026 low-level optimizer\ngleet = GLEET(config)\ngleet_opt = GLEET_Optimizer(config)\ntrainer = Trainer(config, gleet, gleet_opt, datasets)\ntrainer.train()\n```\nTo view visualized training-progress information, run the following commands to start the training logger.\n```bash\ncd your_dir/output/tensorboard\ntensorboard 
--logdir=./\n```\n\n#### Test BBO/MetaBBO baselines\n```python\nfrom metaevobox import Config, Tester, get_baseline\n# import the meta-level agent of the MetaBBO you want to test\nfrom metaevobox.baseline.metabbo import GLEET\n# import the low-level BBO optimizer of the MetaBBO you want to test\nfrom metaevobox.environment.optimizer import GLEET_Optimizer\n# import other baselines you want to compare with your MetaBBO\nfrom metaevobox.baseline.bbo import CMAES, SHADE\nfrom metaevobox.environment.problem.utils import construct_problem_set\n\n# specify your configuration\nconfig = {\n    'test_problem':'bbob-10D', # specify the problem set you want to benchmark\n    'test_batch_size':16,\n    'test_difficulty':'difficult', # this is a train-test split mode\n    'baselines':{\n        # your MetaBBO\n        'GLEET':{\n            'agent': 'GLEET',\n            'optimizer': GLEET_Optimizer,\n            'model_load_path': None, # defaults to None; a built-in pre-trained checkpoint will be loaded for you.\n        },\n\n        # Other baselines to compare\n        'SHADE':{'optimizer': SHADE},\n        'CMAES':{'optimizer':CMAES},\n    },\n}\n\nconfig = Config(config)\n# load test dataset\nconfig, datasets = construct_problem_set(config)\n# initialize all baselines to compare (yours + others)\nbaselines, config = get_baseline(config)\n# initialize tester\ntester = Tester(config, baselines, datasets)\n# test\ntester.test()\n```\nBy default, MetaBox will automatically generate various visualized experimental results in your_dir/output/test/; enjoy these useful analyses!\n\n### High-level Development Usage\nWe sincerely encourage interested researchers to check out the **[Online Documentation](https://metaboxdoc.readthedocs.io/en/stable/index.html)** for more flexible usage of MetaBox-v2, such as implementing your own MetaBBO, customizing experimental design \u0026 analysis, using pre-collected metadata, and seamlessly calling APIs of other famous optimization 
repos.\n\n\n## Available Optimization Problem Set in MetaBox\n\n\u003ctable\u003e\n  \u003cthead\u003e\n    \u003ctr\u003e\n      \u003cth rowspan=\"2\" align=\"center\"\u003eType\u003c/th\u003e \u003c!-- Center the Type column --\u003e\n      \u003cth colspan=\"3\" align=\"center\"\u003eProblem Set\u003c/th\u003e \u003c!-- Center the Problem Set columns --\u003e\n      \u003cth rowspan=\"2\" align=\"center\"\u003eDescription\u003c/th\u003e \u003c!-- Center the Description column --\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth align=\"center\"\u003eName\u003c/th\u003e\n      \u003cth align=\"center\"\u003ePaper\u003c/th\u003e\n      \u003cth align=\"center\"\u003eCode\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"7\" align=\"center\"\u003eSingle-Objective Optimization\u003c/td\u003e \u003c!-- Center the Type column --\u003e\n      \u003ctd align=\"center\"\u003ebbob\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://arxiv.org/pdf/1603.08785\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/numbbo/coco\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003ebbob is based on the CoCo platform and includes 96 representative single-objective synthetic problem instances. These instances all originate from the same group of 24 objective functions (CoCo-BBOB), which have been used in many papers and are widely accepted as a gold standard for evaluating the robustness of an optimizer. In MetaBox-v2, bbob includes 4 subsets: bbob-10D, bbob-30D, bbob-noisy-10D and bbob-noisy-30D, each of which contains the 24 functions. \"noisy\" here indicates that Gaussian noise is added to the function's objective value before it is output, which significantly increases the solving difficulty. 
\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003ebbob-surrogate\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://arxiv.org/abs/2503.18060\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/GMC-DRL/Surr-RLDE\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003ebbob-surrogate includes 72 problem instances, each of which is a surrogate model. Specifically, it can be divided into 3 subsets: bbob-surrogate-2D, bbob-surrogate-5D and bbob-surrogate-10D, each of which corresponds to 24 bbob problems. We first train KAN or MLP networks to fit the 24 black-box functions from bbob, then use the more accurate one as the surrogate model. This set is mainly developed for users who aim to explore the potential of surrogate models in MetaBBO.\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003ehpo-b\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://arxiv.org/pdf/2106.06257\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/machinelearningnuremberg/HPO-B\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003ehpo-b is an autoML hyper-parameter optimization benchmark which includes a wide range of hyperparameter optimization tasks for 16 different model types (e.g., SVM, XGBoost, etc.), resulting in a total of 935 problem instances. The dimensions of these problem instances range from 2 to 16. 
We also note that HPO-B represents problems with ill-conditioned landscapes, such as large flat regions.\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003euav\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://arxiv.org/abs/2501.14503\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://zenodo.org/records/12793991\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e uav provides 56 terrain-based landscapes as realistic Unmanned Aerial Vehicle (UAV) path planning problems, each of which is 30D. The objective is to select a given number of path nodes (x,y,z coordinates) from the 3D space, so that the UAV can follow as short a path as possible in a collision-free way.  \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003ene\u003cbr\u003e(large-scale)\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://ieeexplore.ieee.org/abstract/document/10499977\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/EMI-Group/evox\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003eThis problem set is based on the neuroevolution interfaces in \u003ca href=\"https://evox.readthedocs.io/en/latest/examples/brax.html\"\u003eEvoX\u003c/a\u003e. The goal is to optimize the parameters of neural network-based RL agents for a series of robotic control tasks. We pre-define 11 control tasks (e.g., swimmer, ant, walker2D, etc.), and 6 MLP structures with 0~5 hidden layers. 
The combinations of task \u0026 network structure result in 66 problem instances, which feature extremely high-dimensional problems (\u003e=1000D).\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003eprotein\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://onlinelibrary.wiley.com/doi/abs/10.1002/prot.22830\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://zlab.wenglab.org/benchmark/\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003eprotein is a protein-docking benchmark, where the objective is to minimize the Gibbs free energy resulting from the protein-protein interaction between a given complex and any other conformation. We select 28 protein complexes and randomly initialize 10 starting points for each complex, resulting in 280 problem instances. To simplify the problem structure, we only optimize 12 interaction points in a complex instance (a 12D problem).\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003elsgo\u003cbr\u003e(large-scale)\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://al-roomi.org/multimedia/CEC_Database/CEC2015/LargeScaleGlobalOptimization/CEC2015_LargeScaleGO_TechnicalReport.pdf\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/dmolina/cec2013lsgo\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\n        lsgo contains 20 large-scale problem instances (\u003e=905D, 
\u003c=1000D):\n        \u003cbr\u003e\n        \u003col\u003e\n          \u003cli\u003eFully-separable functions (F1-F3)\u003c/li\u003e\n          \u003cli\u003eTwo types of partially separable functions:\n            \u003col\u003e\n              \u003cli\u003ePartially separable functions with a set of non-separable subcomponents and one fully-separable subcomponent (F4-F7)\u003c/li\u003e\n              \u003cli\u003ePartially separable functions with only a set of non-separable subcomponents and no fully-separable subcomponent (F8-F11)\u003c/li\u003e\n            \u003c/ol\u003e\n          \u003c/li\u003e\n          \u003cli\u003eTwo types of overlapping functions:\n            \u003col\u003e\n              \u003cli\u003eOverlapping functions with conforming subcomponents (F12-F13)\u003c/li\u003e\n              \u003cli\u003eOverlapping functions with conflicting subcomponents (F14)\u003c/li\u003e\n            \u003c/ol\u003e\n          \u003c/li\u003e\n          \u003cli\u003eFully-nonseparable functions (F15)\u003c/li\u003e\n        \u003c/ol\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"2\" align=\"center\"\u003eMulti-Objective Optimization\u003c/td\u003e \u003c!-- Center the Type column --\u003e\n      \u003ctd align=\"center\"\u003emoo-synthetic\u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://ieeexplore.ieee.org/abstract/document/6787994\"\u003eZDT\u003c/a\u003e\u003cbr\u003e\n        \u003ca href=\"https://www.al-roomi.org/multimedia/CEC_Database/CEC2009/MultiObjectiveEA/CEC2009_MultiObjectiveEA_TechnicalReport.pdf\"\u003eUF\u003c/a\u003e\u003cbr\u003e\n        \u003ca href=\"https://ieeexplore.ieee.org/abstract/document/1007032\"\u003eDTLZ\u003c/a\u003e\u003cbr\u003e\n        \u003ca href=\"https://ieeexplore.ieee.org/abstract/document/1705400\"\u003eWFG\u003c/a\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\u003ca 
href=\"https://github.com/anyoptimization/pymoo\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e moo-synthetic is constructed by mixing 4 well-known multi-objective problem sets: ZDT, UF, DTLZ and WFG. In total, we have constructed 187 problem instances. Their objective counts range from 2 to 10, and their dimensions range from 6D to 38D. \u003c/td\u003e\n   \u003c/tr\u003e \n   \u003ctr\u003e\n      \u003ctd align=\"center\"\u003emoo-uav\u003c/td\u003e\n      \u003ctd\u003e\n        \u003ca href=\"https://ieeexplore.ieee.org/abstract/document/6787994\"\u003ePaper\u003c/a\u003e\u003cbr\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/anyoptimization/pymoo\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e We decompose the objective value of instances in uav into 5 separate objectives, which results in 56 30D realistic 5-objective problem instances. \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"1\" align=\"center\"\u003eMulti-Modal Optimization\u003c/td\u003e \u003c!-- Center the Type column --\u003e\n      \u003ctd align=\"center\"\u003emmo\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://web.xidian.edu.cn/xlwang/files/20150312_175833.pdf\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://github.com/mikeagn/CEC2013\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e mmo is based on the CEC2013 multi-modal benchmark and specially crafted for multi-modal optimization; it includes 20 synthetic problem instances covering various dimensions (1D~20D), each with a varying number (1~216) of global optima. 
Among them, F1 to F5 are simple uni-modal functions, F6 to F10 are dimension-scalable functions with multiple global optima, and F11 to F20 are complex composition functions with challenging landscapes.\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"3\" align=\"center\"\u003eMulti-Task Optimization\u003c/td\u003e \u003c!-- Center the Type column --\u003e\n      \u003ctd align=\"center\"\u003ecec2017mto\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"https://arxiv.org/abs/1706.03470\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"http://www.bdsc.site/websites/MTO/index.html\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e cec2017mto comprises 9 multi-task problem instances, each of which contains two basic problems. Optional basic problems include Sphere, Rosenbrock, Ackley, Rastrigin, Griewank, Weierstrass and Schwefel, with dimensions ranging from 25D to 50D. \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003ewcci2020\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"http://www.bdsc.site/websites/MTO_competition_2020/MTO_Competition_WCCI_2020.html\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"http://www.bdsc.site/websites/MTO/index.html\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e wcci2020 comprises 10 multi-task problem instances, each of which contains 50 basic problems. Optional basic problems include Sphere, Rosenbrock, Ackley, Rastrigin, Griewank, Weierstrass and Schwefel, which are all 50D. 
\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003eaugmented-wcci2020\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"http://www.bdsc.site/websites/MTO_competition_2020/MTO_Competition_WCCI_2020.html\"\u003ePaper\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003ca href=\"http://www.bdsc.site/websites/MTO/index.html\"\u003eCode\u003c/a\u003e\u003c/td\u003e\n      \u003ctd\u003e augmented-wcci2020 comprises 127 multi-task problems, each of which optionally contains 1~7 basic problems. Optional basic problems include Sphere, Rosenbrock, Ackley, Rastrigin, Griewank, Weierstrass and Schwefel, which are all 50D. \u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n\n## Available BBO/MetaBBO Baselines in MetaBox\n\n|Baseline Name|Target Optimization Scenario|Type|Paper|Year|\n|---|---|---|---|---|\n|[Random_search](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/random_search.py)|||||\n|[PSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/pso.py)|Single-Objective Optimization|BBO|[Particle swarm optimization](https://ieeexplore.ieee.org/abstract/document/488968)|1995|\n|[DE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/de.py)|Single-Objective Optimization|BBO|[Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces](https://dl.acm.org/doi/abs/10.1023/A%3A1008202821328)|1997|\n|[CMAES](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/cmaes.py)|Single-Objective Optimization|BBO|[Completely Derandomized Self-Adaptation in Evolution Strategies](https://ieeexplore.ieee.org/document/6790628)|2001|\n|[SHADE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/shade.py)|Single-Objective Optimization|BBO|[Success-history based parameter adaptation for differential 
evolution](https://ieeexplore.ieee.org/document/6557555)|2013|\n|[GLPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/glpso.py)|Single-Objective Optimization|BBO|[Genetic Learning Particle Swarm Optimization](https://ieeexplore.ieee.org/abstract/document/7271066/)|2015|\n|[SDMSPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/sdmspso.py)|Single-Objective Optimization|BBO|[A Self-adaptive Dynamic Particle Swarm Optimizer](https://ieeexplore.ieee.org/document/7257290)|2015|\n|[SAHLPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/sahlpso.py)|Single-Objective Optimization|BBO|[Self-Adaptive two roles hybrid learning strategies-based particle swarm optimization](https://www.sciencedirect.com/science/article/pii/S0020025521006988)|2021|\n|[JDE21](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/sahlpso.py)|Single-Objective Optimization|BBO|[Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization: Algorithm j21](https://ieeexplore.ieee.org/document/9504782)|2021|\n|[MADDE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/madde.py)|Single-Objective Optimization|BBO|[Improving Differential Evolution through Bayesian Hyperparameter Optimization](https://ieeexplore.ieee.org/document/9504792)|2021|\n|[NLSHADELBC](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/nlshadelbc.py)|Single-Objective Optimization|BBO|[NL-SHADE-LBC algorithm with linear parameter adaptation bias change for CEC 2022 Numerical Optimization](https://ieeexplore.ieee.org/abstract/document/9870295)|2022|\n|[MOEAD](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/moead.py)|Multi-Objective Optimization|BBO|[MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition](https://ieeexplore.ieee.org/document/4358754)|2007|\n|[MFEA](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/bbo/mfea.py)|Multi-Task 
Optimization|BBO|[Multifactorial Evolution: Toward Evolutionary Multitasking](https://ieeexplore.ieee.org/abstract/document/7161358)|2016|\n|[RNNOPT](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rnnopt.py)|Single-Objective Optimization|MetaBBO|[Learning to learn without gradient descent by gradient descent](https://dl.acm.org/doi/10.5555/3305381.3305459)|2017|\n|[QLPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/qlpso.py)|Single-Objective Optimization|MetaBBO|[A reinforcement learning-based communication topology in particle swarm optimization](https://link.springer.com/article/10.1007/s00521-019-04527-9)|2019|\n|[DEDDQN](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/deddqn.py)|Single-Objective Optimization|MetaBBO|[Deep reinforcement learning based parameter control in differential evolution](https://dl.acm.org/doi/10.1145/3321707.3321813)|2019|\n|[DEDQN](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/dedqn.py)|Single-Objective Optimization|MetaBBO|[Differential evolution with mixed mutation strategy based on deep reinforcement learning](https://www.sciencedirect.com/science/article/pii/S1568494621005998)|2021|\n|[LDE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/lde.py)|Single-Objective Optimization|MetaBBO|[Learning Adaptive Differential Evolution Algorithm From Optimization Experiences by Policy Gradient](https://ieeexplore.ieee.org/document/9359652)|2021|\n|[RLPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rlpso.py)|Single-Objective Optimization|MetaBBO|[Employing reinforcement learning to enhance particle swarm optimization methods](https://www.tandfonline.com/doi/full/10.1080/0305215X.2020.1867120)|2021|\n|[RLEPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rlepso.py)|Single-Objective Optimization|MetaBBO|[RLEPSO:Reinforcement learning based Ensemble particle swarm 
optimizer](https://dl.acm.org/doi/abs/10.1145/3508546.3508599)|2022|\n|[RLHPSDE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rlhpsde.py)|Single-Objective Optimization|MetaBBO|[Differential evolution with hybrid parameters and mutation strategies based on reinforcement learning](https://www.sciencedirect.com/science/article/pii/S2210650222001602)|2022|\n|[NRLPSO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/nrlpso.py)|Single-Objective Optimization|MetaBBO|[Reinforcement learning-based particle swarm optimization with neighborhood differential mutation strategy](https://www.sciencedirect.com/science/article/abs/pii/S2210650223000482)|2023|\n|[OPRO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/opro.py)|Single-Objective Optimization|MetaBBO|[Large Language Models as Optimizers](https://arxiv.org/abs/2309.03409)|2024|\n|[RLDAS](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rldas.py)|Single-Objective Optimization|MetaBBO|[Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution](https://ieeexplore.ieee.org/abstract/document/10496708)|2024|\n|[SYMBOL](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/symbol.py)|Single-Objective Optimization|MetaBBO|[SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning](https://openreview.net/forum?id=vLJcd43U7a)|2024|\n|[GLEET](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/gleet.py)|Single-Objective Optimization|MetaBBO|[Auto-configuring Exploration-Exploitation Tradeoff in Evolutionary Computation via Deep Reinforcement Learning](https://arxiv.org/abs/2404.08239)|2024|\n|[RLDEAFL](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rldeafl.py)|Single-Objective Optimization|MetaBBO|[Reinforcement Learning-based Self-adaptive Differential Evolution through Automated Landscape Feature 
Learning](https://arxiv.org/abs/2503.18061)|2025|\n|[Surr_RLDE](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/surrrlde.py)|Single-Objective Optimization|MetaBBO|[Surrogate Learning in Meta-Black-Box Optimization: A Preliminary Study](https://arxiv.org/abs/2503.18060)|2025|\n|[MADAC](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/madac.py)|Multi-Objective Optimization|MetaBBO|[Multi-agent Dynamic Algorithm Configuration](https://proceedings.neurips.cc/paper_files/paper/2022/hash/7f02b39c0424cc4a422994289ca03e46-Abstract-Conference.html)|2022|\n|[LGA](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/lga.py)|Large Scale Global Optimization|MetaBBO|[Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization](https://dl.acm.org/doi/abs/10.1145/3583131.3590496)|2023|\n|[LES](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/les.py)|Large Scale Global Optimization|MetaBBO|[Discovering evolution strategies via meta-black-box optimization](https://iclr.cc/virtual/2023/poster/11005)|2023|\n|[GLHF](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/glhf.py)|Large Scale Global Optimization|MetaBBO|[Pretrained Optimization Model for Zero-Shot Black Box Optimization](https://link.springer.com/article/10.1007/s00521-019-04527-9)|2024|\n|[B2OPT](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/b2opt.py)|Large Scale Global Optimization|MetaBBO|[B2Opt: Learning to Optimize Black-box Optimization with Little Budget](https://ojs.aaai.org/index.php/AAAI/article/view/34036)|2025|\n|[PSORLNS](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/psorlns.py)|Multi-Modal Optimization|MetaBBO|[A reinforcement learning-based neighborhood search operator for multi-modal optimization and its 
applications](https://www.sciencedirect.com/science/article/abs/pii/S0957417424000150)|2024|\n|[RLEMMO](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/rlemmo.py)|Multi-Modal Optimization|MetaBBO|[RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning](https://dl.acm.org/doi/abs/10.1145/3638529.3653995)|2024|\n|[L2T](https://github.com/MetaEvo/MetaBox/blob/v2.0.0/src/baseline/metabbo/l2t.py)|Multi-Task Optimization|MetaBBO|[Learning to Transfer for Evolutionary Multitasking](https://arxiv.org/abs/2406.14359)|2024|\n\n\n## Citing MetaBox\n\nThe PDF version of the paper is available [here](https://arxiv.org/abs/2310.08252). If you find MetaBox useful, please cite it in your publications or projects.\n\n```latex\n@inproceedings{metabox-v1,\nauthor={Ma, Zeyuan and Guo, Hongshu and Chen, Jiacheng and Li, Zhenrui and Peng, Guojun and Gong, Yue-Jiao and Ma, Yining and Cao, Zhiguang},\ntitle={MetaBox: A Benchmark Platform for Meta-Black-Box Optimization with Reinforcement Learning},\nbooktitle = {Advances in Neural Information Processing Systems},\nyear={2023},\n}\n\n@inproceedings{metabox-v2,\ntitle={MetaBox-v2: A Unified Benchmark Platform for Meta-Black-Box Optimization},\nauthor={Zeyuan Ma and Yue-Jiao Gong and Hongshu Guo and Wenjie Qiu and Sijie Ma and Hongqiao Lian and Jiajun Zhan and Kaixu Chen and Chen Wang and Zhiyang Huang and Zechuan Huang and Guojun Peng and Ran Cheng and Yining Ma},\nbooktitle={Advances in Neural Information Processing Systems},\nyear={2025},\n}\n```\n\n## 😁Contact Us\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://github.com/MetaEvo/.github/blob/main/profile/logo.png\" width=\"20%\"\u003e\n\u003c/div\u003e\n👨‍💻👩‍💻We are a research team mainly focusing on Meta-Black-Box Optimization (MetaBBO),\n     which assists automated algorithm design for Evolutionary Computation. \n\nHere is our [homepage](https://metaevo.github.io/) and [github](https://github.com/MetaEvo). 
**🥰🥰🥰Please feel free to contact us; any suggestions are welcome!**\n\nIf you have any questions or want to contact us:\n- 🌱Fork, Add, and Merge\n- ❓️Report an [issue](https://github.com/MetaEvo/MetaBox/issues)\n- 📧Contact WenJie Qiu ([wukongqwj@gmail.com](mailto:wukongqwj@gmail.com))\n- 🚨**We warmly invite you to join our QQ group for further communication (Group Number: 952185139).**\n\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmetaevo%2Fmetabox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmetaevo%2Fmetabox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmetaevo%2Fmetabox/lists"}