{"id":42006436,"url":"https://github.com/emi-group/evomo","last_synced_at":"2026-01-26T02:01:10.259Z","repository":{"id":284500849,"uuid":"786765607","full_name":"EMI-Group/evomo","owner":"EMI-Group","description":"EvoMO is a GPU-accelerated library for evolutionary multiobjective optimization (EMO)","archived":false,"fork":false,"pushed_at":"2026-01-01T18:44:49.000Z","size":1028,"stargazers_count":200,"open_issues_count":1,"forks_count":24,"subscribers_count":11,"default_branch":"main","last_synced_at":"2026-01-07T02:25:49.450Z","etag":null,"topics":["evolutionary-algorithms","evolutionary-computation","gpu-acceleration","gpu-computing","multiobjective-optimization","pytorch"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EMI-Group.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-04-15T09:04:28.000Z","updated_at":"2026-01-06T00:26:20.000Z","dependencies_parsed_at":"2025-05-02T09:26:57.255Z","dependency_job_id":"95b3c1f5-fc8c-43ad-a208-ea4ea41f65cd","html_url":"https://github.com/EMI-Group/evomo","commit_stats":null,"previous_names":["emi-group/evomo"],"tags_count":6,"template":false,"template_full_name":null,"purl":"pkg:github/EMI-Group/evomo","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevomo","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevomo/tags","releases_url":"https://repos.ecos
yste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevomo/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevomo/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EMI-Group","download_url":"https://codeload.github.com/EMI-Group/evomo/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevomo/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28764403,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-26T00:37:26.264Z","status":"online","status_checked_at":"2026-01-26T02:00:08.215Z","response_time":59,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["evolutionary-algorithms","evolutionary-computation","gpu-acceleration","gpu-computing","multiobjective-optimization","pytorch"],"created_at":"2026-01-26T02:01:01.135Z","updated_at":"2026-01-26T02:01:10.249Z","avatar_url":"https://github.com/EMI-Group.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e\n  \u003ca href=\"https://github.com/EMI-Group/evox\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"docs/images/evox_logo_dark.png\"\u003e\n    \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"docs/images/evox_logo_light.png\"\u003e\n      \u003cimg alt=\"EvoX Logo\" height=\"50\" src=\"docs/images/evox_logo_light.png\"\u003e\n  
\u003c/picture\u003e\n  \u003c/a\u003e\n  \u003cbr\u003e\n\u003c/h1\u003e\n\n\u003ch2 align=\"center\"\u003e\n🌟 EvoMO: Bridging Evolutionary Multiobjective Optimization and GPU Acceleration via Tensorization 🌟\n\u003c/h2\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"http://arxiv.org/abs/2503.20286\"\u003e\n    \u003cimg src=\"https://img.shields.io/badge/paper-arxiv-red?style=for-the-badge\" alt=\"EvoMO Paper on arXiv\"\u003e\n  \u003c/a\u003e\n\u003c/div\u003e\n\n## Table of Contents\n\n1. [Overview](#overview)\n2. [Key Features](#key-features)\n3. [Installation Guide](#installation-guide)\n4. [Examples](#examples)\n5. [Publications on EvoMO](#publications-on-evomo)\n6. [Community \u0026 Support](#community--support)\n7. [Citing EvoMO](#citing-evomo)\n8. [Contributors](#contributors)\n\n## Overview  \n\nEvoMO is a GPU-accelerated library for evolutionary multiobjective optimization (EMO) that leverages advanced tensorization techniques. By transforming key data structures and operations into tensor representations, EvoMO enables more efficient mathematical modeling and delivers significant performance improvements. Designed with scalability in mind, EvoMO can efficiently handle large populations and complex optimization tasks. Additionally, EvoMO includes MoRobtrol, a multiobjective robot control benchmark suite, providing a platform for testing tensorized EMO algorithms in real-world, black-box environments. EvoMO is a sister project of [EvoX](https://github.com/EMI-Group/evox).  \n\n\u003e [!NOTE]\n\u003e To use the JAX version of EvoMO, you can switch to the `v0.0.1-dev` branch. 
This branch is fully compatible with EvoX version 0.9.0.\n\u003e \n## Key Features\n\n### 💻 High-Performance Computing\n\n#### 🚀 General Tensorization Methodology\n- **EvoMO** adopts a unified tensorization approach, restructuring EMO algorithms into tensor representations, enabling efficient GPU acceleration.\n\n#### ⚡ Ultra Performance\n- Supports tensorized implementations of **NSGA-II**, **NSGA-III**, **MOEA/D**, **RVEA**, **HypE**, and more, achieving up to **1113× speedup** while preserving solution quality.\n\n#### 📈 Scalability\n- Handles large populations, scaling to hundreds of thousands for complex optimization tasks, ensuring scalability for real-world applications.\n\n\n### 📊 Benchmarking\n\n#### 🤖 MoRobtrol Benchmark\n- Includes **MoRobtrol**, a multiobjective robot control benchmark, for testing tensorized EMO algorithms in challenging black-box environments.\n\n### 🔧 Easy-to-Use Integration\n\n#### 📦 Standalone EvoMO Package\n- EvoMO is now available as an independent repository, allowing users to easily access multiobjective optimization algorithms and benchmark problems via `import evomo` for improved discoverability and usability.\n\n\n## Installation Guide\n\n\nTo install EvoMO, you need to install EvoX first. \n\n\n1. Install EvoX:\n\n```bash\npip install evox\n```\n\n   \n2. 
Install EvoMO:\n\n```bash\npip install evomo\n```\n\n\nFor the latest development version, you can install from source:\n\n```bash\ngit clone https://github.com/EMI-Group/evomo.git\ncd evomo\npip install -e .\n```\n\n## Examples\n\n### Numerical Optimization Problem\n\nSolve the DTLZ2 problem using the TensorMOEA/D algorithm:\n\n```python\nimport time\n\nimport torch\nfrom evox.metrics import igd\nfrom evox.problems.numerical import DTLZ2\nfrom evox.workflows import StdWorkflow\n\nfrom evomo.algorithms import TensorMOEAD\n\nif __name__ == \"__main__\":\n    torch.set_default_device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n    algo = TensorMOEAD(pop_size=100, n_objs=3, lb=torch.zeros(12), ub=torch.ones(12))\n    prob = DTLZ2(m=3)\n    pf = prob.pf()\n    workflow = StdWorkflow(algo, prob)\n    workflow.init_step()\n    jit_state_step = workflow.step\n\n    t = time.time()\n    for i in range(100):\n        jit_state_step()\n        fit = workflow.algorithm.fit\n        fit = fit[~torch.any(torch.isnan(fit), dim=1)]  # drop unevaluated rows\n        print(f\"Generation {i + 1} IGD: {igd(fit, pf)}\")\n\n    print(f\"Total time: {time.time() - t} seconds\")\n```\n\n\u003e [!NOTE]  \n\u003e **For Windows users**: If you encounter `FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\Users\\\\...'`, it may be caused by the system path length limitation.  
\n\u003e Please enable long path support to resolve this issue.\n\n\n\n### MoRobtrol\n\nSolve the MoSwimmer problem in MoRobtrol using the TensorMOEA/D algorithm:\n\n```python\nimport time\n\nimport torch\nimport torch.nn as nn\nfrom evox.utils import ParamsAndVector\nfrom evox.workflows import EvalMonitor, StdWorkflow\n\nfrom evomo.algorithms import TensorMOEAD\nfrom evomo.problems.neuroevolution import MoRobtrol\n\n\nclass SimpleMLP(nn.Module):\n    def __init__(self):\n        super(SimpleMLP, self).__init__()\n        self.features = nn.Sequential(nn.Linear(8, 4), nn.Tanh(), nn.Linear(4, 2))\n\n    def forward(self, x):\n        return torch.tanh(self.features(x))\n\n\ndef setup_workflow(model, pop_size, max_episode_length, num_episodes, device):\n    adapter = ParamsAndVector(dummy_model=model)\n    model_params = dict(model.named_parameters())\n    pop_center = adapter.to_vector(model_params)\n    lower_bound = torch.full_like(pop_center, -5)\n    upper_bound = torch.full_like(pop_center, 5)\n\n    problem = MoRobtrol(\n        policy=model,\n        env_name=\"mo_swimmer\",\n        max_episode_length=max_episode_length,\n        num_episodes=num_episodes,\n        pop_size=pop_size,\n        device=device,\n        num_obj=2,\n        observation_shape=8,\n        obs_norm=torch.tensor([5.0, 1e-6, 1e6], device=device),\n    )\n\n    algorithm = TensorMOEAD(\n        pop_size=pop_size, lb=lower_bound, ub=upper_bound, n_objs=2, device=device\n    )\n    monitor = EvalMonitor(device=device)\n\n    workflow = StdWorkflow(\n        algorithm=algorithm,\n        problem=problem,\n        monitor=monitor,\n        opt_direction=\"max\",\n        solution_transform=adapter,\n        device=device,\n    )\n    return workflow\n\n\ndef run_workflow(workflow, compiled=False, generations=10):\n    workflow.init_step()\n    step_function = torch.compile(workflow.step) if compiled else workflow.step\n    for index in range(generations):\n        print(f\"In generation 
{index}:\")\n        t = time.time()\n        step_function()\n        print(f\"\\tFitness: {-workflow.algorithm.fit}.\")\n        print(f\"\\tTime elapsed: {time.time() - t:.4f}(s).\")\n\n\nif __name__ == \"__main__\":\n    device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n    model = SimpleMLP().to(device)\n    workflow = setup_workflow(model, 12, 100, 2, device)\n    run_workflow(workflow)\n\n```\n\n## Publications on EvoMO\n- Hao Li, Zhenyu Liang, and Ran Cheng, “GPU-accelerated evolutionary many-objective optimization using tensorized NSGA-III,” in *IEEE Congress on Evolutionary Computation*, 2025. [[📄 Paper](https://arxiv.org/abs/2504.06067)] | [[🧐 Read More](docs/papers/tensornsga3_cec25.md)]\n- Zhenyu Liang, Tao Jiang, Kebin Sun, and Ran Cheng, “GPU-accelerated evolutionary multiobjective optimization using tensorized RVEA,” in *Proceedings of the Genetic and Evolutionary Computation Conference*, 2024, pp. 566–575. [[📄 Paper](https://arxiv.org/abs/2404.01159)] | [[🧐 Read More](https://github.com/EMI-Group/tensorrvea)]\n\n## Community \u0026 Support\n\nWe welcome contributions and look forward to your feedback!\n- Engage in discussions and share your experiences on [GitHub Issues](https://github.com/EMI-Group/evomo/issues).\n- Join our QQ group (ID: 297969717).\n\n## Citing EvoMO\n\nIf you use EvoMO in your research, please cite:\n```bibtex\n@article{evomo,\n  title = {Bridging Evolutionary Multiobjective Optimization and {GPU} Acceleration via Tensorization},\n  author = {Liang, Zhenyu and Li, Hao and Yu, Naiwei and Sun, Kebin and Cheng, Ran},\n  journal = {IEEE Transactions on Evolutionary Computation},\n  year = 2025,\n  doi = {10.1109/TEVC.2025.3555605}\n}\n```\n\n## Contributors\n\nThanks to the following people who contributed to this project: [Zhenyu2Liang](https://github.com/Zhenyu2Liang), [Nam-dada](https://github.com/Nam-dada), [LiHao-MS](https://github.com/LiHao-MS), [XU-Boqing](https://github.com/XU-Boqing), 
[sherry-zx](https://github.com/sherry-zx), [BillHuang2001](https://github.com/BillHuang2001), [ranchengcn](https://github.com/ranchengcn).\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femi-group%2Fevomo","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Femi-group%2Fevomo","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femi-group%2Fevomo/lists"}