{"id":32807670,"url":"https://github.com/emi-group/evox","last_synced_at":"2025-11-06T16:02:53.734Z","repository":{"id":65607853,"uuid":"518292119","full_name":"EMI-Group/evox","owner":"EMI-Group","description":"Distributed GPU-Accelerated Framework for Evolutionary Computation. Comprehensive Library of Evolutionary Algorithms \u0026 Benchmark Problems.","archived":false,"fork":false,"pushed_at":"2025-11-06T13:06:00.000Z","size":44878,"stargazers_count":1433,"open_issues_count":0,"forks_count":202,"subscribers_count":67,"default_branch":"main","last_synced_at":"2025-11-06T14:23:18.601Z","etag":null,"topics":["black-box-optimization","brax","derivative-free-optimization","evolutionary-algorithms","evolutionary-computation","evolutionary-optimization","evolutionary-reinforcement-learinig","evolutionary-strategies","gpu-acceleration","gradient-free-optimization","gym","jax","metaheuristics","multi-objective-optimization","neuroevolution","population-based-optimization","pytorch","ray"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/EMI-Group.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2022-07-27T03:11:41.000Z","updated_at":"2025-11-06T13:05:42.000Z","dependencies_parsed_at":"2023-11-06T08:30:40.423Z","dependency_job_id":"cf888e19-f7cc-4139-9d97-a76da4279a25","html_url":"https://github.com/EMI-Group/evox","commit_stats":{"total_commits":307,"total_committers":8,"mean_com
mits":38.375,"dds":0.1368078175895765,"last_synced_commit":"338e13b0604ba3ac814e59e4119c03a07dc07cf0"},"previous_names":[],"tags_count":30,"template":false,"template_full_name":null,"purl":"pkg:github/EMI-Group/evox","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevox/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/EMI-Group","download_url":"https://codeload.github.com/EMI-Group/evox/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/EMI-Group%2Fevox/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":283037048,"owners_count":26768591,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-11-06T02:00:06.180Z","response_time":55,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["black-box-optimization","brax","derivative-free-optimization","evolutionary-algorithms","evolutionary-computation","evolutionary-optimization","evolutionary-reinforcement-learinig","evolutionary-strategies","gpu-acceleration","gradient-free-optimization","gym","jax","metaheuristi
cs","multi-objective-optimization","neuroevolution","population-based-optimization","pytorch","ray"],"created_at":"2025-11-06T16:01:59.123Z","updated_at":"2025-11-06T16:02:53.728Z","avatar_url":"https://github.com/EMI-Group.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"docs/source/_static/evox_logo_dark.png\"\u003e\n    \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"docs/source/_static/evox_logo_light.png\"\u003e\n    \u003cimg alt=\"EvoX Logo\" height=\"128\" width=\"500px\" src=\"docs/source/_static/evox_logo_light.png\"\u003e\n  \u003c/picture\u003e\n\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/pso_result.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/pso_result.gif\" alt=\"PSO Result\" height=\"150\"\u003e\n  \u003c/picture\u003e\n  \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/rvea_result.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/rvea_result.gif\" alt=\"RVEA Result\" height=\"150\"\u003e\n  \u003c/picture\u003e\n  \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/halfcheetah_200.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/halfcheetah_200.gif\" alt=\"HalfCheetah 200\" height=\"150\"\u003e\n  \u003c/picture\u003e\n\u003c/p\u003e\n\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://arxiv.org/abs/2301.12457\"\u003e\u003cimg src=\"https://img.shields.io/badge/arxiv-2212.05652-red\" alt=\"arXiv\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://evox.readthedocs.io/en/latest/index.html\"\u003e\u003cimg src=\"https://img.shields.io/badge/readthedocs-docs-green?logo=readthedocs\" alt=\"Documentation\"\u003e\u003c/a\u003e\n  \u003ca 
href=\"https://pypi.org/project/evox/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/v/evox?logo=python\" alt=\"PyPI Version\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://pypi.org/project/evox/\"\u003e\u003cimg src=\"https://img.shields.io/badge/python-3.10+-orange?logo=python\" alt=\"Python Version\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://discord.gg/Vbtgcpy7G4\"\u003e\u003cimg src=\"https://img.shields.io/badge/discord-evox-%235865f2?logo=discord\" alt=\"Discord Server\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://qm.qq.com/q/vTPvoMUGAw\"\u003e\u003cimg src=\"https://img.shields.io/badge/QQ-297969717-%231db4f4?logo=tencentqq\" alt=\"QQ Group\"\u003e\u003c/a\u003e\n\u003c/div\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"./README.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/English-f6f5f4\" alt=\"English README\"\u003e\u003c/a\u003e\n  \u003ca href=\"./README_ZH.md\"\u003e\u003cimg src=\"https://img.shields.io/badge/中文-f6f5f4\" alt=\"中文 README\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\n\u003ch3 align=\"center\"\u003e 🌟Distributed GPU-accelerated Framework for Scalable Evolutionary Computation🌟 \u003c/h3\u003e\n\n---\n\n\n## 🔥 News\n- [2025-05-13] Released **EvoX 1.2.2** - 🚀 EvoX v1.2.2 release is now available, featuring the new Mujoco Playground and an official tutorial! [[Details](https://evox.group/index.php?m=home\u0026c=View\u0026a=index\u0026aid=157)]\n- [2025-05-13] Released **EvoMO 0.2.0**: A GPU-accelerated library for **Evolutionary Multiobjective Optimization**. [[Paper](https://arxiv.org/abs/2503.20286)] [[Code](https://github.com/EMI-Group/evomo)]\n- [2025-02-03] Released **EvoRL**: A GPU-accelerated framework for **Evolutionary Reinforcement Learning**, powered by **JAX** ! [[Paper](https://arxiv.org/abs/2501.15129)] [[Code](https://github.com/EMI-Group/evorl)]\n- [2025-01-30] Released **EvoGP**: A GPU-accelerated framework for **Genetic Programming**, powered by **PyTorch** \u0026 **CUDA**! 
[[Paper](http://arxiv.org/abs/2501.17168)] [[Code](https://github.com/EMI-Group/evogp)]\n- [2025-01-14] Released **EvoX 1.0.0** - now fully compatible with **PyTorch**, with full `torch.compile` support! Users of the previous **JAX-based version** can access it on the **v0.9.0 branch**.\n\n## Table of Contents\n\n1. [Overview](#Overview)\n2. [Key Features](#key-features)\n3. [Main Contents](#main-contents)\n4. [Installation Guide](#installation-guide)\n5. [Quick Start](#quick-start)\n6. [Sister Projects](#sister-projects)\n7. [Community \u0026 Support](#community--support)\n\n## Overview\n\nEvoX is a distributed GPU-accelerated evolutionary computation framework compatible with **PyTorch**.  With a user-friendly programming model, it offers a comprehensive suite of **50+ Evolutionary Algorithms (EAs)** and a wide range of **100+ Benchmark Problems/Environments**. For more details, please refer to our [Paper](https://arxiv.org/abs/2301.12457) and [Documentation](https://evox.readthedocs.io/en/latest/index.html) / [文档](https://evox.readthedocs.io/zh_CN/latest/index.html).\n\n\u003e [!NOTE]\n\u003e Users of the previous **JAX-based version** can access it on the **v0.9.0 branch**.\n\n\n## Key Features\n\n### 💻 High-Performance Computing\n\n#### 🚀 Ultra Performance\n- Supports acceleration on heterogeneous hardware, including both **CPUs** and **GPUs**, achieving over **100x speedups**.\n- Integrates **distributed workflows** that scale seamlessly across multiple nodes or devices.\n\n#### 🌐 All-in-One Solution\n- Includes **50+ algorithms** for a wide range of use cases, fully supporting **single- and multi-objective optimization**.\n- Provides a **hierarchical architecture** for complex tasks such as **meta learning**, **hyperparameter optimization**, and **neuroevolution**.\n\n#### 🛠️ Easy-to-Use Design\n- Fully compatible with **PyTorch** and its ecosystem, simplifying algorithmic development with a **tailored programming model**.\n- Ensures effortless setup with 
**one-click installation** for Windows users.\n\n\n### 📊 Versatile Benchmarking\n\n#### 📚 Extensive Benchmark Suites\n- Features **100+ benchmark problems** spanning single-objective optimization, multi-objective optimization, and real-world engineering challenges.\n\n#### 🎮 Support for Physics Engines\n- Integrates seamlessly with physics engines like **Brax** and other popular frameworks for reinforcement learning.\n\n#### ⚙️ Customizable Problems\n- Provides an **encapsulated module** for defining and evaluating custom problems tailored to user needs, with seamless integration into real-world applications and datasets.\n\n\n### 📈 Flexible Visualization\n\n#### 🔍 Ready-to-Use Tools\n- Offers a comprehensive set of **visualization tools** for analyzing evolutionary processes across various tasks.\n\n#### 🛠️ Customizable Modules\n- Enables users to integrate their own **visualization code**, allowing for tailored and flexible visualizations.\n\n#### 📂 Real-Time Data Streaming\n- Leverages the tailored **.exv format** to simplify and accelerate real-time data streaming.\n\n## Main Contents\n\n\u003ctable border=\"1\" cellspacing=\"0\" cellpadding=\"8\" style=\"border-collapse: collapse; width: 100%; text-align: left;\"\u003e\n  \u003cthead\u003e\n    \u003ctr style=\"background-color: #f2f2f2;\"\u003e\n      \u003cth\u003eCategory\u003c/th\u003e\n      \u003cth\u003eSubcategory\u003c/th\u003e\n      \u003cth\u003eNotable Algorithms / Benchmark Problems\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"3\"\u003eSingle-objective Optimization\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003eDifferential Evolution\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eCoDE, JaDE, SaDE, SHADE, IMODE, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cb\u003eEvolution Strategy\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eCMA-ES, PGPE, OpenES, CR-FM-NES, xNES, 
...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cb\u003eParticle Swarm Optimization\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eFIPS, CSO, CPSO, CLPSO, SL-PSO, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"3\"\u003eMulti-objective Optimization\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003eDominance-based\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eNSGA-II, NSGA-III, SPEA2, BiGE, KnEA, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cb\u003eDecomposition-based\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eMOEA/D, RVEA, t-DEA, MOEAD-M2M, EAG-MOEAD, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cb\u003eIndicator-based\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eIBEA, HypE, SRA, MaOEA-IGD, AR-MOEA, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd rowspan=\"2\"\u003eBenchmark Problems / Environments\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003eNumerical\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eDTLZ, LSMOP, MaF, ZDT, CEC'22, ...\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd\u003e\u003cb\u003eNeuroevolution / RL\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003eBrax, TorchVision Dataset, ...\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\nFor a comprehensive list and detailed descriptions of all algorithms, please check the [Algorithms API](https://evox.readthedocs.io/en/latest/apidocs/evox/evox.algorithms.html), and for benchmark problems/environments, refer to the [Problems API](https://evox.readthedocs.io/en/latest/apidocs/evox/evox.problems.html).\n\n\n## Installation Guide\n\nInstall `evox` with default feature sets via `pip`:\n\n```bash\npip install \"evox[default]\"\n```\n\nInstall the latest version from the source code for testing or development:\n\n```bash\ngit clone 
https://github.com/EMI-Group/evox.git\ncd evox\npip install -e .\n```\n\n\u003e [!TIP]\n\u003e Windows users can use the [win-install.bat](https://evox.readthedocs.io/en/latest/_downloads/796714545d73f0b52e921d885369323d/win-install.bat) script for installation.\n\n## Quick Start\n\nHere are some examples to get you started with EvoX:\n\n### Single-objective Optimization\n\nSolve the Ackley problem using the PSO algorithm:\n\n```python\nimport torch\nfrom evox.algorithms import PSO\nfrom evox.problems.numerical import Ackley\nfrom evox.workflows import StdWorkflow, EvalMonitor\n\n# torch.set_default_device(\"cuda\") # Uncomment this line if you want to use GPU by default\n\nalgorithm = PSO(pop_size=100, lb=-32 * torch.ones(10), ub=32 * torch.ones(10))\nproblem = Ackley()\nmonitor = EvalMonitor()\nworkflow = StdWorkflow(algorithm, problem, monitor)\nworkflow.init_step()\nfor i in range(100):\n    workflow.step()\n\nmonitor.plot() # or monitor.plot().show() if you are using headless mode\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003eExample Output\u003c/summary\u003e\n\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/1-single-objective-output.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/1-single-objective-output.png\"\u003e\n  \u003c/picture\u003e\n\n\u003c/details\u003e\n\n### Multi-objective Optimization\n\nSolve the DTLZ2 problem using the RVEA algorithm:\n\n```python\nimport torch\nfrom evox.algorithms import RVEA\nfrom evox.problems.numerical import DTLZ2\nfrom evox.workflows import StdWorkflow, EvalMonitor\n\n# torch.set_default_device(\"cuda\") # Uncomment this line if you want to use GPU by default\n\nprob = DTLZ2(m=2)\npf = prob.pf()\nalgo = RVEA(\n    pop_size=100,\n    n_objs=2,\n    lb=torch.zeros(12),\n    ub=torch.ones(12)\n)\nmonitor = EvalMonitor()\nworkflow = StdWorkflow(algo, prob, monitor)\nworkflow.init_step()\nfor i in range(100):\n    workflow.step()\n\nmonitor.plot() # or 
monitor.plot().show() if you are using headless mode\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003eExample Output\u003c/summary\u003e\n\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/2-multi-objective-output.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/2-multi-objective-output.png\"\u003e\n  \u003c/picture\u003e\n\n\u003c/details\u003e\n\n### Neuroevolution\n\nEvolving a simple MLP model to solve the Brax HalfCheetah environment:\n\n```python\nimport torch\nimport torch.nn as nn\nfrom evox.algorithms import PSO\nfrom evox.problems.neuroevolution.brax import BraxProblem\nfrom evox.utils import ParamsAndVector\nfrom evox.workflows import EvalMonitor, StdWorkflow\n\n# torch.set_default_device(\"cuda\") # Uncomment this line if you want to use GPU by default\n\nclass SimpleMLP(nn.Module):\n    def __init__(self):\n        super().__init__()\n        # observation space is 17-dim, action space is 6-dim.\n        self.features = nn.Sequential(nn.Linear(17, 8), nn.Tanh(), nn.Linear(8, 6))\n\n    def forward(self, x):\n        return torch.tanh(self.features(x))\n\n# Initialize the MLP model\nmodel = SimpleMLP()\nadapter = ParamsAndVector(dummy_model=model)\n# Set the population size\nPOP_SIZE = 1024\n# Get the bounds for the PSO algorithm\nmodel_params = dict(model.named_parameters())\npop_center = adapter.to_vector(model_params)\nlb = torch.full_like(pop_center, -5)\nub = torch.full_like(pop_center, 5)\n# Initialize PSO; any other algorithm can also be used here\nalgorithm = PSO(pop_size=POP_SIZE, lb=lb, ub=ub)\n# Initialize the Brax problem\nproblem = BraxProblem(\n    policy=model,\n    env_name=\"halfcheetah\",\n    max_episode_length=1000,\n    num_episodes=3,\n    pop_size=POP_SIZE,\n)\n# Set a monitor that records the top 3 best fitnesses\nmonitor = EvalMonitor(topk=3)\n# Initialize a workflow\nworkflow = StdWorkflow(\n    algorithm=algorithm,\n    problem=problem,\n    monitor=monitor,\n    
opt_direction=\"max\",\n    solution_transform=adapter,\n)\nworkflow.init_step()\nfor i in range(50):\n    workflow.step()\n\nmonitor.plot() # or monitor.plot().show() if you are using headless mode\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003eExample Output\u003c/summary\u003e\n\n  \u003cpicture\u003e\n    \u003csource type=\"image/avif\" srcset=\"docs/source/_static/3-neuroevolution-output.avif\"\u003e\n    \u003cimg src=\"docs/source/_static/3-neuroevolution-output.gif\"\u003e\n  \u003c/picture\u003e\n\n\u003c/details\u003e\n\n\u003e [!NOTE]\n\u003e For comprehensive guidance, please visit our [Documentation](https://evox.readthedocs.io/en/latest/), where you'll find detailed installation steps, tutorials, practical examples, and complete API references.\n\n## Sister Projects\n- **EvoRL**: GPU-accelerated framework for Evolutionary Reinforcement Learning. Check out [here](https://github.com/EMI-Group/evorl).\n- **EvoGP**: GPU-accelerated framework for Genetic Programming. Check out [here](https://github.com/EMI-Group/evogp).\n- **EvoMO**: GPU-accelerated library for Evolutionary Multiobjective Optimization (EMO). Check out [here](https://github.com/EMI-Group/evomo).\n- **TensorNEAT**: Tensorized NeuroEvolution of Augmenting Topologies (NEAT) for GPU Acceleration. Check out [here](https://github.com/EMI-Group/tensorneat).\n- **TensorACO**: Tensorized Ant Colony Optimization (ACO) for GPU Acceleration. Check out [here](https://github.com/EMI-Group/tensoraco).\n- **EvoXBench**: A real-world benchmark platform for solving various optimization problems, such as Neural Architecture Search (NAS). It operates without the need for GPUs/PyTorch/TensorFlow and supports multiple programming environments. Check out [here](https://github.com/EMI-Group/evoxbench).\n\nStay tuned - more exciting developments are on the way!  
✨\n\n## Community \u0026 Support\n\n- Join discussions on the [GitHub Discussion Board](https://github.com/EMI-Group/evox/discussions).\n- Connect via [Discord](https://discord.gg/Vbtgcpy7G4) or QQ group (ID: 297969717).\n- Visit the [Official Website](https://evox.group/).\n\n## Citing EvoX\n\nIf EvoX contributes to your research, please cite it:\n\n```bibtex\n@article{evox,\n  title = {{EvoX}: {A} {Distributed} {GPU}-accelerated {Framework} for {Scalable} {Evolutionary} {Computation}},\n  author = {Huang, Beichen and Cheng, Ran and Li, Zhuozhao and Jin, Yaochu and Tan, Kay Chen},\n  journal = {IEEE Transactions on Evolutionary Computation},\n  year = 2024,\n  doi = {10.1109/TEVC.2024.3388550}\n}\n```\n\n## License Notice\n\nEvoX is licensed under the **GNU General Public License v3.0 (GPL-3.0)**. For full terms and conditions, please refer to the [LICENSE](./LICENSE) file.\n\n\u003c!--\n## Star History\n[![Star History Chart](https://api.star-history.com/svg?repos=EMI-Group/evox\u0026type=Date)](https://star-history.com/#EMI-Group/evox\u0026Date)\n--\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femi-group%2Fevox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Femi-group%2Fevox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Femi-group%2Fevox/lists"}