<p align="center">
    <h1 align="center">🧬 Ruck 🏉</h1>
    <p align="center">
        <i>Performant evolutionary algorithms for Python</i>
    </p>
</p>

<p align="center">
    <a href="https://choosealicense.com/licenses/mit/">
        <img alt="license" src="https://img.shields.io/github/license/nmichlo/ruck?style=flat-square&color=lightgrey"/>
    </a>
    <a href="https://pypi.org/project/ruck">
        <img alt="python versions" src="https://img.shields.io/pypi/pyversions/ruck?style=flat-square"/>
    </a>
    <a href="https://pypi.org/project/ruck">
        <img alt="pypi version" src="https://img.shields.io/pypi/v/ruck?style=flat-square&color=blue"/>
    </a>
    <a href="https://github.com/nmichlo/ruck/actions?query=workflow%3Atest">
        <img alt="tests status" src="https://github.com/nmichlo/ruck/actions/workflows/python-test.yml/badge.svg"/>
    </a>
</p>

<p align="center">
    <p align="center">
        Visit the <a href="https://github.com/nmichlo/ruck/tree/main/examples/">examples</a> to get started, or browse the <a href="https://github.com/nmichlo/ruck/releases">releases</a>.
    </p>
    <p align="center">
        <a href="https://github.com/nmichlo/ruck/issues/new/choose">Contributions</a> are welcome!
    </p>
</p>

------------------------

## Goals

Ruck aims to fill the following criteria:

1. Provide **high quality**, **readable** implementations of algorithms.
2. Be easily **extensible** and **debuggable**.
3. Be performant while maintaining simplicity.

## Features

Ruck has various features that will be expanded upon in time:
- 📦 &nbsp; Modular evolutionary systems inspired by PyTorch Lightning
  + Helps organise code & arguably looks clean
- 🎯 &nbsp; Multi-objective optimisation support
  + Optionally optimised version of NSGA-II if `numba` is installed, over 65x faster than the DEAP equivalent
- 🏎 &nbsp; Optional multiprocessing support with `ray`, including helper functions
- 🏭 &nbsp; Factory methods for simple evolutionary algorithms
- 🧪 &nbsp; Various helper functions for selection, mutation and mating


## Citing Ruck

Please use the following citation if you use Ruck in your research:

```bibtex
@Misc{Michlo2021Ruck,
  author =       {Nathan Juraj Michlo},
  title =        {Ruck - Performant evolutionary algorithms for Python},
  howpublished = {Github},
  year =         {2021},
  url =          {https://github.com/nmichlo/ruck}
}
```

## Overview

Ruck takes inspiration from PyTorch Lightning's module system. The population creation,
offspring, evaluation and selection steps are all contained within a single module inheriting
from `EaModule`, while the training logic and components are separated into their own classes.

`Member`s of a `Population` (a list of `Member`s) are intended to be read-only. Modifications should not
be made to members; instead, new members should be created with the modified values. This enables us to
easily implement efficient multiprocessing, see below!

The trainer automatically constructs `HallOfFame` and `LogBook` objects which keep track of your
population and offspring. `EaModule` provides defaults for `get_stats_groups` and `get_progress_stats`
that can be overridden if you wish to customise the tracked statistics and the statistics displayed by tqdm.


### Minimal OneMax Example

```python
import random
import numpy as np
from ruck import *


class OneMaxMinimalModule(EaModule):
    """
    Minimal OneMax example
    - The goal is to flip all the bits of a boolean array to True
    - Offspring are generated as bit-flipped versions of the previous population
    - A selection tournament is performed between the previous population and the offspring
    """

    # evaluate unevaluated members according to their values
    def evaluate_values(self, values):
        return [v.sum() for v in values]

    # generate 300 random members of size 100 with 50% of bits flipped
    def gen_starting_values(self):
        return [np.random.random(100) < 0.5 for _ in range(300)]

    # randomly flip 5% of the bits of each member in the population
    # the previous population members should never be modified
    def generate_offspring(self, population):
        return [Member(m.value ^ (np.random.random(m.value.shape) < 0.05)) for m in population]

    # selection tournament between population and offspring
    def select_population(self, population, offspring):
        combined = population + offspring
        return [max(random.sample(combined, k=3), key=lambda m: m.fitness) for _ in range(len(population))]


if __name__ == '__main__':
    # create and train the population
    module = OneMaxMinimalModule()
    pop, logbook, halloffame = Trainer(generations=100, progress=True).fit(module)

    print('initial stats:', logbook[0])
    print('final stats:', logbook[-1])
    print('best member:', halloffame.members[0])
```

### Advanced OneMax Example

Ruck provides various helper functions and implementations of evolutionary algorithms for convenience.
The following example makes use of these additional features so that components and behaviour can
easily be swapped out.

The three basic evolutionary algorithms provided are based around [deap's](http://www.github.com/deap/deap)
default algorithms from `deap.algorithms`. These basic evolutionary algorithms can be created with
`ruck.functional.make_ea`. We provide the alias `ruck.R` for `ruck.functional`. `R.make_ea` supports
the following modes: `simple`, `mu_plus_lambda` and `mu_comma_lambda`.


<details><summary><b>Code Example</b></summary>
<p>

```python
"""
OneMax serial example based on:
https://github.com/DEAP/deap/blob/master/examples/ga/onemax_numpy.py
"""

import functools
import numpy as np
from ruck import *


class OneMaxModule(EaModule):

    def __init__(
        self,
        population_size: int = 300,
        offspring_num: int = None,  # offspring_num (lambda) is automatically set to population_size (mu) when `None`
        member_size: int = 100,
        p_mate: float = 0.5,
        p_mutate: float = 0.5,
        ea_mode: str = 'simple'
    ):
        # save the arguments to the .hparams property. Values are taken from the
        # local scope so modifications can be captured if the call to this is delayed.
        self.save_hyperparameters()
        # implement the required functions for `EaModule`
        self.generate_offspring, self.select_population = R.make_ea(
            mode=self.hparams.ea_mode,
            offspring_num=self.hparams.offspring_num,
            mate_fn=R.mate_crossover_1d,
            mutate_fn=functools.partial(R.mutate_flip_bit_groups, p=0.05),
            select_fn=functools.partial(R.select_tournament, k=3),
            p_mate=self.hparams.p_mate,
            p_mutate=self.hparams.p_mutate,
        )

    def evaluate_values(self, values):
        return map(np.sum, values)

    def gen_starting_values(self) -> Population:
        return [
            np.random.random(self.hparams.member_size) < 0.5
            for i in range(self.hparams.population_size)
        ]


if __name__ == '__main__':
    # create and train the population
    module = OneMaxModule(population_size=300, member_size=100)
    pop, logbook, halloffame = Trainer(generations=40, progress=True).fit(module)

    print('initial stats:', logbook[0])
    print('final stats:', logbook[-1])
    print('best member:', halloffame.members[0])
```

</p>
</details>

### Multiprocessing OneMax Example (Ray)

If we need to scale up the computational requirements, for example with increased
member and population sizes, the above serial implementations will soon run into performance problems.

The basic Ruck implementations of the various evolutionary algorithms are designed around a `map`
function that can be swapped out to add multiprocessing support. We can easily do this using
[ray](https://github.com/ray-project/ray), and we even provide various helper functions that
enhance ray support.

1. We begin by placing members' values into shared memory using ray's read-only object store
and the `ray.put` function. These [ObjectRef](https://docs.ray.io/en/latest/memory-management.html)s
point to the original `np.ndarray` values. When retrieved with `ray.get`, they obtain the original
arrays using an efficient zero-copy procedure. This is advantageous over something like Python's
multiprocessing module, which uses expensive pickle operations to pass data around.

2. The second step is to swap out the aforementioned `map` function in the previous example for a
multiprocessing equivalent. We use `ray.remote` along with `ray.get`, and provide the `ray_map` function,
which has the same API as Python's `map`, but accepts `ray.remote(my_fn).remote` values instead.

3. Finally, we need to update our `mate` and `mutate` functions to handle `ObjectRef`s. We provide a convenient
wrapper to automatically call `ray.put` on function results so that you can re-use your existing code.

<details><summary><b>Code Example</b></summary>
<p>

```python
"""
OneMax parallel example using ray's object store.

1 byte (bool) * 1_000_000 * 128 members ~= 128 MB of memory to store this population.
This is quite a bit of processing that needs to happen! But using ray
and its object store we can do this efficiently!
"""

from functools import partial
import numpy as np
import ray
from ruck import *
from ruck.external.ray import *


class OneMaxRayModule(EaModule):

    def __init__(
        self,
        population_size: int = 300,
        offspring_num: int = None,  # offspring_num (lambda) is automatically set to population_size (mu) when `None`
        member_size: int = 100,
        p_mate: float = 0.5,
        p_mutate: float = 0.5,
        ea_mode: str = 'mu_plus_lambda'
    ):
        self.save_hyperparameters()
        # implement the required functions for `EaModule`
        self.generate_offspring, self.select_population = R.make_ea(
            mode=self.hparams.ea_mode,
            offspring_num=self.hparams.offspring_num,
            # decorate the functions with `ray_remote_put(s)`, which automatically
            # `ray.get`s arguments that are `ObjectRef`s and `ray.put`s returned results
            mate_fn=ray_remote_puts(R.mate_crossover_1d).remote,
            mutate_fn=ray_remote_put(R.mutate_flip_bit_groups).remote,
            # efficient to compute locally
            select_fn=partial(R.select_tournament, k=3),
            p_mate=self.hparams.p_mate,
            p_mutate=self.hparams.p_mutate,
            # ENABLE multiprocessing
            map_fn=ray_map,
        )
        # cache the eval function on the class to prevent repeated calls to
        # ray.remote. We use ray.remote instead of ray_remote_put like above
        # because we want the returned values, not object refs to those values.
        self._ray_eval = ray.remote(np.mean).remote

    def evaluate_values(self, values):
        # values is a list of `ray.ObjectRef`s, not `np.ndarray`s
        # ray_map has the same API as map, but operates on `ray.remote` functions
        # and handles `ray.get`ing of `ray.ObjectRef`s passed as arguments
        return ray_map(self._ray_eval, values)

    def gen_starting_values(self):
        # generate objects and place them in ray's object store
        return [
            ray.put(np.random.random(self.hparams.member_size) < 0.5)
            for i in range(self.hparams.population_size)
        ]


if __name__ == '__main__':
    # initialize ray to use the specified system resources
    ray.init()

    # create and train the population
    module = OneMaxRayModule(population_size=128, member_size=1_000_000)
    pop, logbook, halloffame = Trainer(generations=200, progress=True).fit(module)

    print('initial stats:', logbook[0])
    print('final stats:', logbook[-1])
    print('best member:', halloffame.members[0])
```

</p>
</details>
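The tournament selection used throughout the examples above (`select_tournament` with `k=3`) follows the classic scheme: each slot in the new population is filled by the fittest of `k` randomly sampled entrants. The sketch below is an illustrative stand-in in plain Python; the function name and signature here are hypothetical and not ruck's actual implementation.

```python
import random


def select_tournament(population, num, k=3, key=lambda m: m):
    # fill `num` slots; each winner is the best of `k` randomly sampled entrants
    return [max(random.sample(population, k=k), key=key) for _ in range(num)]


# toy usage: the "members" are plain ints, where a larger value means fitter
random.seed(0)
winners = select_tournament(list(range(100)), num=5, k=3)
print(winners)
```

Because each tournament only looks at `k` entrants, weaker members still have a chance of surviving, which keeps diversity higher than simply taking the top `num` members.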
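The ray example above hinges on one design choice: the algorithm only depends on a function with the same API as Python's built-in `map`, so a parallel implementation can be dropped in without touching the rest of the code. A minimal sketch of that idea, using a thread pool as a stand-in for `ray_map` (the `evaluate_population` helper here is hypothetical, not part of ruck):

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate_population(values, map_fn=map):
    # the evaluation step only assumes `map_fn(fn, iterable)` behaves like
    # the built-in map, so serial and parallel backends are interchangeable
    return list(map_fn(sum, values))


values = [[1, 2, 3], [4, 5], [6]]

# serial evaluation using the built-in map
serial = evaluate_population(values)

# parallel evaluation via a thread pool (stand-in for ruck's ray_map)
with ThreadPoolExecutor(max_workers=2) as pool:
    parallel = evaluate_population(values, map_fn=pool.map)

print(serial, parallel)  # both are [6, 9, 6]
```

`Executor.map` already matches the built-in `map` API, which is why it slots in directly; `ray_map` plays the same role but expects `ray.remote(fn).remote` callables and `ObjectRef` values.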
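The NSGA-II support mentioned in the features is built on non-dominated sorting. As a minimal sketch of its core building block, here is Pareto dominance and extraction of the first front, assuming all objectives are maximised; this is illustrative pure Python, not ruck's numba-optimised implementation.

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective
    # and strictly better in at least one (maximisation convention)
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def first_front(fitnesses):
    # members not dominated by any other member form the first Pareto front
    return [f for f in fitnesses if not any(dominates(g, f) for g in fitnesses if g != f)]


# toy multi-objective fitnesses: (2, 2) is dominated by (2, 4), and (1, 1) by (1, 5)
points = [(1, 5), (2, 4), (3, 3), (2, 2), (0, 6), (1, 1)]
print(first_front(points))  # → [(1, 5), (2, 4), (3, 3), (0, 6)]
```

Full NSGA-II repeatedly peels off fronts like this and breaks ties within a front by crowding distance; the quadratic dominance checks here are exactly what optimised implementations (e.g. ruck's `numba` path) speed up.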