{"id":39663136,"url":"https://github.com/zombie-einstein/esquilax","last_synced_at":"2026-01-18T09:25:52.732Z","repository":{"id":256291114,"uuid":"854812314","full_name":"zombie-einstein/esquilax","owner":"zombie-einstein","description":"JAX Multi-Agent RL, Neuro-Evolution, and A-Life Library","archived":false,"fork":false,"pushed_at":"2025-07-02T15:45:42.000Z","size":654,"stargazers_count":10,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-07-02T16:42:33.782Z","etag":null,"topics":["alife","jax","multi-agent","multi-agent-reinforcement-learning","multi-agent-simulation","multi-agent-systems","neuroevolution","reinforcement-learning","reinforcement-learning-environments","simulation"],"latest_commit_sha":null,"homepage":"https://zombie-einstein.github.io/esquilax/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zombie-einstein.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-09-09T20:21:39.000Z","updated_at":"2025-07-02T15:43:00.000Z","dependencies_parsed_at":"2024-11-18T22:48:14.022Z","dependency_job_id":"9ef08e36-aeff-4b81-8362-87179312f7a7","html_url":"https://github.com/zombie-einstein/esquilax","commit_stats":null,"previous_names":["zombie-einstein/esquilax"],"tags_count":16,"template":false,"template_full_name":null,"purl":"pkg:github/zombie-einstein/esquilax","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zombie-einstein%2Fesquilax","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repo
sitories/zombie-einstein%2Fesquilax/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zombie-einstein%2Fesquilax/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zombie-einstein%2Fesquilax/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zombie-einstein","download_url":"https://codeload.github.com/zombie-einstein/esquilax/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zombie-einstein%2Fesquilax/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28534159,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-18T00:39:45.795Z","status":"online","status_checked_at":"2026-01-18T02:00:07.578Z","response_time":98,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["alife","jax","multi-agent","multi-agent-reinforcement-learning","multi-agent-simulation","multi-agent-systems","neuroevolution","reinforcement-learning","reinforcement-learning-environments","simulation"],"created_at":"2026-01-18T09:25:52.612Z","updated_at":"2026-01-18T09:25:52.699Z","avatar_url":"https://github.com/zombie-einstein.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/zombie-einstein/esquilax/raw/main/.github/images/text_logo.png\" /\u003e\n  \u003cbr\u003e\n  \u003cem\u003eJAX Multi-Agent RL, A-Life, and Simulation 
Framework\u003c/em\u003e\n\u003c/div\u003e\n\u003cbr\u003e\n\nEsquilax is a set of transformations and utilities\nintended to allow developers and researchers to\nquickly implement models of multi-agent systems\nfor RL training, evolutionary methods, and A-Life.\n\nIt is intended for systems involving large numbers of\nagents, and to work alongside other JAX packages\nlike [Flax](https://github.com/google/flax) and\n[Evosax](https://github.com/RobertTLange/evosax).\n\n**Full documentation can be found\n[here](https://zombie-einstein.github.io/esquilax/)**\n\n## Features\n\n- ***Built on top of JAX***\n\n  This brings the benefits of JAX: high performance, built-in\n  GPU support, etc., but also means Esquilax can interoperate\n  with existing JAX ML and RL libraries.\n\n- ***Interaction Algorithm Implementations***\n\n  Implements common agent interaction patterns. This\n  allows users to concentrate on model design instead of low-level\n  algorithm implementation details.\n\n- ***Scale and Performance***\n\n  JIT compilation and GPU support enable simulations and multi-agent\n  systems containing large numbers of agents whilst maintaining\n  performance and training throughput.\n\n- ***Functional Patterns***\n\n  Esquilax is designed around functional patterns, ensuring models\n  can be readily parallelised, but also aiding composition\n  and readability.\n\n- ***Built-in RL and Evolutionary Training***\n\n  Esquilax provides functionality for running multi-agent RL\n  and multi-strategy neuro-evolution training within Esquilax\n  simulations.\n\n## Should I Use Esquilax?\n\nEsquilax is intended for time-stepped models of large-scale systems\nwith fixed numbers of entities, where state is updated in parallel.\nAs such, you should probably *not* use Esquilax if:\n\n- You want to use something other than stepped updates, e.g.\n  continuous time, event-driven models, or where agents are intended to\n  update in sequence.\n- You need variable numbers of entities or temporary entities, e.g.\n  message passing.\n- You need a high-fidelity physics/robotics simulation.\n\n## Getting Started\n\nEsquilax can be installed from PyPI using\n\n```bash\npip install esquilax\n```\n\nThe requirements for evolutionary and RL training are\nnot installed by default. They can be installed using the `evo` and `rl`\nextras respectively, e.g.:\n\n```bash\npip install esquilax[evo]\n```\n\nYou may need to manually install JAXlib, especially for GPU support.\nInstallation instructions for JAX can be found\n[here](https://github.com/google/jax?tab=readme-ov-file#installation).\n\n## Examples\n\nExample models and multi-agent policy training implemented using Esquilax\ncan be found [here](https://github.com/zombie-einstein/esquilax/tree/main/examples). A virtual environment with additional\ndependencies for the examples can be set up using [poetry](https://python-poetry.org/)\nwith\n\n```bash\npoetry install --extras all --with examples\n```\n\nFor a project using Esquilax see\n[Floxs](https://github.com/zombie-einstein/floxs), a collection of multi-agent\nRL flock/swarm environments, or\n[this](https://github.com/instadeepai/jumanji/tree/main/jumanji/environments/swarms/search_and_rescue)\nmulti-agent RL environment, part of the [Jumanji](https://github.com/instadeepai/jumanji)\nRL environment library.\n\n## Contributing\n\n### Issues\n\nPlease report any issues or feature suggestions\n[here](https://github.com/zombie-einstein/esquilax/issues).\n\n### Developers\n\nDeveloper notes can be found\n[here](https://github.com/zombie-einstein/esquilax/blob/main/.github/docs/developers.md).\nEsquilax is under active development and contributions are very welcome!\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzombie-einstein%2Fesquilax","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzombie-einstein%2Fesquilax","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzombie-einstein%2Fesquilax/lists"}