{"id":13521028,"url":"https://github.com/google-deepmind/optax","last_synced_at":"2026-04-01T20:32:55.300Z","repository":{"id":37030134,"uuid":"271834742","full_name":"google-deepmind/optax","owner":"google-deepmind","description":"Optax is a gradient processing and optimization library for JAX.","archived":false,"fork":false,"pushed_at":"2026-03-20T23:51:28.000Z","size":8965,"stargazers_count":2220,"open_issues_count":78,"forks_count":317,"subscribers_count":32,"default_branch":"main","last_synced_at":"2026-03-21T13:55:42.664Z","etag":null,"topics":["machine-learning","optimization"],"latest_commit_sha":null,"homepage":"https://optax.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2020-06-12T15:45:35.000Z","updated_at":"2026-03-21T06:18:34.000Z","dependencies_parsed_at":"2023-09-07T20:34:57.101Z","dependency_job_id":"dba95125-b055-4fac-9c64-e542e14f51fc","html_url":"https://github.com/google-deepmind/optax","commit_stats":null,"previous_names":["google-deepmind/optax","deepmind/optax"],"tags_count":24,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/optax","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Foptax","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Foptax/tags","releases_url":"https://repos.e
cosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Foptax/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Foptax/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/optax/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Foptax/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31291678,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["machine-learning","optimization"],"created_at":"2024-08-01T06:00:26.608Z","updated_at":"2026-04-01T20:32:55.273Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","readme":"# Optax\n\n![CI status](https://github.com/google-deepmind/optax/actions/workflows/tests.yml/badge.svg?branch=main)\n[![Documentation Status](https://readthedocs.org/projects/optax/badge/?version=latest)](http://optax.readthedocs.io)\n![pypi](https://img.shields.io/pypi/v/optax)\n\n## Introduction\n\nOptax is a gradient processing and optimization library for JAX.\n\nOptax is designed to facilitate research by providing building 
blocks\nthat can be easily recombined in custom ways.\n\nOur goals are to\n\n*   Provide simple, well-tested, efficient implementations of core components.\n*   Improve research productivity by making it easy to combine low-level\n    ingredients into custom optimizers (or other gradient-processing components).\n*   Accelerate adoption of new ideas by making it easy for anyone to contribute.\n\n
We favor small, composable building blocks that can be effectively\ncombined into custom solutions. Others may build upon these basic components\nin more complicated abstractions. Whenever reasonable, implementations prioritize\nreadability and structure the code to match the standard equations, over code reuse.\n\n
An initial prototype of this library was made available in JAX's experimental\nfolder as `jax.experimental.optix`. Given the wide adoption of `optix` across\nDeepMind, and after a few iterations on the API, `optix` was eventually moved\nout of `experimental` as a standalone open-source library and renamed `optax`.\n\n
Documentation on Optax can be found at [optax.readthedocs.io](https://optax.readthedocs.io/).\n\n## Installation\n\nYou can install the latest released version of Optax from PyPI via:\n\n```sh\npip install optax\n```\n\nor you can install the latest development version from GitHub:\n\n```sh\npip install git+https://github.com/google-deepmind/optax.git\n```\n\n
## Quickstart\n\nOptax contains implementations of [many popular optimizers](https://optax.readthedocs.io/en/latest/api/optimizers.html) and\n[loss functions](https://optax.readthedocs.io/en/latest/api/losses.html).\nFor example, the following code snippet uses the Adam optimizer from `optax.adam`\nand the squared-error loss from `optax.l2_loss`.\n
We initialize the optimizer\nstate by calling the `init` function with the model's `params`.\n\n
```python\noptimizer = optax.adam(learning_rate)\nparams = {'w': jnp.ones((num_weights,))}\n# Obtain the `opt_state` that contains statistics for the optimizer.\nopt_state = optimizer.init(params)\n```\n\n
To write the update loop, we need a loss function that can be differentiated by\nJAX (with `jax.grad` in this\nexample) to obtain the gradients.\n\n
```python\n# Take the mean so the loss is a scalar, as `jax.grad` requires.\ncompute_loss = lambda params, x, y: optax.l2_loss(params['w'].dot(x), y).mean()\ngrads = jax.grad(compute_loss)(params, xs, ys)\n```\n\n
The gradients are then transformed via `optimizer.update` into the updates\nthat should be applied to the current parameters to obtain the new ones.\n`optax.apply_updates` is a convenience utility for doing this.\n\n
```python\nupdates, opt_state = optimizer.update(grads, opt_state)\nparams = optax.apply_updates(params, updates)\n```\n\n
You can continue the quickstart in [the Optax 🚀 Getting started notebook](https://github.com/google-deepmind/optax/blob/main/docs/getting_started.ipynb).\n\n
## Development\n\nWe welcome issue reports and pull requests that fix issues or improve\nexisting functionality. If you are interested in adding a feature such as a new\noptimizer, **open an issue first**! We are focused on making Optax more\nflexible, versatile, and easy to use, so that you can define your own optimizers.\n\n
### Source code\n\nYou can check out the latest sources with the following command:\n\n```sh\ngit clone https://github.com/google-deepmind/optax.git\n```\n\n### Testing\n\nTo run the tests, execute the following script:\n\n```sh\nsh test.sh\n```\n\n
### Documentation\n\nTo build the documentation, first ensure that all the dependencies are installed:\n\n```sh\npip install -e \".[docs]\"\n```\n\nThen, execute the following:\n\n```sh\ncd docs\nmake html\n```\n\n
### Benchmarking\n\nSome benchmarks:\n\n- [Benchmarking Neural Network Training Algorithms, Dahl G.\n
et al, 2023](https://arxiv.org/pdf/2306.07179).\n\n- [Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers, Schmidt R. et al, 2021](https://proceedings.mlr.press/v139/schmidt21a).\n\n
Making your own benchmark:\n\n- [Benchopt: Reproducible, efficient and collaborative optimization benchmarks, Moreau T. et al, 2022](https://arxiv.org/abs/2206.13424).\n\n
Optimizer tuning handbook:\n\n- [Deep Learning Tuning Playbook, Godbole V. et al, 2023](https://github.com/google-research/tuning_playbook).\n\n
### Other optimization-adjacent libraries in JAX\n\n- [optimistix](https://github.com/patrick-kidger/optimistix): nonlinear solvers for root finding, minimisation, fixed points, and least squares.\n\n- [matfree](https://github.com/pnkraemer/matfree):\nmatrix-free methods, useful for studying curvature dynamics in deep learning.\n\n
## Citing Optax\n\nThis repository is part of the DeepMind JAX Ecosystem. To cite Optax,\nplease use:\n\n
```bibtex\n@software{deepmind2020jax,\n  title = {The {D}eep{M}ind {JAX} {E}cosystem},\n  author = {DeepMind and Babuschkin, Igor and Baumli, Kate and Bell, Alison and Bhupatiraju, Surya and Bruce, Jake and Buchlovsky, Peter and Budden, David and Cai, Trevor and Clark, Aidan and Danihelka, Ivo and Dedieu, Antoine and Fantacci, Claudio and Godwin, Jonathan and Jones, Chris and Hemsley, Ross and Hennigan, Tom and Hessel, Matteo and Hou, Shaobo and Kapturowski, Steven and Keck, Thomas and Kemaev, Iurii and King, Michael and Kunesch, Markus and Martens, Lena and Merzic, Hamza and Mikulik, Vladimir and Norman, Tamara and Papamakarios, George and Quan, John and Ring, Roman and Ruiz, Francisco and Sanchez, Alvaro and Sartran, Laurent and Schneider, Rosalia and Sezener, Eren and Spencer, Stephen and Srinivasan, Srivatsan and Stanojevi\'{c}, Milo\v{s} and Stokowiec, Wojciech and Wang, Luyu and Zhou, Guangyao and Viola, Fabio},\n  url = {http://github.com/google-deepmind},\n  year =\n
{2020},\n}\n```\n","funding_links":[],"categories":["See also: other libraries in the JAX ecosystem","Deep Learning","Python","Implementation in JAX","Libraries"],"sub_categories":["JAX","Other"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Foptax","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogle-deepmind%2Foptax","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Foptax/lists"}