{"id":13595132,"url":"https://github.com/google/flax","last_synced_at":"2026-03-04T21:03:23.943Z","repository":{"id":37093874,"uuid":"233016299","full_name":"google/flax","owner":"google","description":"Flax is a neural network library for JAX that is designed for flexibility.","archived":false,"fork":false,"pushed_at":"2025-05-04T07:26:42.000Z","size":26101,"stargazers_count":6527,"open_issues_count":390,"forks_count":695,"subscribers_count":85,"default_branch":"main","last_synced_at":"2025-05-05T14:53:56.760Z","etag":null,"topics":["jax"],"latest_commit_sha":null,"homepage":"https://flax.readthedocs.io","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"contributing.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":"AUTHORS","dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-01-10T09:48:37.000Z","updated_at":"2025-05-04T10:56:07.000Z","dependencies_parsed_at":"2022-08-08T19:15:30.663Z","dependency_job_id":"d218760b-f599-4a55-8754-ca53cbb73f0c","html_url":"https://github.com/google/flax","commit_stats":{"total_commits":2454,"total_committers":215,"mean_commits":"11.413953488372092","dds":0.867563162184189,"last_synced_commit":"b3945f84b87602629f2ec7adad2bec69982fed2e"},"previous_names":[],"tags_count":50,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fflax","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fflax/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fflax/relea
ses","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google%2Fflax/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google","download_url":"https://codeload.github.com/google/flax/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253777317,"owners_count":21962664,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["jax"],"created_at":"2024-08-01T16:01:44.497Z","updated_at":"2026-03-04T21:03:23.932Z","avatar_url":"https://github.com/google.png","language":"Jupyter Notebook","readme":"\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/google/flax/main/images/flax_logo_250px.png\" alt=\"logo\"\u003e\u003c/img\u003e\n\u003c/div\u003e\n\n# Flax: A neural network library and ecosystem for JAX designed for flexibility\n\n[![Flax - Test](https://github.com/google/flax/actions/workflows/flax_test.yml/badge.svg)](https://github.com/google/flax/actions/workflows/flax_test.yml)\n[![PyPI version](https://img.shields.io/pypi/v/flax)](https://pypi.org/project/flax/)\n\n[**Overview**](#overview)\n| [**Quick install**](#quick-install)\n| [**What does Flax look like?**](#what-does-flax-look-like)\n| [**Documentation**](https://flax.readthedocs.io/)\n\nReleased in 2024, Flax NNX is a new simplified Flax API that is designed to make\nit easier to create, inspect, debug, and analyze neural networks in\n[JAX](https://jax.readthedocs.io/). It achieves this by adding first class support\nfor Python reference semantics. 
This allows users to express their models using\nregular Python objects, enabling reference sharing and mutability.\n\nFlax NNX evolved from the [Flax Linen API](https://flax-linen.readthedocs.io/), which\nwas released in 2020 by engineers and researchers at Google Brain in close collaboration\nwith the JAX team.\n\nYou can learn more about Flax NNX on the [dedicated Flax documentation site](https://flax.readthedocs.io/). Make sure you check out:\n\n* [Flax NNX basics](https://flax.readthedocs.io/en/latest/nnx_basics.html)\n* [MNIST tutorial](https://flax.readthedocs.io/en/latest/mnist_tutorial.html)\n* [Why Flax NNX](https://flax.readthedocs.io/en/latest/why.html)\n* [Evolution from Flax Linen to Flax NNX](https://flax.readthedocs.io/en/latest/guides/linen_to_nnx.html)\n\n**Note:** Flax Linen's [documentation has its own site](https://flax-linen.readthedocs.io/).\n\nThe Flax team's mission is to serve the growing JAX neural network\nresearch ecosystem - both within Alphabet and with the broader community,\nand to explore the use cases where JAX shines. We use GitHub for almost\nall of our coordination and planning, as well as where we discuss\nupcoming design changes. We welcome feedback on any of our discussion,\nissue, and pull request threads.\n\nYou can make feature requests, let us know what you are working on,\nreport issues, and ask questions in our [Flax GitHub discussion\nforum](https://github.com/google/flax/discussions).\n\nWe expect to improve Flax, but we don't anticipate significant\nbreaking changes to the core API. 
We use [Changelog](https://github.com/google/flax/tree/main/CHANGELOG.md)\nentries and deprecation warnings when possible.\n\nIn case you want to reach us directly, we're at flax-dev@google.com.\n\n## Overview\n\nFlax is a high-performance neural network library and ecosystem for\nJAX that is **designed for flexibility**:\nTry new forms of training by forking an example and by modifying the training\nloop, not by adding features to a framework.\n\nFlax is being developed in close collaboration with the JAX team and\ncomes with everything you need to start your research, including:\n\n* **Neural network API** (`flax.nnx`): Including [`Linear`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/linear.html#flax.nnx.Linear), [`Conv`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/linear.html#flax.nnx.Conv), [`BatchNorm`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/normalization.html#flax.nnx.BatchNorm), [`LayerNorm`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/normalization.html#flax.nnx.LayerNorm), [`GroupNorm`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/normalization.html#flax.nnx.GroupNorm), [Attention](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/attention.html) ([`MultiHeadAttention`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/attention.html#flax.nnx.MultiHeadAttention)), [`LSTMCell`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/recurrent.html#flax.nnx.nn.recurrent.LSTMCell), [`GRUCell`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/recurrent.html#flax.nnx.nn.recurrent.GRUCell), [`Dropout`](https://flax.readthedocs.io/en/latest/api_reference/flax.nnx/nn/stochastic.html#flax.nnx.Dropout).\n\n* **Utilities and patterns**: replicated training, serialization and checkpointing, metrics, prefetching on device.\n\n* **Educational examples**: [MNIST](https://flax.readthedocs.io/en/latest/mnist_tutorial.html), 
[Inference/sampling with the Gemma language model (transformer)](https://github.com/google/flax/tree/main/examples/gemma).\n\n## Quick install\n\nFlax uses JAX, so check out the [JAX installation instructions for CPUs, GPUs and TPUs](https://jax.readthedocs.io/en/latest/installation.html).\n\nYou will need Python 3.8 or later. Install Flax from PyPI:\n\n```\npip install flax\n```\n\nTo upgrade to the latest version of Flax, you can use:\n\n```\npip install --upgrade git+https://github.com/google/flax.git\n```\n\nTo install some additional dependencies (like `matplotlib`) that are required by some dependencies\nbut not included by default, you can use:\n\n```bash\npip install \"flax[all]\"\n```\n\n## What does Flax look like?\n\nWe provide three examples using the Flax API: a simple multi-layer perceptron, a CNN, and an autoencoder.\n\nTo learn more about the `Module` abstraction, check out our [docs](https://flax.readthedocs.io/) and our [broad intro to the Module abstraction](https://github.com/google/flax/blob/main/docs/linen_intro.ipynb). 
For additional concrete demonstrations of best practices, refer to our\n[guides](https://flax.readthedocs.io/en/latest/guides/index.html) and\n[developer notes](https://flax.readthedocs.io/en/latest/developer_notes/index.html).\n\nExample of an MLP:\n\n```py\nclass MLP(nnx.Module):\n  def __init__(self, din: int, dmid: int, dout: int, *, rngs: nnx.Rngs):\n    self.linear1 = nnx.Linear(din, dmid, rngs=rngs)\n    self.dropout = nnx.Dropout(rate=0.1, rngs=rngs)\n    self.bn = nnx.BatchNorm(dmid, rngs=rngs)\n    self.linear2 = nnx.Linear(dmid, dout, rngs=rngs)\n\n  def __call__(self, x: jax.Array):\n    x = nnx.gelu(self.dropout(self.bn(self.linear1(x))))\n    return self.linear2(x)\n```\n\nExample of a CNN:\n\n```py\nclass CNN(nnx.Module):\n  def __init__(self, *, rngs: nnx.Rngs):\n    self.conv1 = nnx.Conv(1, 32, kernel_size=(3, 3), rngs=rngs)\n    self.conv2 = nnx.Conv(32, 64, kernel_size=(3, 3), rngs=rngs)\n    self.avg_pool = partial(nnx.avg_pool, window_shape=(2, 2), strides=(2, 2))\n    self.linear1 = nnx.Linear(3136, 256, rngs=rngs)\n    self.linear2 = nnx.Linear(256, 10, rngs=rngs)\n\n  def __call__(self, x):\n    x = self.avg_pool(nnx.relu(self.conv1(x)))\n    x = self.avg_pool(nnx.relu(self.conv2(x)))\n    x = x.reshape(x.shape[0], -1)  # flatten\n    x = nnx.relu(self.linear1(x))\n    x = self.linear2(x)\n    return x\n```\n\nExample of an autoencoder:\n\n\n```py\nEncoder = lambda rngs: nnx.Linear(2, 10, rngs=rngs)\nDecoder = lambda rngs: nnx.Linear(10, 2, rngs=rngs)\n\nclass AutoEncoder(nnx.Module):\n  def __init__(self, rngs):\n    self.encoder = Encoder(rngs)\n    self.decoder = Decoder(rngs)\n\n  def __call__(self, x) -\u003e jax.Array:\n    return self.decoder(self.encoder(x))\n\n  def encode(self, x) -\u003e jax.Array:\n    return self.encoder(x)\n```\n\n## Citing Flax\n\nTo cite this repository:\n\n```\n@software{flax2020github,\n  author = {Jonathan Heek and Anselm Levskaya and Avital Oliver and Marvin Ritter and Bertrand Rondepierre and Andreas 
Steiner and Marc van {Z}ee},\n  title = {{F}lax: A neural network library and ecosystem for {JAX}},\n  url = {http://github.com/google/flax},\n  version = {0.12.4},\n  year = {2020},\n}\n```\n\nIn the above BibTeX entry, names are in alphabetical order, the version number\nis intended to be that from [flax/version.py](https://github.com/google/flax/blob/main/flax/version.py), and the year corresponds to the project's open-source release.\n\n## Note\n\nFlax is an open-source project maintained by a dedicated team at Google DeepMind, but it is not an official Google product.\n\n","funding_links":[],"categories":["Jupyter Notebook","Deep Learning","机器学习框架","Machine Learning Framework","Python","其他_机器学习与深度学习","Libraries","Computation and Communication Optimisation","Researchers","Data Science"],"sub_categories":["JAX","General Purpose Framework","Frameworks","Machine Learning"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle%2Fflax","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogle%2Fflax","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle%2Fflax/lists"}