{"id":13510836,"url":"https://github.com/google-deepmind/enn","last_synced_at":"2025-06-17T00:40:00.755Z","repository":{"id":41840805,"uuid":"380959510","full_name":"google-deepmind/enn","owner":"google-deepmind","description":null,"archived":false,"fork":false,"pushed_at":"2025-03-05T03:00:44.000Z","size":1480,"stargazers_count":309,"open_issues_count":16,"forks_count":61,"subscribers_count":13,"default_branch":"master","last_synced_at":"2025-03-30T17:44:49.644Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-06-28T08:25:35.000Z","updated_at":"2025-03-29T13:05:03.000Z","dependencies_parsed_at":"2023-09-07T20:34:47.417Z","dependency_job_id":"273711d6-971a-4968-bf15-f6ec9d6171fa","html_url":"https://github.com/google-deepmind/enn","commit_stats":null,"previous_names":["google-deepmind/enn","deepmind/enn"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/enn","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fenn","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fenn/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fenn/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fenn/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/enn/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fenn/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":260268635,"owners_count":22983601,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T02:01:55.980Z","updated_at":"2025-06-17T00:40:00.727Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","funding_links":[],"categories":["Python","others"],"sub_categories":[],"readme":"# Epistemic Neural Networks\n\n\u003e A library for neural networks that know what they don't know.\n\nFor background information, please see the [paper]\n\n## Introduction\n\nConventional neural networks generate *marginal* predictions: given one input, they predict one label.\nIf a neural network outputs probability 50:50 it remains unclear if that is because of genuine ambiguity in the input, or just because the neural network has insufficient training data.\nThese two possibilities would be distinguished by *joint* predictions: given multiple inputs, predict multiple labels.\n\n![rabbit or duck](statics/images/rabbit_duck.png)\n\n\nAn epistemic neural network (ENN) makes predictions given a single input `x`, but also an epistemic index `z`.\nThe ENN controls the index `z` and uses it to produce joint predictions over multiple inputs `x_1,..,x_t` which may be different from just the product of marginals.\n\n![nn diagrams](statics/images/enn_diagrams.png)\n\nAn ENN provides a general interface for thinking about uncertainty estimation in deep learning.\nNote that, all existing approaches to uncertainty modeling, such as Bayesian neural networks (BNNs), can be expressed as ENNs.\nHowever, there are ENN architectures that are not natural to express and BNNs.\nThis library provides interfaces and tools for the design and training of ENNs.\n\n\n## Technical overview\n\n\u003e The `enn` library provides a lightweight interface for ENNs implemented on top of [JAX](https://github.com/google/jax) and [Haiku](https://github.com/deepmind/dm-haiku).\nIf you want to use our `enn` library, we highly recommend you start by familiarizing yourself with these libraries first.\n\n\nWe outline the key high-level interfaces for our code in [base.py](enn/base.py):\n\n- `EpistemicNetwork`: a convenient pairing of Haiku transformed + index sampler.\n  - `apply`: haiku-style apply function taking `params, x, z -\u003e f_params(x,z)``\n  - `init`: haiku-style init function taking `key, x, z -\u003e params_init`\n  - `indexer`: generates a sample from the reference index distribution taking `key -\u003e z`.\n- `LossFn`: Given an ENN, parameters, and data: how to compute a loss.\n  - Takes: `enn, params, batch, key`\n  - Outputs: `loss, metrics`\n\nWe then use these high-level concepts to build and train ENNs.\n\n\n## Getting started\n\nYou can get started in our [colab tutorial] without installing anything on your\nmachine.\n\n\n### Installation\n\nWe have tested `ENN` on Python 3.7. To install the dependencies:\n\n1.  **Optional**: We recommend using a\n    [Python virtual environment](https://docs.python.org/3/tutorial/venv.html)\n    to manage your dependencies, so as not to clobber your system installation:\n\n    ```bash\n    python3 -m venv enn\n    source enn/bin/activate\n    pip install --upgrade pip setuptools\n    ```\n\n2.  Install `ENN` directly from [github](https://github.com/deepmind/enn):\n\n    ```bash\n    pip install git+https://github.com/deepmind/enn\n    ```\n3.  Test that you can load `ENN` by training a simple ensemble ENN.\n\n    ```python\n    from enn.loggers import TerminalLogger\n\n    from enn import losses\n    from enn import networks\n    from enn import supervised\n    from enn.supervised import regression_data\n    import optax\n\n    # A small dummy dataset\n    dataset = regression_data.make_dataset()\n\n    # Logger\n    logger = TerminalLogger('supervised_regression')\n\n    # ENN\n    enn = networks.MLPEnsembleMatchedPrior(\n        output_sizes=[50, 50, 1],\n        num_ensemble=10,\n    )\n\n    # Loss\n    loss_fn = losses.average_single_index_loss(\n        single_loss=losses.L2LossWithBootstrap(),\n        num_index_samples=10\n    )\n\n    # Optimizer\n    optimizer = optax.adam(1e-3)\n\n    # Train the experiment\n    experiment = supervised.Experiment(\n        enn, loss_fn, optimizer, dataset, seed=0, logger=logger)\n    experiment.train(FLAGS.num_batch)\n    ```\n\n4. **Optional**: run the tests by executing `./test.sh` from ENN root directory.\n\n\n## Epinet\n\nOne of the key contributions of our [paper] is the *epinet*: a new ENN architecture that can supplement any conventional NN and be trained to estimate uncertainty.\n\n\nAn epinet is a neural network with privileged access to inputs and outputs of activation units in the base network.\nA subset of these inputs and outputs, denoted by $\\phi_\\zeta(x)$, are taken as input to the epinet along with an epistemic index $z$.\nFor epinet parameters $\\eta$, the epinet outputs $\\sigma_\\eta(\\phi_\\zeta(x), z)$.\nTo produce an ENN, the output of the epinet is added to that of the base network, though with a \"stop gradient\" written $[[\\cdot]]$:\n\n$$ f_\\theta(x, z) = \\mu_\\zeta(x) + \\sigma_\\eta([[\\phi_\\zeta(x)]], z). $$\n\n\nWe can visualize this network architecture:\n\n![epinet diagram](statics/images/epinet_new.png)\n\nAs part of our release include an [epinet colab] that loads in a pre-trained base network and epinet on ImageNet.\nThe core network logic for epinet is available in [networks/epinet](enn/networks/epinet/README.md).\n\n\n\n## Citing\n\nIf you use `ENN` in your work, please cite the accompanying [paper]:\n\n```bibtex\n@article{osband2022epistemic,\n  title={Epistemic neural networks},\n  author={Osband, Ian and Wen, Zheng and Asghari, Seyed Mohammad and Dwaracherla, Vikranth and Ibrahimi, Morteza and Lu, Xiuyuan and Van Roy, Benjamin},\n  journal={arXiv preprint arXiv:2107.08924},\n  year={2022}\n}\n```\n\n[colab tutorial]: https://colab.research.google.com/github/deepmind/enn/blob/master/enn/colabs/enn_demo.ipynb\n[epinet colab]: https://colab.research.google.com/github/deepmind/enn/blob/master/enn/colabs/epinet_demo.ipynb\n[paper]: https://arxiv.org/abs/2107.08924\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Fenn","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogle-deepmind%2Fenn","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Fenn/lists"}