joerihermans/amortized-experimental-design
A demonstration of how to enable active sciencing by amortizing Bayesian experimental design.
https://github.com/joerihermans/amortized-experimental-design
- Host: GitHub
- URL: https://github.com/joerihermans/amortized-experimental-design
- Owner: JoeriHermans
- License: bsd-3-clause
- Created: 2021-09-12T09:15:19.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-09-12T10:19:52.000Z (over 3 years ago)
- Last Synced: 2024-10-28T21:49:03.678Z (about 2 months ago)
- Language: Jupyter Notebook
- Size: 5.49 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/JoeriHermans/amortized-experimental-design/master)
## The idea
Which **experimental configuration** yields the largest expected gain in information?
The utility can be expressed as the expected reduction in entropy between the prior and the posterior, i.e.,

$$U(d) = \mathbb{E}_{p(x \mid d)} \big[ H[p(\theta)] - H[p(\theta \mid x, d)] \big].$$
This quantity is especially challenging to compute because the posterior is intractable in most practical applications. We can, however, draw samples from the likelihood model (our simulator).
We seek to obtain the experimental configuration which maximizes the utility:

$$d^* = \arg\max_d \, U(d).$$

**Problem**: for every evaluation of the utility, the simulator needs to be called, because the expectation depends on the experimental configuration $d$. Slow and cumbersome! The naive estimator sketched below makes this cost explicit.
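To see why, here is a minimal nested Monte Carlo estimator of the EIG for a hypothetical one-dimensional Gaussian simulator. Everything in this sketch (the simulator, the prior, and the use of PyTorch) is an illustrative assumption, not the notebook's actual model:

```python
import math

import torch


def simulate(theta, d):
    # Hypothetical toy simulator: the design d scales the signal before unit
    # Gaussian noise is added, so larger |d| yields more informative data.
    return d * theta + torch.randn_like(theta)


def log_likelihood(x, theta, d):
    # Analytic likelihood of the toy simulator: x | theta, d ~ Normal(d * theta, 1).
    return torch.distributions.Normal(d * theta, 1.0).log_prob(x)


def naive_eig(d, prior, num_outer=256, num_inner=256):
    # Nested Monte Carlo estimate of the EIG, written as the equivalent
    # mutual information E_{p(theta, x | d)}[log p(x | theta, d) - log p(x | d)].
    theta = prior.sample((num_outer, 1))
    x = simulate(theta, d)  # fresh simulator calls for THIS d: the bottleneck
    log_lik = log_likelihood(x, theta, d)
    # Inner Monte Carlo approximation of the marginal log p(x | d).
    theta_inner = prior.sample((1, num_inner))
    log_marginal = torch.logsumexp(
        log_likelihood(x, theta_inner, d), dim=1, keepdim=True
    ) - math.log(num_inner)
    return (log_lik - log_marginal).mean()


prior = torch.distributions.Normal(0.0, 1.0)
print(naive_eig(torch.tensor(2.0), prior))  # re-simulates at every evaluation
```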
**Proposal**: reweigh the samples from the joint to approximate the expected information gain with several ratio estimators that can be trained on samples from the joint alone! A minimal version of such a ratio estimator is sketched after this paragraph.
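In this sketch, reusing the toy simulator above, a classifier is trained to distinguish $(\theta, x)$ pairs drawn from the joint from pairs in which $\theta$ is resampled from the prior, so that its logit approximates the log posterior-to-prior ratio $\log p(\theta \mid x, d) / p(\theta)$. The architecture, design distribution, and training loop are illustrative assumptions; the notebook combines several such estimators:

```python
import torch
from torch import nn


class RatioEstimator(nn.Module):
    # Classifier whose logit approximates the log posterior-to-prior ratio
    # log [ p(theta | x, d) / p(theta) ], conditioned on the design d.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def log_ratio(self, x, theta, d):
        return self.net(torch.cat([x, theta, d], dim=1))


def train_ratio_estimator(prior, num_steps=2000, batch_size=256):
    # Training needs samples from the joint alone: dependent (theta, x) pairs
    # are labelled 1, pairs with theta resampled from the prior (dependency
    # broken) are labelled 0. The optimal classifier's logit is the log ratio.
    estimator = RatioEstimator()
    optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(num_steps):
        d = 4.0 * torch.rand(batch_size, 1) - 2.0       # designs drawn at random
        theta = prior.sample((batch_size, 1))
        x = simulate(theta, d)                          # joint samples, reusable
        theta_marginal = prior.sample((batch_size, 1))  # breaks the dependency
        logits = torch.cat([
            estimator.log_ratio(x, theta, d),
            estimator.log_ratio(x, theta_marginal, d),
        ])
        labels = torch.cat([torch.ones(batch_size, 1), torch.zeros(batch_size, 1)])
        optimizer.zero_grad()
        criterion(logits, labels).backward()
        optimizer.step()
    return estimator
```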
In doing so, we can estimate the EIG for every experimental configuration by reusing presimulated samples for specific experimental configurations. In addition, this specification allows us to maximize the expected information gain through gradient ascent, by simply backpropagating through the neural networks; a sketch follows below. For more details, check the notebook!
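A sketch of that gradient ascent step, reusing the toy simulator and ratio estimator from the previous snippets: the EIG estimate flows only through the network (and the reparameterized toy simulator), so it is differentiable with respect to the design. For simplicity this sketch re-simulates per design instead of reweighing a presimulated sample bank:

```python
import torch


def estimated_eig(estimator, d, prior, num_samples=1024):
    # Amortized EIG estimate: U(d) ~= E_{p(theta, x | d)}[log r(x, theta, d)].
    theta = prior.sample((num_samples, 1))
    d = d.expand(num_samples, 1)
    x = simulate(theta, d)  # reparameterized toy simulator: differentiable in d
    return estimator.log_ratio(x, theta, d).mean()


prior = torch.distributions.Normal(0.0, 1.0)
estimator = train_ratio_estimator(prior)

# Gradient ascent on the design by backpropagating through the network.
d = torch.empty(1, 1).uniform_(-0.5, 0.5).requires_grad_()
optimizer = torch.optim.Adam([d], lr=5e-2)
for _ in range(200):
    optimizer.zero_grad()
    (-estimated_eig(estimator, d, prior)).backward()
    optimizer.step()
    with torch.no_grad():
        d.clamp_(-2.0, 2.0)  # stay inside the range the estimator was trained on
print(d)  # in this toy, the EIG grows with |d|, so d drifts to the boundary
```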
## License

See the LICENSE file.