https://github.com/tillahoffmann/summaries2
approximate-bayesian-computation likelihood-free-inference simulation-based-inference summary-statistics
- Host: GitHub
- URL: https://github.com/tillahoffmann/summaries2
- Owner: tillahoffmann
- License: BSD-3-Clause
- Created: 2023-06-24T22:00:37.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-29T21:21:02.000Z (over 1 year ago)
- Last Synced: 2025-01-14T02:46:40.316Z (4 months ago)
- Topics: approximate-bayesian-computation, likelihood-free-inference, simulation-based-inference, summary-statistics
- Language: Python
- Homepage:
- Size: 171 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.cff
README
# Minimizing the Expected Posterior Entropy Yields Optimal Summaries [build status](https://github.com/onnela-lab/summaries/actions/workflows/main.yaml)
This repository contains code and data to reproduce the results presented in the manuscript [*Minimizing the Expected Posterior Entropy Yields Optimal Summaries*](https://doi.org/10.48550/arXiv.2206.02340).
Figures and tables can be regenerated by executing the following steps:
- Ensure a recent Python version is installed; this code has been tested with Python 3.10 on Ubuntu and macOS.
- Optionally, [create a new virtual environment](https://docs.python.org/3/library/venv.html#creating-virtual-environments).
- Install the Python requirements by executing `pip install -r requirements.txt` from the root directory of the repository.
- Install [CmdStan](https://mc-stan.org/users/interfaces/cmdstan) by executing `python -m cmdstanpy.install_cmdstan --version 2.31.0`. Other recent versions of CmdStan may also work but have not been tested.
- Optionally, verify the installation by executing `pytest -v`.
- Execute `cook exec "*:evaluation"` to run all experiments and generate evaluation metrics, which are saved at `workspace/[experiment name]/evaluation.csv`.
- Execute each of the Jupyter notebooks (saved as markdown files) in the `notebooks` folder to generate the figures.

Results Structure
-----------------

After running the experiments (see above), the `workspace` folder contains all results. It is structured as follows, and the folder structure is repeated for each experiment.
```text
benchmark-large # One folder for each experiment.
data # Train, validation, and test split as pickle files; other temp files may also be present.
test.pkl
train.pkl
validation.pkl
...
samples # (Approximate) posterior samples as pickle files.
[sampler configuration name].pkl
...
transformers # Trained transformers, e.g., posterior mean estimators, as pickle files.
[transformer configuration name]-[digits].pkl # One of three replications with diff. seeds.
[transformer configuration name].pkl # Best transformer amongst the three replications.
evaluation.csv # Evaluation of different summary statistic extraction methods.
benchmark-small
...
coalescent
...
tree-large
...
tree-small
...
figures # Contains PDF figures after executing notebooks.
```

Each `evaluation.csv` file has seven columns:
- `path`, which identifies the method used to extract summaries.
- three columns `{nlp,rmise,mise}` which are best estimates of negative log probability loss, root mean integrated squared error, and mean integrated squared error, respectively. The estimates are obtained by averaging over all samples in the corresponding test set.
- three columns `{nlp,rmise,mise}_err` which are standard errors obtained as `sqrt(var / (n - 1))`, where `var` is the variance of the metric in the test set, and `n` is the size of the test set.
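The estimate-and-standard-error computation described above can be sketched in a few lines of NumPy. The sample values below are hypothetical placeholders, not data from the repository; only the `sqrt(var / (n - 1))` formula comes from the description.

```python
import numpy as np

# Hypothetical per-sample negative log probability losses on a test set;
# in the actual workflow these come from evaluating a method on the test split.
nlp_samples = np.array([1.2, 0.9, 1.5, 1.1, 1.3])

n = nlp_samples.size
nlp = nlp_samples.mean()  # best estimate: average over all test samples
# Standard error as sqrt(var / (n - 1)); np.var uses the population
# variance (ddof=0), matching `var` in the formula above.
nlp_err = np.sqrt(nlp_samples.var() / (n - 1))
```

The same pattern applies to the `rmise` and `mise` columns, each paired with its `_err` counterpart.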