Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lxuechen/bdmc
PyTorch implementation of Bidirectional Monte Carlo, Annealed Importance Sampling, and Hamiltonian Monte Carlo.
- Host: GitHub
- URL: https://github.com/lxuechen/bdmc
- Owner: lxuechen
- License: MIT
- Created: 2018-01-06T20:45:10.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2021-05-14T21:39:59.000Z (over 3 years ago)
- Last Synced: 2024-10-14T11:09:50.603Z (about 1 month ago)
- Topics: annealed-importance-sampling, bayesian-inference, bidirectional-monte-carlo, hamiltonian-monte-carlo, mcmc, pytorch
- Language: Python
- Homepage:
- Size: 3.05 MB
- Stars: 52
- Watchers: 3
- Forks: 14
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# BDMC
PyTorch implementation of Bidirectional Monte Carlo.
## Requirements
* `python3`
* `numpy`
* `pytorch>=0.4.1`
* `tqdm`

## What is Bidirectional Monte Carlo (BDMC)?
BDMC is a method for accurately sandwiching the log marginal likelihood (ML) and is mainly used to evaluate the quality of log-ML estimators [1]. It obtains a lower bound with the usual Annealed Importance Sampling (AIS) [2], and an upper bound with reversed AIS started from an exact posterior sample. Since the upper bound requires an *exact* sample from the posterior, the method is only strictly valid on simulated data. Results on simulated data can nonetheless help verify the performance of log-ML estimators: under the assumption that real data does not differ too much from the simulated data, the evaluation on simulated data is informative of performance on real data.

This implementation performs the evaluation on a variational autoencoder (VAE) trained on MNIST.
## To run
There is a pretrained VAE model (on MNIST) in the `checkpoints` folder. Executing the command
```bash
python bdmc.py \
--latent-dim 50 \
--batch-size 512 \
--n-batch 2 \
--chain-length 10000 \
--iwae-samples 10 \
--ckpt-path ./checkpoints/model.pth
```

will start the forward and backward chains of BDMC based on the model loaded from the pretrained checkpoint.
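The backward chain run by this command corresponds to reversed AIS. Continuing the toy Gaussian sketch above (illustrative values, not from this repo), the upper-bound half looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy setup: unnormalized Gaussian target N(3, 0.5^2) and a
# standard-normal initial distribution.
def log_f(x):
    return -((x - 3.0) ** 2) / (2 * 0.25)

def log_p0(x):
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def log_ft(x, beta):
    return (1.0 - beta) * log_p0(x) + beta * log_f(x)

true_log_z = 0.5 * np.log(2 * np.pi * 0.25)

n_chains, n_steps = 500, 200
betas = np.linspace(1.0, 0.0, n_steps + 1)  # descending schedule

# Reversed AIS starts from *exact* target samples -- possible here because
# the target is known in closed form; for a VAE this role is played by
# latents and data simulated from the generative model.
x = 3.0 + 0.5 * rng.standard_normal(n_chains)
log_w_rev = np.zeros(n_chains)

for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w_rev += (b_prev - b) * (log_p0(x) - log_f(x))
    prop = x + 0.5 * rng.standard_normal(n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_ft(prop, b) - log_ft(x, b)
    x = np.where(accept, prop, x)

# The negated mean reverse log-weight is a stochastic upper bound on log Z;
# together with the forward lower bound it sandwiches the true value, and
# the gap between the two bounds the error of the log-ML estimator.
ais_upper = -log_w_rev.mean()
print(f"reversed-AIS upper bound: {ais_upper:.3f}  true log Z: {true_log_z:.3f}")
```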
## References
[1] Grosse, Roger B., Zoubin Ghahramani, and Ryan P. Adams. "Sandwiching the marginal likelihood using bidirectional
Monte Carlo." arXiv preprint arXiv:1511.02543 (2015).[2] Neal, Radford M. "Annealed importance sampling." Statistics and computing 11.2 (2001): 125-139.
[3] Neal, Radford M. "MCMC using Hamiltonian dynamics." Handbook of Markov Chain Monte Carlo 2.11 (2011).