Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jbris/model-calibration-evaluation
Evaluating model calibration methods for sensitivity analysis, uncertainty analysis, optimisation, and Bayesian inference
- Host: GitHub
- URL: https://github.com/jbris/model-calibration-evaluation
- Owner: JBris
- License: MIT
- Created: 2023-07-11T08:28:17.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-03T15:09:29.000Z (10 months ago)
- Last Synced: 2024-02-04T14:27:21.330Z (10 months ago)
- Topics: approximate-bayesian-computation, bayesian-optimization, bayesian-statistics, bolfi, deep-learning, differential-evolution-mcmc, generative-neural-network, global-optimization, kriging, likelihood-free-inference, optuna, polynomial-chaos, polynomial-chaos-expansion, pymc, sensitivity-analysis, shuffled-complex-evolution, simulation-based-inference, sobol-indices, surrogate-models, uncertainty-analysis
- Language: Python
- Homepage: https://jbris.github.io/model-calibration-evaluation/
- Size: 3.92 MB
- Stars: 5
- Watchers: 4
- Forks: 0
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
## README
# model-calibration-evaluation
[![pages-build-deployment](https://github.com/JBris/model-calibration-evaluation/actions/workflows/pages/pages-build-deployment/badge.svg?branch=main)](https://github.com/JBris/model-calibration-evaluation/actions/workflows/pages/pages-build-deployment)
[![CodeQL](https://github.com/JBris/model-calibration-evaluation/actions/workflows/github-code-scanning/codeql/badge.svg?branch=main)](https://github.com/JBris/model-calibration-evaluation/actions/workflows/github-code-scanning/codeql)

Evaluating model calibration methods for sensitivity analysis, uncertainty analysis, optimisation, and Bayesian inference.
See [config.yaml](config.yaml) for the ground-truth simulation parameters.
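The actual keys and values in `config.yaml` are not reproduced here. As a purely hypothetical illustration of the evaluation idea, a calibration run can be scored by comparing its recovered parameter estimates against such known ground-truth values (`theta_1`, `theta_2`, and `recovery_error` below are invented for this sketch):

```python
# Hypothetical ground-truth parameters, standing in for config.yaml
# (the real file's keys and values are not reproduced here).
ground_truth = {"theta_1": 2.0, "theta_2": 0.5}

# Estimates that a calibration pipeline might return.
estimates = {"theta_1": 1.93, "theta_2": 0.47}

def recovery_error(truth: dict, est: dict) -> float:
    """Mean absolute error between ground-truth and estimated parameters."""
    return sum(abs(truth[k] - est[k]) for k in truth) / len(truth)

print(recovery_error(ground_truth, estimates))
```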
The following model calibration methods have been evaluated:
* [Approximate Bayesian Computation - Sequential Monte Carlo](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/abc_smc/run.py)
* [Bayesian Optimisation](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/bayes_opt/run.py)
* [Bayesian Optimisation for Likelihood-Free Inference](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/bolfi/run.py)
* [Differential Evolution Adaptive Metropolis](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/dream/run.py)
* [Experimental Design via Gaussian Process Emulation](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/experimental_design/run.py)
* [Flow Matching Posterior Estimation](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/fmpe/run.py)
* [Tree-structured Parzen Estimator](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/optimisation/run.py)
* [Polynomial Chaos Expansion](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/poly_chaos/run.py)
* [Polynomial Chaos Kriging](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/poly_chaos_kriging/run.py)
* [Sparse Axis-Aligned Subspace Bayesian Optimization](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/saasbo/run.py)
* [Shuffled Complex Evolution Algorithm Uncertainty Analysis](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/sceua/run.py)
* [Sequential Neural Posterior Estimation](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/snpe/run.py)
* [Sobol Sensitivity Analysis](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/sobol_sa/run.py)
* [Truncated Marginal Neural Ratio Estimation](https://github.com/JBris/model-calibration-evaluation/tree/main/pipelines/tmnre/run.py)
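To give a flavour of the simplest of these ideas, below is a minimal sketch of Approximate Bayesian Computation using plain rejection sampling. Note this is a simpler relative of the ABC-SMC pipeline listed above, not the Sequential Monte Carlo scheme itself, and the Gaussian toy simulator, uniform prior bounds, and tolerance `eps` are all invented for this example:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n=100):
    """Toy simulator: draws from a Gaussian with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

# "Observed" data generated from a known ground-truth mean,
# mirroring the repository's use of known simulation parameters.
true_theta = 2.0
observed = rng.normal(true_theta, 1.0, size=100)

def summary(x):
    """Summary statistic used to compare simulated and observed data."""
    return float(np.mean(x))

def abc_rejection(n_draws=20000, eps=0.05):
    """Keep prior draws whose simulated summary lands within eps of the observed one."""
    prior = rng.uniform(-5.0, 5.0, size=n_draws)
    accepted = [t for t in prior
                if abs(summary(simulate(t)) - summary(observed)) < eps]
    return np.array(accepted)

posterior = abc_rejection()
print(f"accepted {posterior.size} draws, posterior mean {posterior.mean():.2f}")
```

The accepted draws approximate the posterior over `theta`; tightening `eps` improves the approximation at the cost of a lower acceptance rate, which is the inefficiency that the SMC variant is designed to mitigate.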