🧪 pytest-bootstrap
===================

.. image:: https://github.com/tillahoffmann/pytest-bootstrap/actions/workflows/main.yml/badge.svg
   :target: https://github.com/tillahoffmann/pytest-bootstrap/actions/workflows/main.yml
   :alt: Build status.
.. image:: https://readthedocs.org/projects/pytest-bootstrap/badge/?version=latest
   :target: https://pytest-bootstrap.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation status.
.. image:: https://img.shields.io/pypi/v/pytest-bootstrap
   :target: https://pypi.org/project/pytest-bootstrap/
   :alt: PyPI package version.

Scientific software development often relies on stochasticity, e.g. for `Monte Carlo <https://en.wikipedia.org/wiki/Monte_Carlo_integration>`_ integration or simulating the `Ising model <https://en.wikipedia.org/wiki/Ising_model>`_. Testing non-deterministic code is difficult because a test can fail even when the implementation is correct. This package offers a bootstrap test to validate stochastic algorithms, including multiple hypothesis correction for vector statistics. It can be installed by running :code:`pip install pytest-bootstrap`.
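
The idea behind the test: resample the data with replacement, evaluate the statistic on each resample, and check that the theoretical reference value falls inside the central :math:`1 - \alpha` interval of the bootstrapped statistics. A minimal sketch of that idea (not the package's actual implementation, which additionally handles vector statistics and multiple hypothesis correction):

.. code-block:: python

   import numpy as np

   def sketch_bootstrap_test(x, statistic, reference, num_bootstrap=1000, alpha=0.01):
       # Evaluate the statistic on `num_bootstrap` resamples drawn with replacement.
       statistics = [
           statistic(np.random.choice(x, size=len(x), replace=True))
           for _ in range(num_bootstrap)
       ]
       # The reference should fall inside the central 1 - alpha interval.
       lower, upper = np.quantile(statistics, [alpha / 2, 1 - alpha / 2])
       assert lower <= reference <= upper, "reference outside bootstrap interval"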

Example
-------

Suppose we want to implement the expected value of a `log-normal distribution <https://en.wikipedia.org/wiki/Log-normal_distribution>`_ with location parameter :math:`\mu` and scale parameter :math:`\sigma`.
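
The expectation follows from the moment generating function of the normal distribution: if :math:`Z \sim \mathrm{Normal}\left(\mu, \sigma^2\right)`, then

.. math::

   \mathbb{E}\left[e^Z\right] = \exp\left(\mu + \frac{\sigma^2}{2}\right).

The first implementation below computes exactly this; the second deliberately omits the division by two so we can see the test fail.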

.. doctest::

   >>> import numpy as np
   >>>
   >>> def lognormal_expectation(mu, sigma):
   ...     return np.exp(mu + sigma ** 2 / 2)
   >>>
   >>> def lognormal_expectation_wrong(mu, sigma):
   ...     return np.exp(mu + sigma ** 2)

We can validate our implementation by simulating from a log-normal distribution and comparing the theoretical value with the bootstrapped mean.

.. doctest::

   >>> from pytest_bootstrap import bootstrap_test
   >>>
   >>> mu = -1
   >>> sigma = 1
   >>> reference = lognormal_expectation(mu, sigma)
   >>> x = np.exp(np.random.normal(mu, sigma, 1000))
   >>> result = bootstrap_test(x, np.mean, reference)
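
In a test suite, the same check typically lives in an ordinary pytest test function, since :func:`pytest_bootstrap.bootstrap_test` raises on failure. A minimal sketch (the module and test names are illustrative):

.. code-block:: python

   # test_lognormal.py -- illustrative test module name.
   import numpy as np
   from pytest_bootstrap import bootstrap_test

   def test_lognormal_expectation():
       mu, sigma = -1, 1
       reference = np.exp(mu + sigma ** 2 / 2)
       # Simulate log-normal samples and check the bootstrapped mean
       # against the theoretical expectation; raises on failure.
       x = np.exp(np.random.normal(mu, sigma, 1000))
       bootstrap_test(x, np.mean, reference)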

The call to :func:`bootstrap_test` returns a summary of the test, including the bootstrapped statistics themselves.

.. doctest::

   >>> result.keys()
   dict_keys(['alpha', 'alpha_corrected', 'reference', 'lower', 'upper',
   'z_score', 'median', 'iqr', 'tol', 'statistics'])
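
The summary can be inspected directly, e.g. to report how far the reference lies from the bootstrapped distribution. A brief sketch, assuming the keys carry their apparent meaning (:code:`lower` and :code:`upper` bounding the :math:`1 - \alpha` interval):

.. code-block:: python

   # Key semantics inferred from their names, not from the package docs.
   assert result['lower'] <= result['reference'] <= result['upper']
   print(f"z-score of the reference: {result['z_score']:.3f}")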

Comparing with our incorrect implementation reveals the bug.

.. doctest::

   >>> reference_wrong = lognormal_expectation_wrong(mu, sigma)
   >>> result = bootstrap_test(x, np.mean, reference_wrong)
   Traceback (most recent call last):
   ...
   pytest_bootstrap.BootstrapTestError: the reference value 1.0 lies outside
   the 1 - (alpha = 0.01) interval ...
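
If the failure itself is the expected behaviour, e.g. when verifying that a broken implementation is caught, the error can be asserted with pytest in the usual way. A minimal sketch, assuming :code:`BootstrapTestError` is importable from the package root as the traceback above suggests:

.. code-block:: python

   import pytest
   from pytest_bootstrap import BootstrapTestError, bootstrap_test

   # The bootstrap test should reject the incorrect implementation.
   with pytest.raises(BootstrapTestError):
       bootstrap_test(x, np.mean, reference_wrong)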

Visualising the bootstrapped distribution using :func:`pytest_bootstrap.result_hist` can help identify discrepancies between the bootstrapped statistics and the theoretical reference value. Note that matplotlib is not installed by default; install it separately or install pytest-bootstrap using :code:`pip install pytest-bootstrap[plot]`.
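
A minimal plotting sketch, assuming :func:`pytest_bootstrap.result_hist` accepts the result dictionary returned by :func:`pytest_bootstrap.bootstrap_test`:

.. code-block:: python

   from matplotlib import pyplot as plt
   from pytest_bootstrap import bootstrap_test, result_hist

   result = bootstrap_test(x, np.mean, reference)
   result_hist(result)  # histogram of bootstrapped statistics and reference
   plt.show()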

.. plot:: examples/lognormal.py
   :caption: Histogram of bootstrapped means reveals the erroneous implementation of the log-normal mean.

A comprehensive set of examples can be found in the tests of the `GitHub repository <https://github.com/tillahoffmann/pytest-bootstrap>`_.

Interface
---------

.. automodule:: pytest_bootstrap
   :members: