{"id":13687614,"url":"https://github.com/probml/dynamax","last_synced_at":"2025-05-14T08:09:31.432Z","repository":{"id":37083816,"uuid":"480587737","full_name":"probml/dynamax","owner":"probml","description":"A Python package for probabilistic state space modeling with JAX","archived":false,"fork":false,"pushed_at":"2025-05-10T11:24:18.000Z","size":252676,"stargazers_count":815,"open_issues_count":56,"forks_count":94,"subscribers_count":25,"default_branch":"main","last_synced_at":"2025-05-10T11:35:45.980Z","etag":null,"topics":["hidden-markov-models","jax","kalman-filter","python","state-space-models"],"latest_commit_sha":null,"homepage":"https://probml.github.io/dynamax/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/probml.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-04-11T23:42:29.000Z","updated_at":"2025-05-10T11:12:49.000Z","dependencies_parsed_at":"2023-02-16T02:00:59.142Z","dependency_job_id":"958e0766-ce90-4657-aeb2-1a57b4aae9e0","html_url":"https://github.com/probml/dynamax","commit_stats":{"total_commits":1198,"total_committers":32,"mean_commits":37.4375,"dds":0.7679465776293823,"last_synced_commit":"51b7dc5440ff25df731958a12ffbba75a1380001"},"previous_names":[],"tags_count":14,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/probml%2Fdynamax","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/probml%2Fdynamax/tags","releases_url":"https://repos.ecosyste.ms/api/v1/ho
sts/GitHub/repositories/probml%2Fdynamax/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/probml%2Fdynamax/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/probml","download_url":"https://codeload.github.com/probml/dynamax/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254050966,"owners_count":22006391,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["hidden-markov-models","jax","kalman-filter","python","state-space-models"],"created_at":"2024-08-02T15:00:57.530Z","updated_at":"2025-05-14T08:09:26.417Z","avatar_url":"https://github.com/probml.png","language":"Python","readme":"# Welcome to DYNAMAX!\n\n![Logo](https://raw.githubusercontent.com/probml/dynamax/main/logo/logo.gif)\n\n![Test Status](https://github.com/probml/dynamax/actions/workflows/run_tests.yml/badge.svg?branch=main)\n![Docstrings](https://github.com/probml/dynamax/actions/workflows/interrogate.yml/badge.svg)\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.07069/status.svg)](https://doi.org/10.21105/joss.07069)\n\nDynamax is a library for probabilistic state space models (SSMs) written\nin [JAX](https://github.com/google/jax). 
It has code for inference\n(state estimation) and learning (parameter estimation) in a variety of\nSSMs, including:\n\n-   Hidden Markov Models (HMMs)\n-   Linear Gaussian State Space Models (aka Linear Dynamical Systems)\n-   Nonlinear Gaussian State Space Models\n-   Generalized Gaussian State Space Models (with non-Gaussian emission\n    models)\n\nThe library consists of a set of core, functionally pure, low-level\ninference algorithms, as well as a set of model classes which provide a\nmore user-friendly, object-oriented interface. It is compatible with\nother libraries in the JAX ecosystem, such as\n[optax](https://github.com/deepmind/optax) (used for estimating\nparameters using stochastic gradient descent), and\n[Blackjax](https://github.com/blackjax-devs/blackjax) (used for\ncomputing the parameter posterior using Hamiltonian Monte Carlo (HMC) or\nsequential Monte Carlo (SMC)).\n\n## Documentation\n\nFor tutorials and API documentation, see: https://probml.github.io/dynamax/.\n\nFor an extension of dynamax that supports structural time series models, \nsee https://github.com/probml/sts-jax.\n\nFor an illustration of how to use dynamax inside of [bayeux](https://jax-ml.github.io/bayeux/) to perform Bayesian inference\nfor the parameters of an SSM, see https://jax-ml.github.io/bayeux/examples/dynamax_and_bayeux/.\n\n## Installation and Testing\n\nTo install the latest release of dynamax from PyPI:\n\n``` {.console}\npip install dynamax                 # Install dynamax and core dependencies, or\npip install dynamax[notebooks]      # Install with demo notebook dependencies\n```\n\nTo install the latest development branch:\n\n``` {.console}\npip install git+https://github.com/probml/dynamax.git\n```\n\nFinally, if you\\'re a developer, you can install dynamax along with the\ntest and documentation dependencies with:\n\n``` {.console}\ngit clone git@github.com:probml/dynamax.git\ncd dynamax\npip install -e '.[dev]'\n```\n\nTo run the tests:\n\n``` 
{.console}\npytest dynamax                         # Run all tests\npytest dynamax/hmm/inference_test.py   # Run a specific test\npytest -k lgssm                        # Run tests with lgssm in the name\n```\n\n## What are state space models?\n\nA state space model or SSM is a partially observed Markov model, in\nwhich the hidden state, $z_t$, evolves over time according to a Markov\nprocess, possibly conditional on external inputs / controls /\ncovariates, $u_t$, and generates an observation, $y_t$. This is\nillustrated in the graphical model below.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/probml/dynamax/main/docs/figures/LDS-UZY.png\"\u003e\n\u003c/p\u003e\n\nThe corresponding joint distribution has the following form (in dynamax,\nwe restrict attention to discrete time systems):\n\n$$p(y_{1:T}, z_{1:T} \\mid u_{1:T}) = p(z_1 \\mid u_1) \\prod_{t=2}^T p(z_t \\mid z_{t-1}, u_t) \\prod_{t=1}^T p(y_t \\mid z_t, u_t)$$\n\nHere $p(z_t | z_{t-1}, u_t)$ is called the transition or dynamics model,\nand $p(y_t | z_{t}, u_t)$ is called the observation or emission model.\nIn both cases, the inputs $u_t$ are optional; furthermore, the\nobservation model may have auto-regressive dependencies, in which case\nwe write $p(y_t | z_{t}, u_t, y_{1:t-1})$.\n\nWe assume that we see the observations $y_{1:T}$, and want to infer the\nhidden states, either using online filtering (i.e., computing\n$p(z_t|y_{1:t})$ ) or offline smoothing (i.e., computing\n$p(z_t|y_{1:T})$ ). We may also be interested in predicting future\nstates, $p(z_{t+h}|y_{1:t})$, or future observations,\n$p(y_{t+h}|y_{1:t})$, where h is the forecast horizon. (Note that by\nusing a hidden state to represent the past observations, the model can\nhave \\\"infinite\\\" memory, unlike a standard auto-regressive model.) All\nof these computations can be done efficiently using our library, as we\ndiscuss below. 
In addition, we can estimate the parameters of the\ntransition and emission models, as we discuss below.\n\nMore information can be found in these books:\n\n\u003e -   \\\"Machine Learning: Advanced Topics\\\", K. Murphy, MIT Press 2023.\n\u003e     Available at \u003chttps://probml.github.io/pml-book/book2.html\u003e.\n\u003e -   \\\"Bayesian Filtering and Smoothing, Second Edition\\\", S. Särkkä and L. Svensson, Cambridge\n\u003e     University Press, 2023. Available at\n\u003e     \u003chttp://users.aalto.fi/~ssarkka/pub/bfs_book_2023_online.pdf\u003e\n\n## Example usage\n\nDynamax includes classes for many kinds of SSM. You can use these models\nto simulate data, and you can fit the models using standard learning\nalgorithms like expectation-maximization (EM) and stochastic gradient\ndescent (SGD). Below we illustrate the high level (object-oriented) API\nfor the case of an HMM with Gaussian emissions. (See [this\nnotebook](https://github.com/probml/dynamax/blob/main/docs/notebooks/hmm/gaussian_hmm.ipynb)\nfor a runnable version of this code.)\n\n```python\nimport jax.numpy as jnp\nimport jax.random as jr\nimport matplotlib.pyplot as plt\nfrom dynamax.hidden_markov_model import GaussianHMM\n\nkey1, key2, key3 = jr.split(jr.PRNGKey(0), 3)\nnum_states = 3\nemission_dim = 2\nnum_timesteps = 1000\n\n# Make a Gaussian HMM and sample data from it\nhmm = GaussianHMM(num_states, emission_dim)\ntrue_params, _ = hmm.initialize(key1)\ntrue_states, emissions = hmm.sample(true_params, key2, num_timesteps)\n\n# Make a new Gaussian HMM and fit it with EM\nparams, props = hmm.initialize(key3, method=\"kmeans\", emissions=emissions)\nparams, lls = hmm.fit_em(params, props, emissions, num_iters=20)\n\n# Plot the marginal log probs across EM iterations\nplt.plot(lls)\nplt.xlabel(\"EM iterations\")\nplt.ylabel(\"marginal log prob.\")\n\n# Use fitted model for posterior inference\npost = hmm.smoother(params, emissions)\nprint(post.smoothed_probs.shape) # (1000, 3)\n```\n\nJAX 
allows you to easily vectorize these operations with `vmap`.\nFor example, you can sample and fit to a batch of emissions as shown below.\n\n```python\nfrom functools import partial\nfrom jax import vmap\n\nnum_seq = 200\nbatch_true_states, batch_emissions = \\\n    vmap(partial(hmm.sample, true_params, num_timesteps=num_timesteps))(\n        jr.split(key2, num_seq))\nprint(batch_true_states.shape, batch_emissions.shape) # (200,1000) and (200,1000,2)\n\n# Make a new Gaussian HMM and fit it with EM\nparams, props = hmm.initialize(key3, method=\"kmeans\", emissions=batch_emissions)\nparams, lls = hmm.fit_em(params, props, batch_emissions, num_iters=20)\n```\n\nThese examples demonstrate the dynamax models, but we can also call the low-level\ninference code directly.\n\n## Contributing\n\nPlease see [this page](https://github.com/probml/dynamax/blob/main/CONTRIBUTING.md) for details\non how to contribute.\n\n## About\nCore team: Peter Chang, Giles Harper-Donnelly, Aleyna Kara, Xinglong Li, Scott Linderman, Kevin Murphy.\n\nOther contributors: Adrien Corenflos, Elizabeth DuPre, Gerardo Duran-Martin, Colin Schlager, Libby Zhang and other people [listed here](https://github.com/probml/dynamax/graphs/contributors)\n\nMIT License. 2022\n","funding_links":[],"categories":["Python","Libraries"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprobml%2Fdynamax","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fprobml%2Fdynamax","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprobml%2Fdynamax/lists"}