{"id":13471942,"url":"https://github.com/rtqichen/torchdiffeq","last_synced_at":"2025-05-12T05:33:52.436Z","repository":{"id":39339083,"uuid":"157588610","full_name":"rtqichen/torchdiffeq","owner":"rtqichen","description":"Differentiable ODE solvers with full GPU support and O(1)-memory backpropagation.","archived":false,"fork":false,"pushed_at":"2025-04-04T01:06:00.000Z","size":8554,"stargazers_count":5936,"open_issues_count":83,"forks_count":957,"subscribers_count":126,"default_branch":"master","last_synced_at":"2025-05-12T02:51:11.640Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rtqichen.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2018-11-14T17:51:25.000Z","updated_at":"2025-05-11T13:48:36.000Z","dependencies_parsed_at":"2023-02-16T00:30:30.260Z","dependency_job_id":"8f518e8d-f037-4b81-bac7-4150ef43d0f9","html_url":"https://github.com/rtqichen/torchdiffeq","commit_stats":{"total_commits":192,"total_committers":23,"mean_commits":8.347826086956522,"dds":0.5520833333333333,"last_synced_commit":"64fbe9effdf49212610bcd3b8d3cf7e1516c76ae"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rtqichen%2Ftorchdiffeq","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rtqichen%2Ftorchdiffeq/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rtqichen%2Ftorchdiffeq/releases","
manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rtqichen%2Ftorchdiffeq/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rtqichen","download_url":"https://codeload.github.com/rtqichen/torchdiffeq/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253672735,"owners_count":21945482,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T16:00:50.549Z","updated_at":"2025-05-12T05:33:52.409Z","avatar_url":"https://github.com/rtqichen.png","language":"Python","readme":"# PyTorch Implementation of Differentiable ODE Solvers\n\nThis library provides ordinary differential equation (ODE) solvers implemented in PyTorch. Backpropagation through ODE solutions is supported using the adjoint method for constant memory cost. 
For usage of ODE solvers in deep learning applications, see reference [1].\n\nAs the solvers are implemented in PyTorch, all algorithms in this repository fully support running on the GPU.\n\n## Installation\n\nTo install the latest stable version:\n```\npip install torchdiffeq\n```\n\nTo install the latest version from GitHub:\n```\npip install git+https://github.com/rtqichen/torchdiffeq\n```\n\n## Examples\nExamples are placed in the [`examples`](./examples) directory.\n\nIf you are interested in using this library, we encourage you to take a look at [`examples/ode_demo.py`](./examples/ode_demo.py) to see how to use `torchdiffeq` to fit a simple spiral ODE.\n\n\u003cp align=\"center\"\u003e\n\u003cimg align=\"middle\" src=\"./assets/ode_demo.gif\" alt=\"ODE Demo\" width=\"500\" height=\"250\" /\u003e\n\u003c/p\u003e\n\n## Basic usage\nThis library provides one main interface, `odeint`, which contains general-purpose algorithms for solving initial value problems (IVPs), with gradients implemented for all main arguments. An initial value problem consists of an ODE and an initial value,\n```\ndy/dt = f(t, y)    y(t_0) = y_0.\n```\nThe goal of an ODE solver is to find a continuous trajectory satisfying the ODE that passes through the initial condition.\n\nTo solve an IVP using the default solver:\n```\nfrom torchdiffeq import odeint\n\nodeint(func, y0, t)\n```\nwhere `func` is any callable implementing the ordinary differential equation `f(t, y)`, `y0` is an _any_-D Tensor representing the initial values, and `t` is a 1-D Tensor containing the evaluation points. The initial time is taken to be `t[0]`.\n\nBackpropagation through `odeint` goes through the internals of the solver. Note that this is not numerically stable for all solvers (but should probably be fine with the default `dopri5` method). 
Instead, we encourage the use of the adjoint method explained in [1], which will allow solving with as many steps as necessary due to O(1) memory usage.\n\nTo use the adjoint method:\n```\nfrom torchdiffeq import odeint_adjoint as odeint\n\nodeint(func, y0, t)\n```\n`odeint_adjoint` simply wraps around `odeint`, but will use only O(1) memory in exchange for solving an adjoint ODE in the backward call.\n\nThe biggest **gotcha** is that `func` must be a `nn.Module` when using the adjoint method. This is used to collect parameters of the differential equation.\n\n## Differentiable event handling\n\nWe allow terminating an ODE solution based on an event function. Backpropagation through most solvers is supported. For usage of event handling in deep learning applications, see reference [2].\n\nThis can be invoked with `odeint_event`:\n```\nfrom torchdiffeq import odeint_event\nodeint_event(func, y0, t0, *, event_fn, reverse_time=False, odeint_interface=odeint, **kwargs)\n```\n - `func` and `y0` are the same as `odeint`.\n - `t0` is a scalar representing the initial time value.\n - `event_fn(t, y)` returns a tensor, and is a required keyword argument.\n - `reverse_time` is a boolean specifying whether we should solve in reverse time. Default is `False`.\n - `odeint_interface` is one of `odeint` or `odeint_adjoint`, specifying whether adjoint mode should be used for differentiating through the ODE solution. Default is `odeint`.\n - `**kwargs`: any remaining keyword arguments are passed to `odeint_interface`.\n\nThe solve is terminated at an event time `t` and state `y` when an element of `event_fn(t, y)` is equal to zero. Multiple outputs from `event_fn` can be used to specify multiple event functions, of which the first to trigger will terminate the solve.\n\nBoth the event time and final state are returned from `odeint_event`, and can be differentiated. Gradients will be backpropagated through the event function. 
**NOTE**: parameters for the event function must be in the state itself to obtain gradients. \n\nThe numerical precision for the event time is determined by the `atol` argument.\n\nSee example of simulating and differentiating through a bouncing ball in [`examples/bouncing_ball.py`](./examples/bouncing_ball.py). See example code for learning a simple event function in [`examples/learn_physics.py`](./examples/learn_physics.py).\n\n\u003cp align=\"center\"\u003e\n\u003cimg align=\"middle\" src=\"./assets/bouncing_ball.png\" alt=\"Bouncing Ball\" width=\"500\" height=\"250\" /\u003e\n\u003c/p\u003e\n\n## Keyword arguments for odeint(_adjoint)\n\n#### Keyword arguments:\n - `rtol` Relative tolerance.\n - `atol` Absolute tolerance.\n - `method` One of the solvers listed below.\n - `options` A dictionary of solver-specific options, see the [further documentation](FURTHER_DOCUMENTATION.md).\n\n#### List of ODE Solvers:\n\nAdaptive-step:\n - `dopri8` Runge-Kutta of order 8 of Dormand-Prince-Shampine.\n - `dopri5` Runge-Kutta of order 5 of Dormand-Prince-Shampine **[default]**.\n - `bosh3` Runge-Kutta of order 3 of Bogacki-Shampine.\n - `fehlberg2` Runge-Kutta-Fehlberg of order 2.\n - `adaptive_heun` Runge-Kutta of order 2.\n\nFixed-step:\n - `euler` Euler method.\n - `midpoint` Midpoint method.\n - `rk4` Fourth-order Runge-Kutta with 3/8 rule.\n - `explicit_adams` Explicit Adams-Bashforth.\n - `implicit_adams` Implicit Adams-Bashforth-Moulton.\n\nAdditionally, all solvers available through SciPy are wrapped for use with `scipy_solver`.\n\nFor most problems, good choices are the default `dopri5`, or to use `rk4` with `options=dict(step_size=...)` set appropriately small. 
Adjusting the tolerances (adaptive solvers) or the step size (fixed solvers) allows trading off speed against accuracy.\n\n## Frequently Asked Questions\nTake a look at our [FAQ](FAQ.md) for frequently asked questions.\n\n## Further documentation\nFor details of the adjoint-specific and solver-specific options, check out the [further documentation](FURTHER_DOCUMENTATION.md).\n\n## References\n\nApplications of differentiable ODE solvers and event handling are discussed in these two papers:\n\nRicky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud. \"Neural Ordinary Differential Equations.\" *Advances in Neural Information Processing Systems.* 2018. [[arxiv]](https://arxiv.org/abs/1806.07366)\n\n```\n@article{chen2018neuralode,\n  title={Neural Ordinary Differential Equations},\n  author={Chen, Ricky T. Q. and Rubanova, Yulia and Bettencourt, Jesse and Duvenaud, David},\n  journal={Advances in Neural Information Processing Systems},\n  year={2018}\n}\n```\n\nRicky T. Q. Chen, Brandon Amos, Maximilian Nickel. \"Learning Neural Event Functions for Ordinary Differential Equations.\" *International Conference on Learning Representations.* 2021. [[arxiv]](https://arxiv.org/abs/2011.03902)\n\n```\n@article{chen2021eventfn,\n  title={Learning Neural Event Functions for Ordinary Differential Equations},\n  author={Chen, Ricky T. Q. and Amos, Brandon and Nickel, Maximilian},\n  journal={International Conference on Learning Representations},\n  year={2021}\n}\n```\n\nThe seminorm option for computing adjoints is discussed in\n\nPatrick Kidger, Ricky T. Q. Chen, Terry Lyons. \"'Hey, that’s not an ODE': Faster ODE Adjoints via Seminorms.\" *International Conference on Machine Learning.* 2021. [[arxiv]](https://arxiv.org/abs/2009.09457)\n```\n@article{kidger2021hey,\n  title={\"Hey, that's not an ODE\": Faster ODE Adjoints via Seminorms},\n  author={Kidger, Patrick and Chen, Ricky T. Q. 
and Lyons, Terry J.},\n  journal={International Conference on Machine Learning},\n  year={2021}\n}\n```\n\n---\n\nIf you found this library useful in your research, please consider citing.\n```\n@misc{torchdiffeq,\n\tauthor={Chen, Ricky T. Q.},\n\ttitle={torchdiffeq},\n\tyear={2018},\n\turl={https://github.com/rtqichen/torchdiffeq},\n}\n```\n","funding_links":[],"categories":["Python","Software","Tools","Other_Machine Learning and Deep Learning","Pytorch \u0026 related libraries","3.8. Scientific Machine Learning (Differential Equation and ML)","Software and Libraries","Additional Material","Linear Algebra / Statistics Toolkit","Pytorch \u0026 related libraries","Pytorch utilities","🔬 Domain-Specific Applications"],"sub_categories":["Python","Physics-inspired","Other libraries:","3.8.1. Universal Differential Equations (Neural differential equations)","Software and Libraries","Others","Other libraries:","🌌 Physics \u0026 Astronomy"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frtqichen%2Ftorchdiffeq","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frtqichen%2Ftorchdiffeq","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frtqichen%2Ftorchdiffeq/lists"}