https://github.com/rickstaa/stable-learning-control
A framework for training theoretically stable (and robust) Reinforcement Learning control algorithms.
- Host: GitHub
- URL: https://github.com/rickstaa/stable-learning-control
- Owner: rickstaa
- License: MIT
- Created: 2020-06-13T10:42:26.000Z (over 5 years ago)
- Default Branch: main
- Last Pushed: 2024-03-24T18:46:44.000Z (over 1 year ago)
- Last Synced: 2024-05-01T16:39:21.731Z (over 1 year ago)
- Topics: artificial-intelligence, control, deep-learning, framework, gaussian-networks, gymnasium, machine-learning, neural-networks, openai-gym, reinforcement-learning, reinforcement-learning-agents, reinforcement-learning-algorithms, robustness, simulation, stability
- Language: Python
- Homepage: https://rickstaa.dev/stable-learning-control
- Size: 46.9 MB
- Stars: 4
- Watchers: 4
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: CITATION.cff
README
# Stable Learning Control
[CI](https://github.com/rickstaa/stable-learning-control/actions/workflows/stable_learning_control.yml)
[Releases](https://github.com/rickstaa/stable-learning-control/releases)
[Python](https://www.python.org/)
[Codecov](https://codecov.io/gh/rickstaa/stable-learning-control)
[Contributing](CONTRIBUTING.md)
[DOI](https://zenodo.org/badge/latestdoi/271989240)
[Weights & Biases](https://wandb.ai/rickstaa/stable-learning-control)
## Package Overview
The Stable Learning Control (SLC) framework is a collection of robust Reinforcement Learning control algorithms designed to ensure stability. These algorithms are built upon the Lyapunov actor-critic architecture introduced by [Han et al. 2020](https://arxiv.org/abs/2004.14288) and derive their stability and robustness guarantees from [Lyapunov stability theory](https://en.wikipedia.org/wiki/Lyapunov_stability). They are specifically tailored for use with [gymnasium environments](https://gymnasium.farama.org/) that feature a positive definite cost function. Several ready-to-use compatible environments can be found in the [stable-gym](https://github.com/rickstaa/stable-gym) package.
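
To give a rough, hedged picture of what a compatible environment looks like, the sketch below rolls out random actions in a stable-gym environment through the standard gymnasium API. The `Oscillator-v1` id and the register-on-import behaviour of `stable_gym` are illustrative assumptions; consult the stable-gym README for the actual environment names.

```python
import gymnasium as gym
import stable_gym  # noqa: F401 -- assumed to register the stable-gym environments on import

# Hypothetical environment id; see the stable-gym README for the real names.
env = gym.make("Oscillator-v1")

obs, info = env.reset(seed=0)
for _ in range(200):
    action = env.action_space.sample()
    obs, cost, terminated, truncated, info = env.step(action)
    # SLC algorithms treat this signal as a positive definite cost to be
    # minimised, rather than a reward to be maximised.
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```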
## Installation and Usage
Please see the [docs](https://rickstaa.github.io/stable-learning-control/) for installation and usage instructions.
## Contributing
We use [husky](https://github.com/typicode/husky) pre-commit hooks and GitHub Actions to enforce high code quality. Please check the [contributing guidelines](CONTRIBUTING.md) before contributing to this repository.
> [!NOTE]\
> We used [husky](https://github.com/typicode/husky) instead of [pre-commit](https://pre-commit.com/), which is more commonly used with Python projects, because only some of the tools we wanted to use could be integrated with pre-commit. Please feel free to open a [PR](https://github.com/rickstaa/stable-learning-control/pulls) to switch to pre-commit if this is no longer the case.
## References
* [Han et al. 2020](https://arxiv.org/abs/2004.14288) - Used as a basis for the Lyapunov actor-critic architecture.
* [Spinningup](https://spinningup.openai.com/en/latest/) - Used as a basis for the code structure.