https://github.com/marfvr/yarllib
Yet Another Reinforcement Learning Library.
- Host: GitHub
- URL: https://github.com/marfvr/yarllib
- Owner: marfvr
- License: lgpl-3.0
- Created: 2020-07-15T08:34:05.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2022-06-02T14:01:32.000Z (over 3 years ago)
- Last Synced: 2025-09-09T16:05:54.224Z (about 1 month ago)
- Topics: gym, gym-environment, library, machine-learning, openai-gym, python-library, reinforcement-learning, reinforcement-learning-algorithm, reinforcement-learning-algorithms
- Language: Python
- Homepage: https://marcofavorito.github.io/yarllib/
- Size: 787 KB
- Stars: 4
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: HISTORY.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
# yarllib
Yet Another Reinforcement Learning Library.
Status: **development**.
## Why?
I needed an RL library/framework that:
- was clearly and simply implemented, with good enough performance;
- was highly focused on modularity, customizability and extensibility;
- wasn't merely oriented towards Deep Reinforcement Learning.

I couldn't find an existing library that satisfied my needs, so I decided to implement _yet another_ RL library.

For me it is also an opportunity to gain a better understanding of RL algorithms and to appreciate the nuances that you can't find in a book.

If you find this repo useful for your research or your project, I'd be very glad :-) Don't hesitate to reach out to me!

## What
The package is both:
- a _library_, because it provides off-the-shelf functionality to
  set up an RL experiment;
- a _framework_, because you can compose your custom model by implementing
  the interfaces, overriding the default behaviours, or using the existing
  components as-is.

You can find more details in the [documentation](https://marcofavorito.github.io/yarllib).
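As a rough illustration of the kind of RL experiment such a library helps to set up (this is not yarllib's actual API; see the documentation above for that), here is a minimal tabular Q-learning loop on an OpenAI Gym environment. It assumes the classic pre-0.26 `gym` reset/step interface and the `FrozenLake-v0` environment id:

```python
# Illustrative only: a plain tabular Q-learning loop on a Gym environment,
# using the classic gym API (reset() -> obs, step() -> (obs, reward, done, info)).
# This is NOT yarllib's API; see the documentation link above for that.
import gym
import numpy as np

env = gym.make("FrozenLake-v0")  # any discrete-state, discrete-action env
q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q[state]))
        next_state, reward, done, _ = env.step(action)
        # one-step Q-learning update
        q[state, action] += alpha * (reward + gamma * np.max(q[next_state]) - q[state, action])
        state = next_state

env.close()
```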
## Tests
To run tests: `tox`
To run only the code tests: `tox -e py3.7`
To run only the linters:
- `tox -e flake8`
- `tox -e mypy`
- `tox -e black-check`
- `tox -e isort-check`

Please look at the `tox.ini` file for the full list of supported commands.
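For orientation, a `tox.ini` wiring up the environments above could look roughly like the sketch below; the repository's actual `tox.ini` is the authoritative reference and may differ.

```ini
# Illustrative sketch of the environments mentioned above;
# the repository's tox.ini is the authoritative reference.
[tox]
envlist = py3.7, flake8, mypy, black-check, isort-check

[testenv]
deps = pytest
commands = pytest tests/

[testenv:py3.7]
basepython = python3.7

[testenv:flake8]
skip_install = true
deps = flake8
commands = flake8 yarllib tests

[testenv:mypy]
deps = mypy
commands = mypy yarllib tests

[testenv:black-check]
skip_install = true
deps = black
commands = black --check yarllib tests

[testenv:isort-check]
skip_install = true
deps = isort
commands = isort --check-only yarllib tests
```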
## Docs
To build the docs: `mkdocs build`
To view documentation in a browser: `mkdocs serve`
and then go to [http://localhost:8000](http://localhost:8000).
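For reference, a minimal `mkdocs.yml` that supports these commands might look like the sketch below; the project's actual configuration (site structure, theme) may differ.

```yaml
# Illustrative sketch only; the repository's mkdocs.yml is authoritative.
site_name: yarllib
site_url: https://marcofavorito.github.io/yarllib/
repo_url: https://github.com/marfvr/yarllib

nav:
  - Home: index.md

# `mkdocs serve` rebuilds and serves the site at http://localhost:8000 by default.
```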
## License

yarllib is released under the GNU Lesser General Public License v3.0 or later (LGPLv3+).
Copyright 2020 Marco Favorito
## Authors
- [Marco Favorito](https://marcofavorito.github.io/)
## Cite
If you use this library for your research, please consider citing this repository:
```
@misc{favorito2020,
Author = {Marco Favorito},
Title = {yarllib: Yet Another Reinforcement Learning Library},
Year = {2020},
}
```
An e-print will come soon :-)