Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/typoverflow/utilsrl
A Python module designed for agile RL algorithm development.
- Host: GitHub
- URL: https://github.com/typoverflow/utilsrl
- Owner: typoverflow
- License: MIT
- Created: 2022-02-06T16:16:05.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2024-07-11T16:34:56.000Z (4 months ago)
- Last Synced: 2024-10-13T08:45:16.567Z (25 days ago)
- Topics: python, pytorch, reinforcment-learning
- Language: Python
- Homepage: https://utilsrl.readthedocs.io
- Size: 269 KB
- Stars: 26
- Watchers: 4
- Forks: 3
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# UtilsRL
`UtilsRL` is a reinforcement learning utility package for Python, designed for fast integration into other RL projects. Despite being lightweight, it provides a full set of functions needed for RL algorithm development.
Currently `UtilsRL` is maintained by researchers from [LAMDA-RL](https://github.com/LAMDA-RL) group. Any bug report / feature request / improvement is appreciated.
## Installation
You can install this package directly from PyPI:
```shell
pip install UtilsRL
```
After installation, you may still need to configure some other dependencies based on your platform, such as PyTorch.

## Features & Usage
We are still working on the documentation, which will be published as soon as possible.
Here we list some of the highlight features of UtilsRL:
- **Extremely easy-to-use and research-friendly argument parsing**. `UtilsRL.exp.argparse` supports several handy features for research (a config-loading sketch follows below):
  - loading arguments from `yaml`, `json`, and `python` files as well as the command line
  - nested argument parsing
- **Well-implemented PyTorch modules for Reinforcement Learning**
  - common network structures: MLP, CNN, RNN, Attention, Ensemble Blocks, etc.
  - policy networks with various output distributions
  - normalizers implemented as `nn.Module`s, so their running statistics are saved and loaded through `state_dict` (see the normalizer sketch below)
- **Powerful experiment loggers**.
- **Super-fast Prioritized Experience Replay (PER) buffer**. By binding C++-implemented data structures, we boost the efficiency of PER by up to 10 times (a sum-tree sketch of the core idea follows below).

We provide two examples, training PPO on MuJoCo tasks and training Rainbow on Atari tasks, as illustrations of integrating UtilsRL into your workflow (see `examples/`).
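The documentation for `UtilsRL.exp.argparse` is not published yet, so the snippet below is not the package's actual API. It is only a generic sketch of what multi-source, nested argument loading means, assuming a hypothetical YAML config and dotted command-line overrides:

```python
# Generic sketch of multi-source, nested argument loading -- NOT the
# UtilsRL.exp.argparse API. The file name and keys are hypothetical.
import sys
import yaml  # pip install pyyaml

def load_config(path, overrides):
    """Load a YAML config, then apply dotted overrides like trainer.lr=3e-4."""
    with open(path) as f:
        config = yaml.safe_load(f)
    for item in overrides:
        dotted, raw = item.split("=", 1)
        *parents, leaf = dotted.split(".")
        node = config
        for key in parents:
            node = node.setdefault(key, {})  # walk/create nested dicts
        node[leaf] = yaml.safe_load(raw)  # YAML parsing converts scalars (int, float, bool)
    return config

if __name__ == "__main__":
    # e.g. python run.py config.yaml trainer.lr=3e-4 trainer.batch_size=256
    print(load_config(sys.argv[1], sys.argv[2:]))
```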
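To see why implementing normalizers as `nn.Module`s pays off, here is a minimal sketch (my assumption of the idea, not UtilsRL's actual class): the running statistics live in registered buffers, so `state_dict` captures them alongside the network weights.

```python
# Minimal sketch of a normalizer as an nn.Module -- not UtilsRL's class.
# Registered buffers travel with state_dict() and .to(device).
import torch
import torch.nn as nn

class RunningNormalizer(nn.Module):
    def __init__(self, shape, eps=1e-8):
        super().__init__()
        self.register_buffer("mean", torch.zeros(shape))
        self.register_buffer("var", torch.ones(shape))
        self.register_buffer("count", torch.tensor(eps))

    @torch.no_grad()
    def update(self, batch):
        # Parallel (Chan et al.) update of the running mean/variance.
        batch_mean = batch.mean(dim=0)
        batch_var = batch.var(dim=0, unbiased=False)
        batch_count = batch.shape[0]
        total = self.count + batch_count
        delta = batch_mean - self.mean
        self.mean += delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        self.var = (m_a + m_b + delta.pow(2) * self.count * batch_count / total) / total
        self.count = total

    def forward(self, x):
        return (x - self.mean) / torch.sqrt(self.var + 1e-8)
```

Because `mean`, `var`, and `count` are buffers rather than plain attributes, `torch.save(module.state_dict(), path)` persists them and `load_state_dict` restores them, with no extra bookkeeping.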
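The PER buffer gets its speed from the C++-backed data structures; the pure-Python sketch below only illustrates the sum-tree idea that makes prioritized sampling O(log n). It is not the UtilsRL interface.

```python
# Pure-Python sum tree -- the data structure behind PER, sketched for
# illustration only (UtilsRL binds a C++ implementation instead).
import random

class SumTree:
    def __init__(self, capacity):
        self.capacity = capacity
        # Internal nodes at 1..capacity-1, leaves at capacity..2*capacity-1.
        self.tree = [0.0] * (2 * capacity)

    def update(self, index, priority):
        # Set a leaf's priority (typically |TD error| ** alpha) and
        # propagate the new subtree sums up to the root.
        i = index + self.capacity
        self.tree[i] = priority
        i //= 2
        while i >= 1:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sample(self):
        # Draw a value in [0, total priority) and descend, choosing each
        # child in proportion to its subtree's priority mass: O(log n).
        value = random.uniform(0.0, self.tree[1])
        i = 1
        while i < self.capacity:
            left = 2 * i
            if value <= self.tree[left]:
                i = left
            else:
                value -= self.tree[left]
                i = left + 1
        return i - self.capacity  # index of the sampled transition
```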
## Acknowledgements
We took inspiration for module design from [tianshou](https://github.com/thu-ml/tianshou) and [Polixir OfflineRL](https://github.com/polixir/OfflineRL). We also thank [@YuRuiii](https://github.com/YuRuiii) and [@momanto](https://github.com/momanto) for their participation in code testing and performance benchmarking.