Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dayyass/rllib
Reinforcement Learning Library.
data-science deep-learning machine-learning python reinforcement-learning
Last synced: 4 months ago
- Host: GitHub
- URL: https://github.com/dayyass/rllib
- Owner: dayyass
- License: mit
- Created: 2022-06-28T21:06:59.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-08-16T10:55:15.000Z (over 2 years ago)
- Last Synced: 2024-09-29T21:39:38.183Z (5 months ago)
- Topics: data-science, deep-learning, machine-learning, python, reinforcement-learning
- Language: Python
- Homepage: https://pypi.org/project/pytorch-rllib/
- Size: 55.7 KB
- Stars: 29
- Watchers: 1
- Forks: 0
- Open Issues: 3
- Metadata Files:
    - Readme: README.md
    - License: LICENSE
Awesome Lists containing this project
README
[tests](https://github.com/dayyass/rllib/actions/workflows/tests.yml)
[linter](https://github.com/dayyass/rllib/actions/workflows/linter.yml)
[coverage](https://codecov.io/gh/dayyass/rllib)
[requirements](https://github.com/dayyass/rllib#requirements)
[release](https://github.com/dayyass/rllib/releases/latest)
[license](https://github.com/dayyass/rllib/blob/main/LICENSE)
[pre-commit](https://github.com/dayyass/rllib/blob/main/.pre-commit-config.yaml)
[code style: black](https://github.com/psf/black)
[pypi](https://pypi.org/project/pytorch-rllib)

# rllib
Reinforcement Learning Library

## Installation
```
pip install pytorch-rllib
```
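A quick, optional sanity check after installing: the distribution on PyPI is named `pytorch-rllib`, but the package imports as `rllib`, and the modules used in the usage example below should import cleanly.

```python3
# Sanity check: the PyPI distribution is "pytorch-rllib",
# but the import name is "rllib" (see the usage example below).
from rllib.qlearning import ApproximateQLearningAgent
from rllib.trainer import TrainerTorch
from rllib.utils import set_global_seed

print(ApproximateQLearningAgent, TrainerTorch, set_global_seed)
```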
## Usage

Implemented agents:
- [ ] CrossEntropy
- [ ] Value / Policy Iteration
- [x] Q-Learning
- [x] Expected Value SARSA
- [x] Approximate Q-Learning
- [x] DQN
- [ ] Rainbow
- [ ] REINFORCE
- [ ] A2C

```python3
import gym
import numpy as np
import torch

from rllib.qlearning import ApproximateQLearningAgent
from rllib.trainer import TrainerTorch as Trainer
from rllib.utils import set_global_seed

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# init environment
env = gym.make("CartPole-v0")
set_global_seed(seed=42, env=env)

n_actions = env.action_space.n
n_state = env.observation_space.shape[0]

# init torch model
model = torch.nn.Sequential()
model.add_module("layer1", torch.nn.Linear(n_state, 128))
model.add_module("relu1", torch.nn.ReLU())
model.add_module("layer2", torch.nn.Linear(128, 64))
model.add_module("relu2", torch.nn.ReLU())
model.add_module("values", torch.nn.Linear(64, n_actions))
model = model.to(device)

# init agent
agent = ApproximateQLearningAgent(
    model=model,
    alpha=0.5,
    epsilon=0.5,
    discount=0.99,
    n_actions=n_actions,
)

# train
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

trainer = Trainer(env=env)
train_rewards = trainer.train(
    agent=agent,
    optimizer=optimizer,
    n_epochs=20,
    n_sessions=100,
)

# train results
print(f"Mean train reward: {np.mean(train_rewards[-10:])}")  # reward: 120.318

# inference
inference_reward = trainer.play_session(
    agent=agent,
    t_max=10**4,
)

# inference results
print(f"Inference reward: {inference_reward}") # reward: 171.0
```

More examples can be found [here](https://github.com/dayyass/rllib/tree/main/examples).
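As a rough qualitative check, the trained network from the example above can also be rolled out greedily with the plain `gym` API, without going through rllib's `Trainer`. This is a minimal sketch, not an rllib API: it reuses `env`, `model`, and `device` from the snippet above and assumes the classic 4-tuple `env.step` interface that `CartPole-v0` used at the time.

```python3
# Greedy rollout of the trained value network (plain gym + torch, not an rllib API).
# Reuses env, model, and device from the training example above.
state = env.reset()
total_reward = 0.0
done = False

while not done:
    with torch.no_grad():
        q_values = model(torch.as_tensor(state, dtype=torch.float32, device=device))
    action = int(q_values.argmax().item())  # act greedily w.r.t. the predicted values
    state, reward, done, _ = env.step(action)
    total_reward += reward

print(f"Greedy rollout reward: {total_reward}")
```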
## Requirements
Python >= 3.7

## Citation
If you use **rllib** in a scientific publication, we would appreciate references to the following BibTeX entry:
```bibtex
@misc{dayyass2022rllib,
    author = {El-Ayyass, Dani},
    title = {Reinforcement Learning Library},
    howpublished = {\url{https://github.com/dayyass/rllib}},
    year = {2022}
}
```