https://github.com/mo42/rleval
Evaluate (test and compare) Reinforcement Learning Algorithms
- Host: GitHub
- URL: https://github.com/mo42/rleval
- Owner: mo42
- Created: 2016-11-25T20:12:00.000Z (almost 9 years ago)
- Default Branch: master
- Last Pushed: 2024-02-04T19:09:56.000Z (over 1 year ago)
- Last Synced: 2025-03-06T11:06:26.850Z (7 months ago)
- Topics: agent, algorithm, gradient-descent, reinforcement-learning, reward
- Language: C++
- Homepage:
- Size: 45.9 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# RLTest
RLTest is a C++ library for testing reinforcement learning algorithms. It was
developed in parallel to a course on reinforcement learning in 2016. The main goal
was to implement several reinforcement learning [algorithms](#algorithms) and
several artificial [worlds](#worlds) in which the actions of these algorithms can be
evaluated and compared.
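The split between worlds and algorithms suggests a small environment interface. The sketch below is purely illustrative and does not reproduce RLTest's actual classes; the `World` and `Agent` names, their methods, and the state/action types are assumptions.

```cpp
#include <vector>

// Hypothetical environment interface (not RLTest's actual API):
// a world exposes its state, accepts an action, and returns a reward.
struct World {
  virtual ~World() = default;
  virtual std::vector<double> state() const = 0;  // current observation
  virtual double step(int action) = 0;            // apply action, return reward
  virtual bool terminal() const = 0;              // episode finished?
  virtual void reset() = 0;                       // start a new episode
};

// An agent picks actions and learns from the observed transition.
struct Agent {
  virtual ~Agent() = default;
  virtual int act(const std::vector<double>& state) = 0;
  virtual void learn(const std::vector<double>& state, int action,
                     double reward, const std::vector<double>& next_state) = 0;
};
```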
## Requirements

- C++ compiler
- CMake

## Installation
```sh
git clone https://github.com/mo42/RLEval.git && cd RLEval
git submodule update --init --recursive
mkdir build && cd build
cmake ../
make
```

## Algorithms
- PoWER
- Simple Policy Gradient
- SARSA (discrete)
- Q-learning (discrete)
- TDLearning (discrete)
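As a reference point for the discrete algorithms in this list, the snippet below sketches a tabular Q-learning update. It is a generic illustration of the algorithm, not code taken from RLTest; the table layout and the `alpha` and `gamma` defaults are assumptions.

```cpp
#include <algorithm>
#include <vector>

// Generic tabular Q-learning update (illustrative, not RLTest's implementation).
// q[s][a] holds the action-value estimate for state s and action a.
void q_learning_update(std::vector<std::vector<double>>& q,
                       int state, int action, double reward, int next_state,
                       double alpha = 0.1, double gamma = 0.99) {
  // Greedy value of the successor state: max_a' Q(s', a')
  double best_next =
      *std::max_element(q[next_state].begin(), q[next_state].end());
  // Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
  q[state][action] += alpha * (reward + gamma * best_next - q[state][action]);
}
```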
## Worlds

- Cart pole world (balancing a pole on a cart)
- Mountain car world (drive car uphill by building up momentum)
- Simple and discrete maze world
- Discrete cliff world (gridworld where stepping off the cliff incurs a large negative reward)
- Adapter world (a continuous world that can be instantiated with a discrete
  world; with one-hot encoding, algorithms for continuous worlds can operate on
  discrete worlds, as shown in the sketch below)
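The one-hot idea behind the adapter world can be illustrated with a short helper that maps a discrete state index to a continuous feature vector. This is a generic sketch, not the library's actual adapter; the function name and signature are assumptions.

```cpp
#include <cstddef>
#include <vector>

// One-hot encode a discrete state: state i of n becomes a length-n vector
// that is 1.0 at position i and 0.0 elsewhere. A continuous-state algorithm
// can then treat the discrete world as if it had real-valued observations.
std::vector<double> one_hot(std::size_t state, std::size_t num_states) {
  std::vector<double> features(num_states, 0.0);
  features[state] = 1.0;
  return features;
}
```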