https://github.com/VinF/deer
DEEp Reinforcement learning framework
- Host: GitHub
- URL: https://github.com/VinF/deer
- Owner: VinF
- License: other
- Created: 2016-01-21T10:29:30.000Z (over 9 years ago)
- Default Branch: master
- Last Pushed: 2024-05-01T15:00:31.000Z (over 1 year ago)
- Last Synced: 2024-11-19T17:29:42.295Z (11 months ago)
- Topics: deep-reinforcement-learning, policy-gradient, q-learning
- Language: Python
- Homepage:
- Size: 12.6 MB
- Stars: 485
- Watchers: 50
- Forks: 126
- Open Issues: 4
- Metadata Files:
  - Readme: README.rst
  - License: LICENSE
README
======
.. -*- mode: rst -*-
|Python27|_ |Python36|_ |PyPi|_ |License|_
.. |Python27| image:: https://img.shields.io/badge/python-2.7-blue.svg
.. _Python27: https://badge.fury.io/py/deer

.. |Python36| image:: https://img.shields.io/badge/python-3.6-blue.svg
.. _Python36: https://badge.fury.io/py/deer

.. |PyPi| image:: https://badge.fury.io/py/deer.svg
.. _PyPi: https://badge.fury.io/py/deer

.. |License| image:: https://img.shields.io/badge/license-MIT-blue.svg
.. _License: https://github.com/VinF/deer/blob/master/LICENSE

DeeR
====

DeeR is a Python library for deep reinforcement learning. It is built with modularity in mind so that it can easily be adapted to any need. It provides many techniques out of the box, such as double Q-learning, prioritized experience replay, deep deterministic policy gradient (DDPG), and Combined Reinforcement via Abstract Representations (CRAR). Many different environment examples are also provided (some of them using OpenAI Gym).
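To make the first of these concrete, here is a minimal, framework-independent sketch of the double Q-learning target; it illustrates the idea only and is not DeeR's actual implementation:

```python
# Sketch of the double Q-learning target for a single transition.
# Illustration only -- not DeeR's implementation. Q-values are plain
# lists indexed by action, from an "online" and a "target" estimator.

def double_q_target(reward, next_q_online, next_q_target,
                    gamma=0.99, terminal=False):
    """Double Q-learning target: select the next action with the online
    estimator, but evaluate it with the target estimator, which reduces
    the overestimation bias of vanilla Q-learning."""
    if terminal:
        return reward
    # Action selection uses the online estimator...
    best_action = max(range(len(next_q_online)),
                      key=lambda a: next_q_online[a])
    # ...while the value estimate comes from the target estimator.
    return reward + gamma * next_q_target[best_action]

# The online estimator prefers action 1, the target estimator values
# that action at 2.0, so the target is 1.0 + 0.5 * 2.0 = 2.0.
print(double_q_target(1.0, [0.5, 3.0], [1.0, 2.0], gamma=0.5))  # -> 2.0
```

In DeeR itself, the two estimators correspond to the learned Q-network and its periodically updated copy; the sketch above only shows how their roles are split when forming the target.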
Dependencies
============

This framework is tested to work under Python 3.6.
The required dependencies are NumPy >= 1.10 and joblib >= 0.9. You also need Keras >= 2.6.
For running the examples, Matplotlib >= 1.1.1 is required.
For running the Atari games environment, you need to install ALE >= 0.4.

Full Documentation
==================

The documentation is available at: http://deer.readthedocs.io/
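As a convenience, the version requirements listed above can be checked programmatically. The sketch below assumes each dependency is importable under its usual name and exposes a ``__version__`` string; ALE is omitted because it is not importable under that name:

```python
# Check the dependency versions stated in the Dependencies section.
# Assumptions: each package is importable and has a __version__ string.
import importlib

REQUIREMENTS = {              # import name -> minimum version
    "numpy": (1, 10),
    "joblib": (0, 9),
    "keras": (2, 6),
    "matplotlib": (1, 1, 1),  # only needed for the examples
}

def version_tuple(text):
    """Turn a version string like '1.10.4' into a comparable tuple,
    ignoring any non-numeric suffix such as 'rc1' or 'dev'."""
    parts = []
    for piece in text.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def check(requirements):
    """Return the subset of requirements that are missing or too old."""
    problems = {}
    for name, minimum in requirements.items():
        try:
            module = importlib.import_module(name)
        except ImportError:
            problems[name] = "not installed"
            continue
        version = getattr(module, "__version__", "0")
        if version_tuple(version) < minimum:
            problems[name] = version
    return problems
```

Running ``check(REQUIREMENTS)`` returns an empty dict when the environment satisfies the stated minimums, which makes it easy to fail fast before launching an experiment.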