Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/daniel-m-campos/deep_rl
Deep Reinforcement Learning examples from Udacity's Deep RL Nano Degree
- Host: GitHub
- URL: https://github.com/daniel-m-campos/deep_rl
- Owner: daniel-m-campos
- Created: 2021-08-09T19:02:38.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-10-06T00:09:27.000Z (about 3 years ago)
- Last Synced: 2023-12-28T04:25:17.626Z (11 months ago)
- Topics: deep-reinforcement-learning, udacity, unity
- Language: Python
- Homepage:
- Size: 2.55 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Deep RL [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
## Environments
1. [Navigation](docs/Navigation.md)
2. [Continuous Control](docs/ContinuousControl.md)
3. [Tennis](docs/Tennis.md)

## Installation
To install the package, clone the repository and use a `virtualenv` to pip-install it in development mode:
```bash
git clone https://github.com/daniel-m-campos/deep_rl.git
cd deep_rl
python -m venv venv  # requires Python 3.6
. venv/bin/activate
pip install -e .
```

### Requirements
See `requirements.txt` and `test-requirements.txt`; both are installed during the `pip install` step.
### Binary dependencies
The package depends on Udacity's Unity environments. See [Environments](#environments) for the binary download links. The default binary paths are set in the `Environment` implementations and are of the form `/usr/local/sbin/.x86_64`; see the `Navigation` class in `environment.py` for an example. You can either symlink the downloaded binaries to the default locations or pass `binary_path` when running the package.

## Usage
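The default-path convention described in the Binary dependencies section can be sketched as a small config class. This is a hypothetical illustration only: the class name, attribute name, and default file name below are assumptions, not code taken from `environment.py`.

```python
# Hypothetical sketch of the default-binary-path convention.
# All names here are assumed; see environment.py for the real implementation.
from dataclasses import dataclass


@dataclass
class UnityEnvironmentConfig:
    # Defaults follow the /usr/local/sbin/<name>.x86_64 convention described
    # above; pass binary_path explicitly to use a binary elsewhere on disk.
    binary_path: str = "/usr/local/sbin/Banana.x86_64"  # assumed file name
```

Passing `binary_path` at the CLI would then override this default instead of requiring a symlink.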
The package provides a [Fire](https://github.com/google/python-fire) CLI for training and playing the agent. To see the basic commands:

```bash
cd deep_rl
. venv/bin/activate
python -m deep_rl --help
```

Where `<command>` is either `train` or `play`. See `deep_rl/__main__.py` as well as the `__init__` method of the `Agent` implementations in `deep_rl/agent.py`.

### Train
To train an agent in the Navigation/Banana Unity environment with default parameters, run:
```bash
cd deep_rl
. venv/bin/activate
python -m deep_rl train navigation
```

To train with custom parameters, run, for example:
```bash
python -m deep_rl train navigation \
--n_episodes=100 \
--save_path=None \
--image_path=None \
--learning_rate=5e-3
```

### Play
#### Navigation
To play an agent in the Navigation/Banana environment with default parameters, run:
```bash
cd deep_rl
. venv/bin/activate
python -m deep_rl play navigation
```

To play with an alternative network, run:
```bash
python -m deep_rl play navigation --load_path="path_to_your/network.pth"
```

#### Continuous Control
To play an agent in the Continuous Control/Reacher environment with default parameters, run:
```bash
cd deep_rl
. venv/bin/activate
python -m deep_rl play continuous_control
```

#### Tennis
To play an agent in the Tennis environment with default parameters, run:
```bash
cd deep_rl
. venv/bin/activate
python -m deep_rl play tennis
```
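The subcommands above are wired up with [Fire](https://github.com/google/python-fire), which maps CLI flags like `--n_episodes=100` directly onto Python function parameters. The following is a minimal sketch of that pattern; the function bodies are stubs and the exact names and parameters are assumptions, not copies of `deep_rl/__main__.py`.

```python
# Minimal sketch of a Fire-style CLI entry point (stubbed for illustration).
def train(environment, n_episodes=2000, learning_rate=5e-4,
          save_path=None, image_path=None):
    """Train an agent in the named environment (stub)."""
    return f"train {environment}: {n_episodes} episodes, lr={learning_rate}"


def play(environment, load_path=None):
    """Run a trained agent in the named environment (stub)."""
    return f"play {environment} (weights: {load_path or 'default'})"


if __name__ == "__main__":
    import fire  # pip install fire

    # Exposes `train` and `play` as subcommands, so
    # `python -m deep_rl train navigation --n_episodes=100`
    # becomes train("navigation", n_episodes=100).
    fire.Fire({"train": train, "play": play})
```

Because Fire inspects the function signatures, adding a new keyword argument to `train` or `play` automatically adds the corresponding `--flag` to the CLI.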