Numpy-only Q-learning Neural Network
https://github.com/alvinwan/deep-q-learning-simplified
- Host: GitHub
- URL: https://github.com/alvinwan/deep-q-learning-simplified
- Owner: alvinwan
- License: apache-2.0
- Created: 2017-03-28T08:44:43.000Z (almost 9 years ago)
- Default Branch: master
- Last Pushed: 2017-05-20T21:44:32.000Z (almost 9 years ago)
- Last Synced: 2025-04-07T03:42:43.544Z (12 months ago)
- Topics: atari, deep-reinforcement-learning
- Language: Python
- Homepage:
- Size: 4.32 MB
- Stars: 4
- Watchers: 0
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Deep Q-learning Neural Network (Simplified)
This repository provides a simplified, CPU-only implementation of deep Q-learning that does not depend on Tensorflow. For the full version, see the [Tensorflow implementation](http://github.com/alvinwan/deep-q-learning).
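At its core, Q-learning trains the network to regress toward the Bellman target `r + γ max_a' Q(s', a')`. A minimal NumPy sketch of that target computation, assuming a batch of next-state Q-values has already been computed (all names here are illustrative, not the repository's actual API):

```python
import numpy as np

def q_learning_targets(q_next, rewards, dones, gamma=0.99):
    """Compute Bellman targets r + gamma * max_a' Q(s', a').

    q_next:  (batch, n_actions) Q-values for the next states
    rewards: (batch,) rewards observed at each transition
    dones:   (batch,) 1.0 where the episode terminated, else 0.0
    """
    # Terminal transitions bootstrap nothing: mask out the next-state value.
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

q_next = np.array([[0.5, 1.0], [2.0, 0.0]])  # two transitions, two actions
rewards = np.array([1.0, -1.0])
dones = np.array([0.0, 1.0])                 # second transition is terminal
targets = q_learning_targets(q_next, rewards, dones)
# First target bootstraps (1 + 0.99 * 1.0); second is just its reward.
```

The `(1.0 - dones)` mask is the standard way to zero out the bootstrap term at episode boundaries without branching per sample.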
# Install
The project is written in Python 3 and is not guaranteed to backport successfully to Python 2.
(Optional) We recommend setting up a virtual environment.
```
virtualenv dqn --python=python3
source dqn/bin/activate
```
Let `$DQN_ROOT` denote the root of your clone of this repository. Navigate there:
```
cd $DQN_ROOT
```
Install the Python dependencies:
```
pip install -r requirements.txt
```
# Run
```
python run_dqn.py
```
Here are full usage instructions:
```
Usage:
    run_dqn.py [options]

Options:
    --batch-size=    Batch size [default: 32]
    --envid=         Environment id [default: SpaceInvadersNoFrameskip-v3]
    --timesteps=     Number of timesteps to run [default: 40000000]
```
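Any subset of the options can be overridden on the command line; the rest fall back to their defaults. For example, a shorter training run with a larger batch might look like this (the flag values are illustrative, not recommendations):

```shell
python run_dqn.py --batch-size=64 --timesteps=1000000
```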