Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.


https://github.com/fedebotu/vision-cartpole-dqn

Implementation of the CartPole from OpenAI's Gym using only visual input for Reinforcement Learning control with DQN

cartpole computer-vision dqn inverted-pendulum model-free model-free-control model-free-rl openai-gym reinforcement-learning vision-cartpole




README

# Vision-Based CartPole with DQN


Python | PyTorch

Implementation of the CartPole from OpenAI's Gym using only visual input
for Reinforcement Learning control with Deep Q-Networks



***Author:*** Federico Berto

Thesis Project for University of Bologna;
Reinforcement Learning: a Preliminary Study on Vision-Based Control

A special thanks goes to Adam Paszke for a first implementation of the
DQN algorithm with vision input in the CartPole-v0 environment from
OpenAI Gym.

The goal of this project is to design a control system that stabilizes a
cart and pole using Deep Reinforcement Learning, with images as the only
control input. We implement vision-based control with the DQN algorithm,
using a Convolutional Neural Network to approximate the Q-values.
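As a minimal sketch of such a CNN Q-network in PyTorch (layer sizes and the input resolution here are illustrative, not necessarily this repository's exact architecture):

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of processed frames to one Q-value per action.

    Layer sizes are illustrative; the repository's exact architecture
    is shown in its own diagram and PDF report.
    """
    def __init__(self, in_channels: int = 2, n_actions: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        # LazyLinear infers the flattened feature size on the first forward pass
        self.head = nn.LazyLinear(n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        return self.head(x.flatten(start_dim=1))
```

Feeding the last two cropped frames as two input channels, a batch of shape `(N, 2, H, W)` yields Q-values of shape `(N, n_actions)`.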


*Agent-environment interaction diagram*

The last two frames of the CartPole environment are used as input, cropped and
processed before being fed to the neural network. To stabilize training,
we use an experience replay buffer, as described in the paper "Playing Atari
with Deep Reinforcement Learning" (Mnih et al., 2013).
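The replay buffer described above can be sketched in a few lines of plain Python; the `Transition` field names are illustrative, not the repository's exact ones:

```python
import random
from collections import deque, namedtuple

# One stored interaction step (field names are illustrative).
Transition = namedtuple("Transition", ("state", "action", "next_state", "reward"))

class ReplayBuffer:
    """Fixed-size buffer of past transitions.

    Sampling minibatches uniformly at random breaks the correlation
    between consecutive frames, which stabilizes DQN training
    (Mnih et al., 2013).
    """
    def __init__(self, capacity: int):
        self.memory = deque(maxlen=capacity)  # oldest entries drop out first

    def push(self, *args) -> None:
        self.memory.append(Transition(*args))

    def sample(self, batch_size: int):
        return random.sample(self.memory, batch_size)

    def __len__(self) -> int:
        return len(self.memory)
```

A `deque` with `maxlen` gives the sliding-window behavior for free: once the buffer is full, every `push` silently evicts the oldest transition.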

In addition, a target network is used to further stabilize the training
process. Since instabilities can keep the training from converging, we set
a threshold for stopping training once we detect stable improvement: this
way the agent learns the optimal behavior without saturation.
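One way to express such a stopping rule is a moving average over recent episode durations; this is a hypothetical sketch of the idea, not the repository's exact threshold logic (the class name and window size are made up for illustration):

```python
from collections import deque

class StableImprovementStopper:
    """Signals that training should stop once the moving average of
    recent episode durations stays at or above a target threshold.

    Hypothetical helper for illustration; the repository's actual
    stopping criterion may differ in window size and threshold.
    """
    def __init__(self, threshold: float, window: int = 10):
        self.threshold = threshold
        self.recent = deque(maxlen=window)  # sliding window of durations

    def update(self, episode_duration: float) -> bool:
        """Record one episode's duration; return True when it is time to stop."""
        self.recent.append(episode_duration)
        full = len(self.recent) == self.recent.maxlen
        return full and sum(self.recent) / len(self.recent) >= self.threshold
```

Stopping on a windowed average, rather than a single good episode, avoids halting on a lucky run while the policy is still unstable.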

## Version 1

This version is less polished and lives in a `.py` file.
The GUI is a handy tool for saving and loading trained models, and for
starting and stopping training. Models and graphs are saved in
`Vision_Cartpole/save_model` and `Vision_Cartpole/save_graph` respectively.

## Version 2
This `.ipynb` (Jupyter Notebook) version is cleaner and trains more stably.
The architecture is as follows:


*DQN architecture diagram*

You can find more information in the PDF report.


*DQN final score over 6 runs (mean ± std)*

If you want to improve this project, your help is always welcome! 😄