Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/fedebotu/vision-cartpole-dqn
Implementation of the CartPole from OpenAI's Gym using only visual input for Reinforcement Learning control with DQN
- Host: GitHub
- URL: https://github.com/fedebotu/vision-cartpole-dqn
- Owner: fedebotu
- License: gpl-3.0
- Created: 2019-07-15T09:55:30.000Z (over 5 years ago)
- Default Branch: save_model
- Last Pushed: 2021-04-08T04:10:47.000Z (over 3 years ago)
- Last Synced: 2023-03-04T22:57:51.325Z (almost 2 years ago)
- Topics: cartpole, computer-vision, dqn, inverted-pendulum, model-free, model-free-control, model-free-rl, openai-gym, reinforcement-learning, vision-cartpole
- Language: Jupyter Notebook
- Homepage:
- Size: 1.97 MB
- Stars: 9
- Watchers: 0
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Vision-Based CartPole with DQN
Implementation of the CartPole from OpenAI's Gym using only visual input
for Reinforcement Learning control with Deep Q-Networks
***Author:*** Federico Berto
Thesis Project for University of Bologna:
*Reinforcement Learning: a Preliminary Study on Vision-Based Control*

A special thanks goes to Adam Paszke for a first implementation of the
DQN algorithm with vision input in the CartPole-v0 environment from
OpenAI Gym.
The goal of this project is to design a control system for stabilizing a
cart and pole using deep reinforcement learning, having only images as
control inputs. We implement vision-based control using the DQN algorithm
combined with a convolutional neural network for Q-value approximation.
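Since the point is to use pixels rather than the 4-dimensional state vector, here is a minimal sketch of how raw frames can be grabbed with the classic `gym` 0.x API; the notebook may fetch and process frames differently:

```python
import gym

# Minimal sketch (classic gym 0.x API): grab the rendered screen as pixels
# instead of the 4-dimensional state vector.
env = gym.make("CartPole-v0")
env.reset()

frame = env.render(mode="rgb_array")  # H x W x 3 uint8 RGB array
print(frame.shape, frame.dtype)       # e.g. (400, 600, 3) uint8

env.close()
```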
The last two frames of the CartPole screen are used as input, cropped and
processed before being fed to the neural network. In order to stabilize
the training, we use an experience replay buffer, as shown in the paper
"Playing Atari with Deep Reinforcement Learning". In addition, a target
network is used to further stabilize the training process.
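As an illustration of these two stabilization tricks, here is a minimal sketch of a uniform replay buffer and a target-network update in PyTorch; the tensor shapes, hyperparameter values, and function names are assumptions, not the repository's exact code:

```python
import random
from collections import deque, namedtuple

import torch
import torch.nn as nn

# Illustrative sketch, not the repository's code. Stored tensors are assumed
# shaped: state/next_state (1, C, H, W), action (1, 1) int64,
# reward (1,) float, done (1,) float in {0, 1}.
Transition = namedtuple("Transition",
                        ("state", "action", "reward", "next_state", "done"))

class ReplayBuffer:
    """Fixed-size buffer of past transitions, sampled uniformly at random."""
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, *args):
        self.memory.append(Transition(*args))

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

def optimize(policy_net, target_net, buffer, optimizer,
             batch_size=32, gamma=0.99):
    """One DQN update step from a random minibatch of past transitions."""
    if len(buffer) < batch_size:
        return
    batch = Transition(*zip(*buffer.sample(batch_size)))
    states = torch.cat(batch.state)
    actions = torch.cat(batch.action)        # (B, 1)
    rewards = torch.cat(batch.reward)        # (B,)
    next_states = torch.cat(batch.next_state)
    dones = torch.cat(batch.done)            # (B,)

    # Q(s, a) from the online network, for the actions actually taken.
    q_sa = policy_net(states).gather(1, actions).squeeze(1)
    # Bootstrapped target computed with the frozen target network.
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values
        target = rewards + gamma * (1.0 - dones) * q_next

    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Every TARGET_UPDATE steps, the target network is re-synced:
#     target_net.load_state_dict(policy_net.state_dict())
```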
Since overly long training can make the training not converge, we set a
threshold for stopping training when we detect stable improvements: this
way we learn the optimal behavior without saturation.
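A minimal sketch of such a stopping rule follows; the threshold, the window size, and the `run_episode` stand-in are illustrative, not values taken from the repository:

```python
import random
from collections import deque

SOLVE_THRESHOLD = 195.0   # average score regarded as a stable improvement
WINDOW = 100              # number of recent episodes to average over
MAX_EPISODES = 1000

def run_episode():
    """Stand-in for one full training episode; returns the episode score."""
    return random.uniform(0.0, 200.0)

recent_scores = deque(maxlen=WINDOW)
for episode in range(MAX_EPISODES):
    recent_scores.append(run_episode())
    # Stop as soon as the rolling average clears the threshold.
    if (len(recent_scores) == WINDOW
            and sum(recent_scores) / WINDOW >= SOLVE_THRESHOLD):
        print(f"Stable improvement at episode {episode}; stopping training.")
        break
```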
## Version 1
This version is less polished and comes as a `.py` file.
The GUI is a handy tool for saving and loading trained models, and also for
starting and stopping training. Models and graphs are saved in
`Vision_Cartpole/save_model` and `Vision_Cartpole/save_graph`, respectively.

## Version 2
This `.ipynb` (Jupyter Notebook) version is clearer and trains more stably.
The architecture is as follows.
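The sketch below is in the style of Adam Paszke's DQN tutorial, which this project credits; the exact layer sizes used in the notebook may differ:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Convolutional Q-network: screen pixels in, one Q-value per action out.
    Layer sizes follow the PyTorch DQN tutorial and are assumptions here."""
    def __init__(self, h, w, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, stride=2),
            nn.BatchNorm2d(32), nn.ReLU(),
        )

        def out_size(size):
            # Spatial size after the three stride-2, kernel-5 convolutions.
            for _ in range(3):
                size = (size - 5) // 2 + 1
            return size

        self.head = nn.Linear(out_size(h) * out_size(w) * 32, n_actions)

    def forward(self, x):
        x = self.conv(x)                           # (B, 32, h', w')
        return self.head(x.flatten(start_dim=1))  # (B, n_actions)

# Example: a 40x90 cropped screen, two actions (push left / push right).
net = QNetwork(40, 90, 2)
q_values = net(torch.zeros(1, 3, 40, 90))
```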
You can find more information in the PDF report as well.
Final score averaged over 6 runs (mean ± std).
If you want to improve this project, your help is always welcome! 😄