https://github.com/gsurma/cartpole
OpenAI's cartpole env solver.
ai cartpole cartpole-v1 dqn dqn-solver machine-learning openai openai-gym python python27 reinforcement-learning
- Host: GitHub
- URL: https://github.com/gsurma/cartpole
- Owner: gsurma
- License: MIT
- Created: 2018-08-18T00:25:15.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2023-02-17T21:27:04.000Z (about 3 years ago)
- Last Synced: 2025-03-24T08:19:05.991Z (about 1 year ago)
- Topics: ai, cartpole, cartpole-v1, dqn, dqn-solver, machine-learning, openai, openai-gym, python, python27, reinforcement-learning
- Language: Python
- Homepage: https://gsurma.github.io
- Size: 1.04 MB
- Stars: 154
- Watchers: 9
- Forks: 113
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
# Cartpole
A Reinforcement Learning solution to [OpenAI's CartPole environment](https://gym.openai.com/envs/CartPole-v0/).
Check out the corresponding Medium article: [Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)](https://towardsdatascience.com/cartpole-introduction-to-reinforcement-learning-ed0eb5b58288)
## About
> A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. [source](https://gym.openai.com/envs/CartPole-v0/)
## DQN
Standard DQN with Experience Replay.
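Experience replay stores past transitions in a fixed-size buffer and trains on random minibatches drawn from it, which decorrelates consecutive samples. A minimal sketch of such a buffer, assuming a deque-backed memory with `remember`/`sample` methods (illustrative names, not necessarily this repo's exact code):

```python
import random
from collections import deque

MEMORY_SIZE = 1_000_000  # matches the MEMORY_SIZE hyperparameter listed below
BATCH_SIZE = 20          # matches the BATCH_SIZE hyperparameter listed below

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=MEMORY_SIZE):
        # deque(maxlen=...) evicts the oldest transition automatically when full
        self.buffer = deque(maxlen=capacity)

    def remember(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=BATCH_SIZE):
        # Uniform random minibatch; only valid once enough transitions are stored.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Each environment step is `remember`ed, and once the buffer holds at least one batch, the agent trains on a fresh random `sample` every step.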
### Hyperparameters:
* GAMMA = 0.95
* LEARNING_RATE = 0.001
* MEMORY_SIZE = 1000000
* BATCH_SIZE = 20
* EXPLORATION_MAX = 1.0
* EXPLORATION_MIN = 0.01
* EXPLORATION_DECAY = 0.995
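With these values, epsilon starts at EXPLORATION_MAX, is multiplied by EXPLORATION_DECAY after every training step, and is floored at EXPLORATION_MIN, so the agent acts almost entirely greedily after roughly 919 steps (the smallest n with 0.995^n < 0.01). A small sketch of that schedule (the helper function is illustrative):

```python
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.995

def decayed_epsilon(steps):
    """Epsilon after `steps` multiplicative decays, floored at EXPLORATION_MIN."""
    return max(EXPLORATION_MIN, EXPLORATION_MAX * EXPLORATION_DECAY ** steps)

# Count how many decays it takes to hit the floor.
steps = 0
eps = EXPLORATION_MAX
while eps > EXPLORATION_MIN:
    eps *= EXPLORATION_DECAY
    steps += 1
# steps is now 919
```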
### Model structure:
1. Dense layer - input: **4**, output: **24**, activation: **relu**
2. Dense layer - input: **24**, output: **24**, activation: **relu**
3. Dense layer - input: **24**, output: **2**, activation: **linear**
* **MSE** loss function
* **Adam** optimizer
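The three dense layers map the 4-dimensional CartPole observation to 2 Q-values, one per action (push left, push right). A numpy sketch of that forward pass to make the shapes concrete (the weights here are random placeholders, not the trained Keras model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer shapes 4 -> 24 -> 24 -> 2, mirroring the structure listed above.
W1, b1 = rng.normal(size=(4, 24)), np.zeros(24)
W2, b2 = rng.normal(size=(24, 24)), np.zeros(24)
W3, b3 = rng.normal(size=(24, 2)), np.zeros(2)

def relu(x):
    return np.maximum(0.0, x)

def q_values(state):
    """Forward pass: two ReLU hidden layers, linear output of per-action Q-values."""
    h1 = relu(state @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3  # linear activation on the output layer

state = np.array([0.0, 0.1, 0.02, -0.1])  # example 4-dimensional observation
q = q_values(state)
action = int(np.argmax(q))  # greedy action: 0 or 1
```

Training fits this network's output toward the Bellman targets with MSE loss under the Adam optimizer, as listed above.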
## Performance
> CartPole-v0 defines "solving" as getting average reward of 195.0 over 100 consecutive trials. [source](https://gym.openai.com/envs/CartPole-v0/)
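By that definition, the run counts as solved once the mean total reward over the last 100 episodes reaches 195. A minimal check one might run after each episode (the 195/100 thresholds come straight from the CartPole-v0 spec quoted above; the function name is illustrative):

```python
from collections import deque

SOLVED_THRESHOLD = 195.0
WINDOW = 100

recent_rewards = deque(maxlen=WINDOW)  # keeps only the last 100 episode scores

def record_and_check(episode_reward):
    """Append an episode's total reward; report whether the env counts as solved."""
    recent_rewards.append(episode_reward)
    if len(recent_rewards) < WINDOW:
        return False  # not enough episodes for a 100-episode average yet
    return sum(recent_rewards) / WINDOW >= SOLVED_THRESHOLD

# e.g. 100 perfect CartPole-v0 episodes (max score 200) solve the environment:
for _ in range(100):
    solved = record_and_check(200.0)
# solved is now True
```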
##### Example trial gif

##### Example trial chart

##### Solved trials chart

## Author
**Greg (Grzegorz) Surma**
[**PORTFOLIO**](https://gsurma.github.io)
[**GITHUB**](https://github.com/gsurma)
[**BLOG**](https://medium.com/@gsurma)