https://github.com/embersarc/ppo
PPO implementation for OpenAI gym environment based on Unity ML Agents
- Host: GitHub
- URL: https://github.com/embersarc/ppo
- Owner: EmbersArc
- Created: 2017-12-29T15:57:46.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-03-17T13:55:53.000Z (about 7 years ago)
- Last Synced: 2025-02-27T05:56:15.713Z (3 months ago)
- Language: Python
- Size: 7.9 MB
- Stars: 149
- Watchers: 10
- Forks: 21
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
# PPO
PPO implementation for OpenAI Gym environments, based on Unity ML Agents: https://github.com/Unity-Technologies/ml-agents

Notable changes include:
* Ability to continuously display training progress using the deterministic (non-stochastic) policy
* Works with OpenAI Gym environments
* Option to record episodes
* State normalization over a given number of frames
* Frame skipping
* Faster reward discounting
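To make the last bullet concrete, here is a minimal sketch of backward-pass reward discounting alongside the clipped surrogate objective that defines PPO (Schulman et al., 2017). The function names and the NumPy formulation are illustrative assumptions, not code taken from this repository.

```python
import numpy as np

def discount_rewards(rewards, gamma=0.99):
    """Compute discounted returns in a single backward pass:
    returns[t] = rewards[t] + gamma * returns[t+1]."""
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective.

    ratio:     pi_new(a|s) / pi_old(a|s) per sample
    advantage: estimated advantages per sample
    eps:       clipping range (0.2 in the original PPO paper)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum makes the objective pessimistic,
    # removing the incentive to move the policy far outside the clip range.
    return np.minimum(unclipped, clipped).mean()
```

For example, `discount_rewards(np.array([0.0, 0.0, 1.0]), gamma=0.5)` propagates the final reward backwards to give `[0.25, 0.5, 1.0]`.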