https://github.com/sash-a/scalablehrles.jl
OpenAI's ES used in a feudal HRL style
- Host: GitHub
- URL: https://github.com/sash-a/scalablehrles.jl
- Owner: sash-a
- Created: 2021-09-09T08:11:47.000Z (almost 4 years ago)
- Default Branch: master
- Last Pushed: 2022-01-24T08:49:57.000Z (over 3 years ago)
- Last Synced: 2025-06-04T04:08:48.635Z (24 days ago)
- Topics: deep-neuroevolution, evolution-strategies, hierarchical-reinforcement-learning, reinforcement-learning
- Language: Julia
- Homepage:
- Size: 1.51 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
README
# Scalable hierarchical evolution strategies
This is mostly Julia's wonderful multiple dispatch on top of my [ScalableES](https://github.com/sash-a/ScalableES.jl/) codebase. Most of the structs here simply hold two instances of the corresponding ScalableES type, one for the controller (c) and another for the primitive (p). For example:
```
mutable struct HrlAdam <: ScalableES.AbstractOptim
    copt::ScalableES.Adam  # optimiser for the controller
    popt::ScalableES.Adam  # optimiser for the primitive

    HrlAdam(cdim::Int, pdim::Int, lr::Real) = new(ScalableES.Adam(cdim, lr), ScalableES.Adam(pdim, lr))
end
```
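
To see why this pairing works so nicely with multiple dispatch, here is a minimal, self-contained sketch of the same pattern. The `ToyAdam`, `ToyHrlAdam` and `optstep` names are illustrative stand-ins, not part of ScalableES or this package: code written against the single-optimiser method keeps working, and the HRL method simply delegates to its controller and primitive halves.

```
# Illustrative stand-ins, not the real ScalableES types.
mutable struct ToyAdam
    lr::Float64
end

# Paired optimiser in the same spirit as HrlAdam: one half per network.
mutable struct ToyHrlAdam
    copt::ToyAdam  # controller
    popt::ToyAdam  # primitive
end
ToyHrlAdam(lr::Real) = ToyHrlAdam(ToyAdam(lr), ToyAdam(lr))

# Plain gradient step for a single network.
optstep(o::ToyAdam, grad) = -o.lr .* grad

# The HRL method simply delegates to both halves, so code written against
# the single-optimiser method keeps working with the paired variant.
optstep(o::ToyHrlAdam, cgrad, pgrad) = (optstep(o.copt, cgrad), optstep(o.popt, pgrad))

optstep(ToyHrlAdam(0.5), [1.0, 2.0], [3.0])  # ([-0.5, -1.0], [-1.5])
```

The structs in this package follow the same recipe: hold two of the corresponding ScalableES type and let dispatch route each half to the existing ScalableES code.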

## How to run
For install instructions, see [ScalableES](https://github.com/sash-a/ScalableES.jl/), then run:
```
julia --project -t 8 scripts/runner.jl config/cfg.yml
```
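
The `-t 8` flag starts Julia with 8 threads; adjust it to your machine. The actual schema of `config/cfg.yml` is not reproduced here, so purely as a sketch of the kind of settings an ES/HRL run is typically configured with, a hypothetical config might look like the following (every key below is an assumption, not the project's real format):

```
# Hypothetical sketch only -- the real keys in config/cfg.yml are defined by the
# project and may differ.
env: AntGather          # environment to train on
generations: 1000       # number of ES generations
population_size: 256    # perturbations evaluated per generation
noise_std: 0.02         # standard deviation of parameter noise
lr: 0.01                # Adam learning rate (controller and primitive)
c_interval: 25          # hypothetical: primitive steps per controller decision
```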

## Examples
Environments that have been trained using this method:
### Ant Gather
### Ant Maze
### Ant Push
