https://github.com/imoneoi/xrl-script
Efficient AutoRL script for any framework
- Host: GitHub
- URL: https://github.com/imoneoi/xrl-script
- Owner: imoneoi
- License: MIT
- Created: 2020-08-05T02:49:13.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-08-05T03:16:14.000Z (over 5 years ago)
- Last Synced: 2024-10-29T12:37:47.475Z (over 1 year ago)
- Topics: auto-ml, deep-reinforcement-learning, distributed, hyperparameter-optimization, hyperparameter-search, parallel, reinforcement-learning
- Language: Python
- Homepage:
- Size: 5.86 KB
- Stars: 5
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# XRL-Script
Efficient distributed AutoRL script for any framework
## Features
- [x] Distributed training on a single machine with multiple CPUs/GPUs
- [x] Automatic resource allocation
- [x] TPESampler and Hyperband pruner support
- [ ] Automatic testing for a stable hyperparameter set
## Usage
*In `rl.py`*
1. Implement your training logic in `train_rl_agent`
2. Specify the hyperparameter ranges
*In `rl-auto-gpu.py`*
1. Specify the minimum and maximum steps per trial, and the number of trials
2. Set parallel trials per GPU & parallel envs per trial according to your hardware
3. Set the reduction factor (an integer). In most situations, choose one that keeps the Hyperband bracket number within [4, 6]
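For step 3, the Hyperband bracket count follows from the budget ratio and the reduction factor: brackets = floor(log_eta(max/min)) + 1. This stdlib-only helper (an illustrative addition, not part of the script) shows how to pick a reduction factor that lands in [4, 6]:

```python
def hyperband_brackets(min_steps: int, max_steps: int, reduction_factor: int) -> int:
    """Number of Hyperband brackets: floor(log_eta(max/min)) + 1.

    Computed with a multiplication loop instead of math.log to avoid
    floating-point edge cases at exact powers of the reduction factor.
    """
    s_max = 0
    budget = min_steps
    while budget * reduction_factor <= max_steps:
        budget *= reduction_factor
        s_max += 1
    return s_max + 1


# Example budget: 1e4 to 1e6 steps per trial
for eta in (2, 3, 4):
    print(eta, hyperband_brackets(10_000, 1_000_000, eta))
# eta=2 -> 7 brackets, eta=3 -> 5, eta=4 -> 4;
# so a reduction factor of 3 or 4 stays in the suggested [4, 6] range here.
```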
**Then run `rl-auto-gpu.py`; the script will find all available GPUs and run the hyperparameter search in parallel.**
## Dependencies
- [Optuna](https://github.com/optuna/optuna)