https://github.com/khaledsharif/omniverse-gym
Examples of how to use NVIDIA Omniverse Isaac Sim to solve Reinforcement Learning Games (rl-games)
- Host: GitHub
- URL: https://github.com/khaledsharif/omniverse-gym
- Owner: KhaledSharif
- License: mit
- Created: 2024-05-04T22:40:17.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-06-01T20:48:38.000Z (7 months ago)
- Last Synced: 2024-06-01T22:48:28.692Z (7 months ago)
- Topics: reinforcement-learning, robotics, simulation
- Language: Python
- Homepage:
- Size: 217 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# omniverse-gym
Examples of how to use NVIDIA Omniverse Isaac Sim to solve Reinforcement Learning Games (RL-Games)

## Installation
Follow the Isaac Sim [documentation](https://docs.omniverse.nvidia.com/isaacsim/latest/installation/install_workstation.html) to install the latest Isaac Sim release (2023.1.1).
To install `omniverse-gym`, first clone this repository:
```bash
git clone https://github.com/KhaledSharif/omniverse-gym.git
```

Once cloned, locate the [python executable in Isaac Sim](https://docs.omniverse.nvidia.com/isaacsim/latest/installation/install_python.html). By default, this should be `python.sh`. We will refer to this path as `PYTHON_PATH`.

To set a `PYTHON_PATH` variable in the terminal that links to the python executable, run a command like the following, updating the path to match your local installation. For Linux:
```bash
alias PYTHON_PATH=~/.local/share/ov/pkg/isaac_sim-2023.1.1/python.sh
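# Sanity check: this should print the version of the Python interpreter
# bundled with Isaac Sim, confirming the alias points at the right script
PYTHON_PATH --version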
```

Install the repository and its dependencies:
```bash
PYTHON_PATH -m pip install -e .
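# Optional: confirm the editable install registered with Isaac Sim's pip
# (the exact package name is defined by this repo's setup files)
PYTHON_PATH -m pip list | grep -i omni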
```

To run a simple form of PPO from `rl_games`, use the single-threaded training script:
```bash
PYTHON_PATH run.py task=Cartpole
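# Assumption (OmniIsaacGymEnvs-style Hydra convention): headless=True should
# disable the viewer for faster training; verify against this repo's configs
PYTHON_PATH run.py task=Cartpole headless=True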
```

The result is saved to the current working directory in a new directory called `runs`.
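Judging from the checkpoint path used in the evaluation command below, that directory should resemble the following layout (exact contents may vary with the rl-games version and task):

```
runs/
└── Cartpole/
    └── nn/
        └── Cartpole.pth   # saved model checkpoint
```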
You can now evaluate your model by running the same environment in test (inference) mode using the saved model checkpoint.
```bash
PYTHON_PATH run.py task=Cartpole test=True checkpoint=runs/Cartpole/nn/Cartpole.pth
```
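The `checkpoint` argument is not limited to evaluation. Assuming the repository follows the OmniIsaacGymEnvs/Hydra conventions that `rl_games` tooling is built around, passing a checkpoint without `test=True` should resume training from the saved weights; treat this as an assumption to verify against the repo's configuration files:

```bash
# Assumption (OmniIsaacGymEnvs-style convention): omitting test=True while
# passing checkpoint resumes training rather than running inference
PYTHON_PATH run.py task=Cartpole checkpoint=runs/Cartpole/nn/Cartpole.pth
```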