Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Isaac Gym Environments for Legged Robots
https://github.com/leggedrobotics/legged_gym
- Host: GitHub
- URL: https://github.com/leggedrobotics/legged_gym
- Owner: leggedrobotics
- License: other
- Created: 2021-10-18T12:50:04.000Z (about 3 years ago)
- Default Branch: master
- Last Pushed: 2024-08-02T10:52:49.000Z (5 months ago)
- Last Synced: 2024-10-29T15:38:50.925Z (2 months ago)
- Language: Python
- Size: 35.3 MB
- Stars: 1,333
- Watchers: 39
- Forks: 376
- Open Issues: 45
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-legged-locomotion-learning
- awesome-isaac-gym
README
# Isaac Gym Environments for Legged Robots #
This repository provides the environment used to train ANYmal (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym.
It includes all components needed for sim-to-real transfer: actuator network, friction & mass randomization, noisy observations, and random pushes during training.

**Maintainer**: Nikita Rudin
**Affiliation**: Robotic Systems Lab, ETH Zurich
**Contact**: [email protected]

---
### :bell: Announcement (09.01.2024) ###
With the shift from Isaac Gym to Isaac Sim at NVIDIA, we have migrated all the environments from this work to [Isaac Lab](https://github.com/isaac-sim/IsaacLab). Following this migration, this repository will receive limited updates and support. We encourage all users to migrate to the new framework for their applications.
Information about this work's locomotion-related tasks in Isaac Lab is available [here](https://isaac-sim.github.io/IsaacLab/source/features/environments.html#locomotion).
---
### Useful Links ###
Project website: https://leggedrobotics.github.io/legged_gym/
Paper: https://arxiv.org/abs/2109.11978

### Installation ###
1. Create a new Python virtual env with Python 3.6, 3.7, or 3.8 (3.8 recommended)
2. Install PyTorch 1.10 with CUDA 11.3:
- `pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html`
3. Install Isaac Gym
- Download and install Isaac Gym Preview 3 (Preview 2 will not work!) from https://developer.nvidia.com/isaac-gym
- `cd isaacgym/python && pip install -e .`
   - Try running an example: `cd examples && python 1080_balls_of_solitude.py`
   - For troubleshooting, check the docs at `isaacgym/docs/index.html`
4. Install rsl_rl (PPO implementation)
- Clone https://github.com/leggedrobotics/rsl_rl
- `cd rsl_rl && git checkout v1.0.2 && pip install -e .`
5. Install legged_gym
- Clone this repository
   - `cd legged_gym && pip install -e .`
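As a quick sanity check after these steps (a minimal sketch, not part of the official instructions), the three packages should import cleanly in the new env; note that Isaac Gym must be imported before PyTorch:
```
# Minimal post-install sanity check (illustrative, not from the README).
import isaacgym  # must be imported before torch
import torch

import legged_gym

# The cu113 build installed above should report an available GPU.
print("CUDA available:", torch.cuda.is_available())
```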
### CODE STRUCTURE ###
1. Each environment is defined by an env file (`legged_robot.py`) and a config file (`legged_robot_config.py`). The config file contains two classes: one with all the environment parameters (`LeggedRobotCfg`) and one with the training parameters (`LeggedRobotCfgPPO`).
2. Both env and config classes use inheritance.
3. Each non-zero reward scale specified in `cfg` adds a reward function with the corresponding name to the list of terms that are summed to obtain the total reward.
4. Tasks must be registered using `task_registry.register(name, EnvClass, EnvConfig, TrainConfig)`. This is done in `envs/__init__.py`, but can also be done from outside of this repository (see the sketch below).
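As a compact sketch of points 3 and 4 (the task name is illustrative; the `_reward_` prefix reflects the naming convention used by the base environment):
```
# Point 4: registration, using the same call as envs/__init__.py. It can
# also be invoked from outside this repository before launching training.
from legged_gym.envs.base.legged_robot import LeggedRobot
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
from legged_gym.utils.task_registry import task_registry

# Point 3: every non-zero entry in cfg.rewards.scales (e.g. 'torques') makes
# the env add scale * self._reward_torques() to the summed total reward.
task_registry.register("my_task", LeggedRobot, LeggedRobotCfg(), LeggedRobotCfgPPO())
# Note: the base LeggedRobotCfg specifies no robot asset, so a real task
# registers derived cfgs instead (see "Adding a new environment" below).
```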
### Usage ###
1. Train:
```
python legged_gym/scripts/train.py --task=anymal_c_flat
```
    - To run on CPU, add the following arguments: `--sim_device=cpu` and `--rl_device=cpu` (running the sim on CPU with RL on GPU is also possible).
- To run headless (no rendering) add `--headless`.
- **Important**: To improve performance, once the training starts press `v` to stop the rendering. You can then enable it later to check the progress.
    - The trained policy is saved in `issacgym_anymal/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`, where `<experiment_name>` and `<run_name>` are defined in the train config.
    - The following command line arguments override the values set in the config files (a combined example follows this section):
     - `--task TASK`: Task name.
     - `--resume`: Resume training from a checkpoint.
     - `--experiment_name EXPERIMENT_NAME`: Name of the experiment to run or load.
     - `--run_name RUN_NAME`: Name of the run.
     - `--load_run LOAD_RUN`: Name of the run to load when `resume=True`. If -1, the last run is loaded.
     - `--checkpoint CHECKPOINT`: Saved model checkpoint number. If -1, the last checkpoint is loaded.
     - `--num_envs NUM_ENVS`: Number of environments to create.
     - `--seed SEED`: Random seed.
     - `--max_iterations MAX_ITERATIONS`: Maximum number of training iterations.
2. Play a trained policy:
```
python legged_gym/scripts/play.py --task=anymal_c_flat
```
- By default, the loaded policy is the last model of the last run of the experiment folder.
    - Other runs/model iterations can be selected by setting `load_run` and `checkpoint` in the train config.
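For example, several of the overrides listed above can be combined in a single command (the values shown are illustrative, not recommended settings):
```
python legged_gym/scripts/train.py --task=anymal_c_flat --headless --num_envs=4096 --max_iterations=1500 --experiment_name=flat_baseline --run_name=run1
python legged_gym/scripts/play.py --task=anymal_c_flat --load_run=-1 --checkpoint=-1
```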
### Adding a new environment ###
The base environment `legged_robot` implements a rough-terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.
1. Add a new folder to `envs/` with `<your_env>_config.py`, which inherits from an existing environment config.
2. If adding a new robot:
- Add the corresponding assets to `resources/`.
- In `cfg` set the asset path, define body names, default_joint_positions and PD gains. Specify the desired `train_cfg` and the name of the environment (python class).
    - In `train_cfg` set `experiment_name` and `run_name`.
3. (If needed) implement your environment in `<your_env>.py`: inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
4. Register your env in `isaacgym_anymal/envs/__init__.py`.
5. Modify/tune other parameters in your `cfg` and `cfg_train` as needed. To remove a reward, set its scale to zero. Do not modify the parameters of other envs! A sketch of these steps follows below.
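A hedged end-to-end sketch of these steps (every name here, `my_env`, `MyEnv`, `vertical_velocity`, is hypothetical; the `_reward_` prefix follows the naming convention the base environment uses to resolve reward scales):
```
import torch

from legged_gym.envs.base.legged_robot import LeggedRobot
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO
from legged_gym.utils.task_registry import task_registry

class MyEnvCfg(LeggedRobotCfg):               # step 1: inherit an existing cfg
    class rewards(LeggedRobotCfg.rewards):
        class scales(LeggedRobotCfg.rewards.scales):
            vertical_velocity = 1.0           # enables the custom term below
    # step 2 (new robot): also set the asset path, body names, default joint
    # positions and PD gains here.

class MyEnvCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        experiment_name = 'my_env'
        run_name = 'test'

class MyEnv(LeggedRobot):                     # step 3: overwrite/add functions
    def _reward_vertical_velocity(self):
        # Custom reward term; returns one value per environment. It is summed
        # into the total reward with the 'vertical_velocity' scale above.
        return torch.exp(-torch.square(self.base_lin_vel[:, 2]))

# step 4: register the task (normally done in envs/__init__.py)
task_registry.register("my_env", MyEnv, MyEnvCfg(), MyEnvCfgPPO())
```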
### Troubleshooting ###
1. If you get the error `ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory`, run `sudo apt install libpython3.8`. You may also need to set `export LD_LIBRARY_PATH=/path/to/libpython/directory` (or, for conda users, `export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib`), replacing `/path/to/` with the corresponding path.
### Known Issues ###
1. The contact forces reported by `net_contact_force_tensor` are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effector only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces with `sensor_options.enable_forward_dynamics_forces`. Example:
```
sensor_pose = gymapi.Transform()
for name in feet_names:
    sensor_options = gymapi.ForceSensorProperties()
    sensor_options.enable_forward_dynamics_forces = False # for example gravity
    sensor_options.enable_constraint_solver_forces = True # for example contacts
    sensor_options.use_world_frame = True # report forces in world frame (easier to get vertical components)
    index = self.gym.find_asset_rigid_body_index(robot_asset, name)
    self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
(...)
sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
(...)
self.gym.refresh_force_sensor_tensor(self.sim)
contact = self.sensor_forces[:, :, 2] > 1.
```
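In this snippet the view assumes one sensor per foot of a quadruped (hence `4`), with six force/torque components per sensor of which only the three force components are kept; the last line then marks a foot as in contact when its vertical force exceeds 1 N. The number of feet and the contact threshold are robot-specific choices, not values fixed by the API.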