# Robot Synesthesia Codebase

This repository contains the code implementation of the following paper:

**[Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing](https://yingyuan0414.github.io/visuotactile/)**

Ying Yuan, Haichuan Che, Yuzhe Qin, Binghao Huang, Zhao-Heng Yin, Kang-Won Lee, Yi Wu, Soo-Chul Lim, Xiaolong Wang

International Conference on Robotics and Automation (ICRA), 2024

https://github.com/YingYuan0414/robosyn/assets/74405101/77c87802-59e7-43ac-afc1-330df13fb930

## Preparation

We suggest using a conda environment with Python 3.8. Install the **Isaac Gym Preview 4 release** on both your laptop (which should have a GPU) and your server, following the instructions on NVIDIA's website (you will need to register an account). Other required packages include **pytorch3d**, **hydra-core**, **ray**, **tensorboard**, **wandb**, etc., which you may install via **pip**.

For detailed instructions, see [install.md](install.md).
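
A minimal environment setup might look like the sketch below. The environment name and the unpinned package versions are assumptions, not part of this README; defer to [install.md](install.md) for the exact steps:
```
# Create and activate the suggested Python 3.8 conda environment
conda create -n robosyn python=3.8 -y
conda activate robosyn

# Isaac Gym Preview 4 is downloaded separately from NVIDIA's website
# (requires a registered account); install its Python bindings:
cd /path/to/isaacgym/python && pip install -e .

# Remaining dependencies mentioned above
pip install hydra-core ray tensorboard wandb
# pytorch3d usually needs a build matching your torch/CUDA versions;
# consult its installation guide if a plain pip install fails
pip install pytorch3d
```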

## Launch Training
### Teacher Policy Training

(1) For training a state-based policy for double-ball rotation, run
```
scripts/teacher_baoding.sh 0
```
(2) For training a state-based policy for wheel-wrench rotation, run
```
scripts/teacher_cross.sh 0
```
(3) For training a state-based policy for three-axis rotation, run
```
scripts/teacher_axis.sh 0 task.env.axis=x experiment=x-axis train.params.config.user_prefix=x-axis
```
```
scripts/teacher_axis.sh 0 task.env.axis=y experiment=y-axis train.params.config.user_prefix=y-axis
```
```
scripts/teacher_axis.sh 0 task.env.axis=z experiment=z-axis train.params.config.user_prefix=z-axis
```
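
The leading `0` passed to each script appears to be a device/GPU index; this is an assumption, so check the scripts for its exact meaning. To train all three axis policies back to back, you can loop over the overrides shown above:
```
for axis in x y z; do
    scripts/teacher_axis.sh 0 task.env.axis=$axis experiment=$axis-axis train.params.config.user_prefix=$axis-axis
done
```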

We also provide code for training [PS (Rotating Without Seeing)](https://touchdexterity.github.io/) and Visual RL baselines as follows:

(1) For PS baselines, run
```
scripts/teacher_baoding.sh 0 task.env.observationType=partial_stack_baoding experiment=baoding-ps train.params.config.user_prefix=baoding-ps
```
```
scripts/teacher_cross.sh 0 task.env.observationType=partial_stack experiment=wheel-wrench-ps train.params.config.user_prefix=wheel-wrench-ps
```
```
scripts/teacher_axis.sh 0 task.env.observationType=partial_stack task.env.axis=[axis] experiment=[axis]-ps train.params.config.user_prefix=[axis]-ps
```

(2) For Visual RL baselines, run
```
scripts/teacher_baoding_visrl.sh 0
```
```
scripts/teacher_cross_visrl.sh 0
```
```
scripts/teacher_axis_visrl.sh 0 task.env.axis=[axis] experiment=[axis]-visualrl train.params.config.user_prefix=[axis]-visualrl
```

### Student Policy Training
We provide an implementation of behavior cloning. You may also implement your own distillation method using the task environments we provide.

#### Data Collection
To collect simulation rollouts from a trained teacher policy checkpoint, run
```
scripts/collect_baoding.sh 0 teacher_logdir=[directory/of/teacher/checkpoint] teacher_resume=[name/of/teacher/checkpoint]
```
```
scripts/collect_cross.sh 0 teacher_logdir=[directory/of/teacher/checkpoint] teacher_resume=[name/of/teacher/checkpoint]
```
```
scripts/collect_axis.sh 0 teacher_logdir=[directory/of/teacher/checkpoint] teacher_resume=[name/of/teacher/checkpoint] task.env.axis=[axis] distill.teacher_data_dir=demonstration-[axis] experiment=bc-[axis]-collect train.params.config.user_prefix=bc-[axis]-collect
```
Collected trajectory data will be stored in `distill.teacher_data_dir`, which defaults to the directory `demonstration-baoding`. You may also collect data in parallel by launching more than one job with different `distill.worker_id` values.
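
For example, a sketch of two parallel collection jobs for double-ball rotation, assuming `distill.worker_id` takes small integer values (run each job in its own shell or background them as here):
```
# Both workers write rollouts into the same distill.teacher_data_dir
scripts/collect_baoding.sh 0 teacher_logdir=[directory/of/teacher/checkpoint] teacher_resume=[name/of/teacher/checkpoint] distill.worker_id=0 &
scripts/collect_baoding.sh 0 teacher_logdir=[directory/of/teacher/checkpoint] teacher_resume=[name/of/teacher/checkpoint] distill.worker_id=1 &
wait
```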

#### Behavior Cloning

We then train the student policy on the collected data:
```
scripts/bc_baoding.sh 0 distill.teacher_data_dir=[directory/of/data]
```
```
scripts/bc_cross.sh 0 distill.teacher_data_dir=[directory/of/data]
```
```
scripts/bc_axis.sh 0 distill.teacher_data_dir=[directory/of/data] task.env.axis=[axis] distill.student_logdir=runs/student/bc-[axis] experiment=bc-[axis] train.params.config.user_prefix=bc-[axis]
```

To use different sensing capabilities, set `distill.ablation_mode` to one of the following (see the example after this list):

* `multi-modality-plus` for Touch+Cam+Aug+Syn
* `aug` for Touch+Cam+Aug
* `no-tactile` for Cam+Aug
* `no-pc` for Touch
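
For example, to train a touch-only student for double-ball rotation with the default data directory (a sketch combining the command above with one of the listed modes):
```
scripts/bc_baoding.sh 0 distill.teacher_data_dir=demonstration-baoding distill.ablation_mode=no-pc
```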

## Citing
Please cite this work as:
```
@inproceedings{
yuan2024robosyn,
author = {Ying Yuan and Haichuan Che and Yuzhe Qin and Binghao Huang and Zhao-Heng Yin and Kang-Won Lee and Yi Wu and Soo-Chul Lim and Xiaolong Wang},
title = {Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing},
booktitle = {ICRA},
year = {2024}
}
```

**Note:** If you use [Rotating without Seeing](https://touchdexterity.github.io/) in your work, please also cite the following work:
```
@article{
touch-dexterity,
title = {Rotating without Seeing: Towards In-hand Dexterity through Touch},
author = {Yin, Zhao-Heng and Huang, Binghao and Qin, Yuzhe and Chen, Qifeng and Wang, Xiaolong},
journal = {Robotics: Science and Systems},
year = {2023},
}
```