# SceneCollisionNet
This repo contains the code for "[Object Rearrangement Using Learned Implicit Collision Functions](https://arxiv.org/abs/2011.10726)", an ICRA 2021 paper. For more information, please visit the [project website](https://research.nvidia.com/publication/2021-03_Object-Rearrangement-Using).

## License
This repo is released under the [NVIDIA source code license](LICENSE.pdf). For business inquiries, please contact researchinquiries@nvidia.com. For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.com.

## Install and Setup
Clone and install the repo (we recommend a virtual environment, especially if training or benchmarking, to avoid dependency conflicts):
```shell
git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .
```
These commands install the minimum dependencies needed for generating a mesh dataset and then training/benchmarking using Docker. If you instead wish to train or benchmark without using Docker, please first install an appropriate version of [PyTorch](https://pytorch.org/get-started/locally/) and the corresponding version of [PyTorch Scatter](https://github.com/rusty1s/pytorch_scatter) for your system. Then, execute these commands:
```shell
git clone --recursive https://github.com/mjd3/SceneCollisionNet.git
cd SceneCollisionNet
pip install -e .[train]
```
If benchmarking, replace `train` in the last command with `bench`.

To roll out the object rearrangement MPPI policy in a simulated tabletop environment, first download [Isaac Gym](https://developer.nvidia.com/isaac-gym) and place it in the `extern` folder within this repo. Next, follow the previous installation instructions for training, but replace the `train` option with `policy`.

To download the pretrained weights for benchmarking or policy rollout, run `bash scripts/download_weights.sh`.

## Generating a Mesh Dataset
To save time during training/benchmarking, meshes are preprocessed and mesh stable poses are calculated offline. SceneCollisionNet was trained using the [ACRONYM dataset](https://sites.google.com/nvidia.com/graspdataset). To use this dataset for training or benchmarking, download the ShapeNetSem meshes [here](https://shapenet.org/) (note: you must first register for an account) and the ACRONYM grasps [here](https://sites.google.com/nvidia.com/graspdataset).
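For intuition, each precomputed stable pose is just a 4x4 rigid transform that places a mesh in a resting orientation on a support plane. The sketch below is illustrative only (the function name is hypothetical, not this repo's API) and shows how such a transform is applied to mesh vertices:

```python
import numpy as np

def apply_stable_pose(vertices: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Transform (N, 3) mesh vertices by a (4, 4) homogeneous stable pose."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4) homogeneous coords
    return (homo @ pose.T)[:, :3]

# Example: rotate a unit cube 90 degrees about x so one face rests on z = 0.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
pose = np.eye(4)
pose[:3, :3] = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])  # Rx(90 deg)
placed = apply_stable_pose(cube, pose)
```

In the preprocessed dataset, such transforms let scene generation drop objects directly into physically plausible resting configurations without running a simulator at training time.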
Next, build Manifold (an external library included as a submodule):
```shell
./scripts/install_manifold.sh
```

Then, use the following script to generate a preprocessed version of the ACRONYM dataset:
```shell
python tools/generate_acronym_dataset.py /path/to/shapenetsem/meshes /path/to/acronym datasets/shapenet
```

If you have your own set of meshes, run:
```shell
python tools/generate_mesh_dataset.py /path/to/meshes datasets/your_dataset_name
```
Note that this dataset will not include grasp data, which is not needed for training or benchmarking SceneCollisionNet, but is used for rolling out the MPPI policy.

## Training/Benchmarking with Docker
First, install Docker and `nvidia-docker2` following the instructions [here](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian). Pull the SceneCollisionNet Docker image from DockerHub (tag `scenecollisionnet`) or build it locally using the provided Dockerfile (`docker build -t scenecollisionnet .`).
Then, use the appropriate configuration `.yaml` file in `cfg` to set training or benchmarking parameters (note that cfg file paths are relative to the Docker container, not the local machine) and run one of the commands below, replacing paths with your local paths as needed (`-v` requires absolute paths).

### Train a SceneCollisionNet
Edit `cfg/train_scenecollisionnet.yaml`, then run:
```shell
docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_scenecollisionnet_docker.sh
```

### Train a RobotCollisionNet
Edit `cfg/train_robotcollisionnet.yaml`, then run:
```shell
docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/train_robotcollisionnet_docker.sh
```

### Benchmark a SceneCollisionNet
Edit `cfg/benchmark_scenecollisionnet.yaml`, then run:
```shell
docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/models:/models:ro -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_scenecollisionnet_docker.sh
```

### Benchmark a RobotCollisionNet
Edit `cfg/benchmark_robotcollisionnet.yaml`, then run:
```shell
docker run --gpus all --rm -it -v /path/to/models:/models:rw -v /path/to/cfg:/cfg:ro -v /path/to/benchmark_results:/benchmark:rw scenecollisionnet /SceneCollisionNet/scripts/benchmark_robotcollisionnet_docker.sh
```

### Loss Plots
To get loss plots while training, run:
```shell
docker exec -d <container_name> python3 tools/loss_plots.py /models/<model_name>/log.csv
```

### Benchmark FCL or SDF Baselines
Edit `cfg/benchmark_baseline.yaml`, then run:
```shell
docker run --gpus all --rm -it -v /path/to/dataset:/dataset:ro -v /path/to/benchmark_results:/benchmark:rw -v /path/to/cfg:/cfg:ro scenecollisionnet /SceneCollisionNet/scripts/benchmark_baseline_docker.sh
```

## Training/Benchmarking without Docker
First, install the system dependencies. The list below assumes an Ubuntu 18.04 install with NVIDIA drivers >= 450.80.02 and CUDA 10.2; adjust the dependencies accordingly for different driver/CUDA versions. Note that the NVIDIA drivers come packaged with EGL, which is used during training and benchmarking for headless rendering on the GPU.

### System Dependencies
See the Dockerfile for a full list. For training/benchmarking, you will need:
```
python3-dev
python3-pip
ninja-build
libcudnn8=8.1.1.33-1+cuda10.2
libcudnn8-dev=8.1.1.33-1+cuda10.2
libsm6
libxext6
libxrender-dev
freeglut3-dev
liboctomap-dev
libfcl-dev
gifsicle
libfreetype6-dev
libpng-dev
```

### Python Dependencies
Follow the instructions above to install the necessary dependencies for your use case (either the `train`, `bench`, or `policy` option).

### Train a SceneCollisionNet
Edit `cfg/train_scenecollisionnet.yaml`, then run:
```shell
PYOPENGL_PLATFORM=egl python tools/train_scenecollisionnet.py
```

### Train a RobotCollisionNet
Edit `cfg/train_robotcollisionnet.yaml`, then run:
```shell
python tools/train_robotcollisionnet.py
```

### Benchmark a SceneCollisionNet
Edit `cfg/benchmark_scenecollisionnet.yaml`, then run:
```shell
PYOPENGL_PLATFORM=egl python tools/benchmark_scenecollisionnet.py
```

### Benchmark a RobotCollisionNet
Edit `cfg/benchmark_robotcollisionnet.yaml`, then run:
```shell
python tools/benchmark_robotcollisionnet.py
```

### Benchmark FCL or SDF Baselines
Edit `cfg/benchmark_baseline.yaml`, then run:
```shell
PYOPENGL_PLATFORM=egl python tools/benchmark_baseline.py
```

## Policy Rollout
To view a rearrangement MPPI policy rollout in a simulated Isaac Gym tabletop environment, run the following command (note that this requires a local machine with an available GPU and display):
```shell
python tools/rollout_policy.py --self-coll-nn weights/self_coll_nn --scene-coll-nn weights/scene_coll_nn --control-frequency 1
```
The many other options for this command can be listed with the `--help` flag and set with the corresponding arguments. If you get `RuntimeError: CUDA out of memory`, try reducing the horizon (`--mppi-horizon`, default 40), the number of sampled trajectories (`--mppi-num-rollouts`, default 200), or the number of collision steps (`--mppi-collision-steps`, default 10). Note that this may affect policy performance.

## Citation
If you use this code in your own research, please consider citing:
```
@inproceedings{danielczuk2021object,
  title={Object Rearrangement Using Learned Implicit Collision Functions},
  author={Danielczuk, Michael and Mousavian, Arsalan and Eppner, Clemens and Fox, Dieter},
  booktitle={Proc. IEEE Int. Conf. Robotics and Automation (ICRA)},
  year={2021}
}
```
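For background on how the `--mppi-horizon` and `--mppi-num-rollouts` options trade memory for policy quality: MPPI samples many noisy control sequences over a horizon, scores each rollout with a cost function, and blends them with exponential (softmin) weights. The sketch below is a generic, minimal MPPI update on a toy 1-D problem, not this repo's implementation; all names are hypothetical:

```python
import numpy as np

def mppi_step(u_nominal, cost_fn, num_rollouts=200, noise_std=0.5, temperature=1.0, rng=None):
    """One generic MPPI update: perturb the nominal controls, weight by cost.

    u_nominal: (horizon,) nominal control sequence.
    cost_fn:   maps a (num_rollouts, horizon) batch of controls to (num_rollouts,) costs.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, noise_std, size=(num_rollouts, len(u_nominal)))
    candidates = u_nominal + noise               # (K, H) perturbed rollouts
    costs = cost_fn(candidates)                  # (K,) one scalar cost per rollout
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()                     # softmin over rollout costs
    return u_nominal + weights @ noise           # cost-weighted noise update

# Toy problem: drive every control toward the target value 1.0. The horizon and
# rollout count play the same role as --mppi-horizon / --mppi-num-rollouts:
# memory and compute scale with both, which is why reducing them helps with OOM.
horizon, num_rollouts = 40, 200
cost = lambda U: ((U - 1.0) ** 2).sum(axis=1)
u = np.zeros(horizon)
rng = np.random.default_rng(0)
for _ in range(50):
    u = mppi_step(u, cost, num_rollouts, rng=rng)
```

In the real policy, the rollouts are robot trajectories and the cost includes collision scores queried from SceneCollisionNet at each of the `--mppi-collision-steps` points along the trajectory.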