# RoboVat

[About](#about)  
[Installation](#installation)  
[Examples](#examples)  
[Citation](#citation)  

## About

RoboVat is a toolkit for rapid development of robotic task environments, both in simulation and in the real world. It provides unified APIs for robot control and perception to bridge the reality gap. Its name is derived from [*brain in a vat*](https://en.wikipedia.org/wiki/Brain_in_a_vat).

Currently, RoboVat supports the [Sawyer](https://www.rethinkrobotics.com/sawyer) robot via the [Intera SDK](https://github.com/RethinkRobotics/intera_sdk). The simulated environments run on [PyBullet](https://github.com/bulletphysics/bullet3/). The codebase is under active development, and more environments will be added in the future.

<p align="center"><img width="80%" src="docs/push_env.png" /></p>

## Installation
1. **Create a virtual environment (recommended)**

	Create a new virtual environment in the root directory or anywhere else:
	```bash
	virtualenv --system-site-packages -p python .venv
	```

	Activate the virtual environment every time before you use the package:
	```bash
	source .venv/bin/activate
	```

	And exit the virtual environment when you are done:
	```bash
	deactivate
	```

2. **Install the package**

	Install the package with pip:
	```bash
	pip install robovat
	```

	Alternatively, the package can be installed from source by running:
	```bash
	python setup.py install
	```

3. **Download assets**

	Download and unzip the assets and configs folders from [Box](https://app.box.com/s/decsiq52cg6w6898ukl60ylmi7gqv2qr) or from the FTP links below into the root directory:
	```bash
	wget ftp://cs.stanford.edu/cs/cvgl/robovat/assets.zip
	wget ftp://cs.stanford.edu/cs/cvgl/robovat/configs.zip
	unzip assets.zip
	unzip configs.zip
	```

	If the assets folder is not in the root directory, remember to specify the argument `--assets PATH_TO_ASSETS` when executing the example scripts.

## Examples

### Command Line Interface

A command line interface (CLI) is provided for debugging purposes. After installation and data download, we recommend running the CLI to test the simulation environment:
```bash
python tools/sawyer_cli.py --mode sim
```

Detailed usage of the CLI is explained in the source code of `tools/sawyer_cli.py`.
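A CLI of this kind usually boils down to a small key-to-handler dispatch loop. The sketch below illustrates that pattern only; all names are hypothetical, and the actual commands live in `tools/sawyer_cli.py`:

```python
# Minimal sketch of a key-to-handler dispatch loop, the pattern a CLI
# like sawyer_cli.py typically follows. All names are hypothetical.

def make_dispatch(robot):
    """Map single-character commands to robot actions."""
    return {
        'v': robot.visualize_camera,
        'c': robot.click_and_reach,
        'r': robot.reset,
        'g': robot.close_gripper,
        'o': robot.open_gripper,
    }

def run_cli(robot, read_key):
    """Read keys and invoke the matching handler until 'q' is pressed."""
    dispatch = make_dispatch(robot)
    while True:
        key = read_key()
        if key == 'q':
            break
        handler = dispatch.get(key)
        if handler is not None:
            handler()
```

The same loop works against either a simulated or a real robot, as long as both expose the same control methods.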
The simulated and real-world Sawyer robots can be tested using the following keyboard commands in the terminal:
* Visualize the camera images: `v`
* Mouse click and reach: `c`
* Reset the robot: `r`
* Close and open the gripper: `g` and `o`

### Planar Pushing

Execute a planar pushing task with a heuristic policy:
```bash
python tools/run_env.py --env PushEnv --policy HeuristicPushPolicy --debug 1
```

To execute semantic pushing tasks, add bindings to the configurations:
```bash
python tools/run_env.py --env PushEnv --policy HeuristicPushPolicy --env_config configs/envs/push_env.yaml --policy_config configs/policies/heuristic_push_policy.yaml --config_bindings "{'TASK_NAME':'crossing','LAYOUT_ID':0}" --debug 1
```

To execute the tasks with the pretrained [CAVIN](http://pair.stanford.edu/cavin/) planner, please see [this codebase](https://github.com/stanfordvl/cavin).

### Process Objects for Simulation

Many simulators load bodies in the [URDF](http://wiki.ros.org/urdf/XML) format. Given an [OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file) file, the corresponding URDF file can be generated by running:
```bash
python tools/convert_obj_to_urdf.py --input PATH_TO_OBJ --output OUTPUT_DIR
```

To simulate concave bodies, the OBJ file first needs to be processed by convex decomposition.
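For a convex mesh, the generated URDF is essentially a single link that references the OBJ file for both visual and collision geometry. Below is a minimal, hypothetical sketch of that wrapping step; the actual `tools/convert_obj_to_urdf.py` additionally handles attributes such as mass, inertia, and scaling:

```python
import os
import xml.etree.ElementTree as ET

def wrap_obj_in_urdf(obj_path):
    """Wrap an OBJ mesh in a minimal single-link URDF document.

    Illustrative sketch only: the real converter also sets inertial,
    mass, and scaling attributes.
    """
    name = os.path.splitext(os.path.basename(obj_path))[0]
    robot = ET.Element('robot', name=name)
    link = ET.SubElement(robot, 'link', name='base_link')
    # Reference the same mesh for both visual and collision geometry.
    for tag in ('visual', 'collision'):
        elem = ET.SubElement(link, tag)
        geometry = ET.SubElement(elem, 'geometry')
        ET.SubElement(geometry, 'mesh', filename=obj_path)
    return ET.tostring(robot, encoding='unicode')
```

A file written this way can then be loaded by the simulator, e.g. via PyBullet's `loadURDF`.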
The URDF file of a concave body can be generated with [V-HACD](https://github.com/kmammou/v-hacd/) convex decomposition by running:
```bash
python tools/convert_obj_to_urdf.py --input PATH_TO_OBJ --output OUTPUT_DIR --decompose 1
```

## Citation

If you find this code useful for your research, please cite:
```
@article{fang2019cavin,
    title={Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation},
    author={Kuan Fang and Yuke Zhu and Animesh Garg and Silvio Savarese and Li Fei-Fei},
    journal={Conference on Robot Learning (CoRL)},
    year={2019}
}
```