{"id":20791359,"url":"https://github.com/hyperplane-lab/dmotion-code","last_synced_at":"2025-10-08T19:26:51.634Z","repository":{"id":71880100,"uuid":"388813943","full_name":"hyperplane-lab/dmotion-code","owner":"hyperplane-lab","description":"DMotion: Robotic Visuomotor Control with Unsupervised Forward Model Learned from Videos , IROS 2021","archived":false,"fork":false,"pushed_at":"2021-07-23T14:28:01.000Z","size":3413,"stargazers_count":4,"open_issues_count":0,"forks_count":1,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-05-05T21:27:58.578Z","etag":null,"topics":["iros","robotics"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hyperplane-lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-07-23T13:39:23.000Z","updated_at":"2023-10-31T13:54:14.000Z","dependencies_parsed_at":null,"dependency_job_id":"2b07cb8b-3927-4a84-9efe-1ae37df20879","html_url":"https://github.com/hyperplane-lab/dmotion-code","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/hyperplane-lab/dmotion-code","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperplane-lab%2Fdmotion-code","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperplane-lab%2Fdmotion-code/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperplane-lab%2Fdmotion-code/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperplane-lab%2Fdmotion-code/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hyperplane-lab","download_url":"https://codeload.github.com/hyperplane-lab/dmotion-code/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hyperplane-lab%2Fdmotion-code/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279000704,"owners_count":26082805,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-08T02:00:06.501Z","response_time":56,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["iros","robotics"],"created_at":"2024-11-17T15:44:16.403Z","updated_at":"2025-10-08T19:26:51.617Z","avatar_url":"https://github.com/hyperplane-lab.png","language":"Python","readme":"## DMotion\n### Paper\n\nHaoqi Yuan, Ruihai Wu, Andrew Zhao, Haipeng Zhang, Zihan Ding, Hao Dong, [\"DMotion: Robotic Visuomotor Control with Unsupervised Forward Model Learned from 
Videos\"](https://arxiv.org/abs/2103.04301), IROS 2021\n\n[arXiv](https://arxiv.org/abs/2103.04301) \n\n[project page](https://hyperplane-lab.github.io/dmotion/)\n\n## Experiments in Grid World, Atari Pong\n\n### Dependencies:\n- Ubuntu \n- python 3.7\n- dependencies: PIL, numpy, tqdm, gym, pytorch (1.2.0), matplotlib\n\n\n### (a). Generate datasets:\nGrid World: \n\n`python datagen/gridworld.py`\n\nAtari Pong:  require gym[atari] to be installed. \n\n`python datagen/ataripong.py `\n\n\n### (b). Train the model from unlabelled videos:\n\nGrid World: \n\n`python train.py --gpu 0 --dataset_name shape --data_path datagen/gridworld`\n\nAtari Pong: \n\n`python train.py --gpu 0 --dataset_name pong --data_path datagen/atari-pong`\n\nSet `--gpu -1` if use cpu. \n\n### (c). Find action-transformation mapping and test:\nOne transition sample for each action can be manually selected. In folder `datagen/demonstration`, \nwe have one directory for each dataset to contain the demonstrations. Directory names are `shape`, `pong`, with respect to two datasets.\n\nWe provide you the demonstrations for all datasets. For example, for Grid World dataset, directory `datagen/demonstration/shape` contains a file `demo.txt`, in which each line consists of {img1, img2, action}, showing a transition of this action. Image files img1, img2 are placed in directory `datagen/demonstration/shape`. \n\n You can either use the demonstrations provided by us, or arbitrarily replace them with samples in the generated dataset manually. \n\n\nGrid World: \n\n`python test.py --gpu 0 --dataset_name shape --data_path datagen/gridworld`\n\nAtari Pong: \n\n`python test.py --gpu 0 --dataset_name pong --data_path datagen/atari-pong`\n\nSet `--gpu -1` if use cpu. You can select the model for test using e.g., `--test_model_path checkpoint/model_49.pth`. \nThe test program will sequentially run the test of feature map visualisation, visual forecasting conditioned on the agent's motion and quantitative test. If you want to disable them, add `--visualize_feature 0`, `--visual_forecasting 0` or `--quantitative 0`, respectively.\n\n\n## Experiments in Robot Pushing and MPC\n\n### Dependencies:\n\nPlease visit https://github.com/stepjam/PyRep, and follow the installation on their page to install Pyrep and CoppeliaSim.\n\n### Generate Dataset and Train:\n\n1. Use `python -m datagen.roboarm --increment 20` to generate trajectories, as the simulation gets slower, please only generate around 20-100 (--increment) trajectories at a time. When generating a second batch, please use `python -m datagen.roboarm --increment 20 --eps [number of eps generated before + increment]` for the sake of naming convention.\n2. [Optional] If generated multiple batches of trajectories, use `python -m rename --folders [number of folders you generated in step 1]` to move all files into a single folder.\n3. Train: `python -m train --dataset_name roboarm --data_path datagen/roboarm --size 64 --no_graph --contrastive_coeff 0 --save_path checkpoint`\n4. 
### Model Predictive Control

#### Generate data for MPC

Create a csv file logging the trajectory IDs and the time steps you wish to perform MPC on:

| episode | start_idx | end_idx |
| ------- | --------- | ------- |
| 200     | 0         | 29      |
| 201     | 8         | 37      |

Run `python -m rename_mpc --folders 1` to combine all json state files into one for the MPC dataset. Change the `--folders` argument according to the maximum ID of the manually selected trajectories. Run `python -m move_mpc` to move all trajectories to the right place.

#### Test for MPC

`python -m mpc_multi`

## Citation

If you find this code useful for your research, please cite our paper:

```
@article{yuan2021robotic,
  title={Robotic Visuomotor Control with Unsupervised Forward Model Learned from Videos},
  author={Yuan, Haoqi and Wu, Ruihai and Zhao, Andrew and Zhang, Haipeng and Ding, Zihan and Dong, Hao},
  journal={arXiv preprint arXiv:2103.04301},
  year={2021}
}
```