{"id":13443204,"url":"https://github.com/Toytiny/RaFlow","last_synced_at":"2025-03-20T16:30:50.156Z","repository":{"id":39632492,"uuid":"462320792","full_name":"Toytiny/RaFlow","owner":"Toytiny","description":"[RA-L \u0026 IROS'22] Self-Supervised Scene Flow Estimation with 4-D Automotive Radar ","archived":false,"fork":false,"pushed_at":"2023-03-18T20:58:59.000Z","size":59833,"stargazers_count":63,"open_issues_count":0,"forks_count":12,"subscribers_count":4,"default_branch":"master","last_synced_at":"2024-10-28T06:57:44.487Z","etag":null,"topics":["automotive-radar","autonomous-driving","deep-learning","point-cloud","pytorch","scene-flow-estimation","self-supervised"],"latest_commit_sha":null,"homepage":"https://toytiny.github.io/publication/22-raflow-ral/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Toytiny.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-02-22T13:56:33.000Z","updated_at":"2024-10-14T03:17:54.000Z","dependencies_parsed_at":"2024-10-28T05:10:38.540Z","dependency_job_id":null,"html_url":"https://github.com/Toytiny/RaFlow","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Toytiny%2FRaFlow","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Toytiny%2FRaFlow/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Toytiny%2FRaFlow/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/Toytiny%2FRaFlow/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Toytiny","download_url":"https://codeload.github.com/Toytiny/RaFlow/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244649727,"owners_count":20487478,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automotive-radar","autonomous-driving","deep-learning","point-cloud","pytorch","scene-flow-estimation","self-supervised"],"created_at":"2024-07-31T03:01:57.496Z","updated_at":"2025-03-20T16:30:50.144Z","avatar_url":"https://github.com/Toytiny.png","language":"Python","readme":"\n# Self-Supervised Scene Flow Estimation with 4-D Automotive Radar  \n\n[![arxiv](https://img.shields.io/badge/arXiv-2203.01137-%23B31C1B?style=flat)](https://arxiv.org/abs/2203.01137)  [![ ](https://img.shields.io/youtube/views/5_iJCZytrxo?label=YouTube\u0026style=flat)](https://www.youtube.com/watch?v=5_iJCZytrxo\u0026feature=youtu.be)  [![GitHub](https://img.shields.io/website?label=Project%20Page\u0026up_message=RaFlow\u0026url=https://toytiny.github.io/publication/22-raflow-ral/index.html)](https://toytiny.github.io/publication/22-raflow-ral/index.html)\n\nThis repository is the official implementation of RaFlow (IEEE RA-L \u0026 IROS'22), a robust method for scene flow estimation on 4-D radar point clouds with self-supervised learning. 
[[Paper]](https://ieeexplore.ieee.org/document/9810356) [[Video]](https://youtu.be/5_iJCZytrxo)\n\n![](doc/pipeline.png)\n\n\n## News\n\n[2022-10] We run our method on the publicly available [View-of-Delft (VoD)](https://github.com/tudelft-iv/view-of-delft-dataset) dataset. A video demo can be found at [Video Demo](#video-demo). Please see [Running](#running) for how to experiment with the VoD dataset.\n\n[2023-03] Our latest work \"Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision\" has been accepted by CVPR 2023. Please see [CMFlow](https://github.com/Toytiny/CMFlow) for more details and find out how to run RaFlow on the VoD dataset. \n\n\n## Abstract\n\nScene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is key to long-term mobile autonomy. While estimating scene flow from LiDAR has progressed recently, it remains largely unknown how to estimate scene flow from a 4-D radar, an increasingly popular automotive sensor valued for its robustness against adverse weather and lighting conditions. Compared with LiDAR point clouds, radar data are drastically sparser, noisier and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work aims to address the above challenges and estimate scene flow from 4-D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are designed specifically to cope with intractable radar data. 
Real-world experimental results validate that our method robustly estimates radar scene flow in the wild and effectively supports the downstream task of motion segmentation.\n\n\n## Citation\n\nIf you find our work useful for your research, please consider citing:\n\n```\n@article{ding2022raflow,\n  author={Ding, Fangqiang and Pan, Zhijun and Deng, Yimin and Deng, Jianning and Lu, Chris Xiaoxuan},\n  journal={IEEE Robotics and Automation Letters}, \n  title={Self-Supervised Scene Flow Estimation With 4-D Automotive Radar}, \n  year={2022},\n  pages={1-8},\n  doi={10.1109/LRA.2022.3187248}\n}\n```\n\n## Video Demo\n\nA short video demo showing our qualitative results on the View-of-Delft dataset (click the figure to watch):\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://drive.google.com/file/d/1rTDdiY5hJ1FleN1K7YjnP15MpMebaShb/view?usp=sharing\"\u003e\u003cimg src=\"doc/demo_cover.png\" width=\"100%\" alt=\"Click the figure below to see the video\"\u003e\u003c/a\u003e\n\u003c/div\u003e\n\n\n## Visualization\n\n#### a. Scene Flow\n\nMore qualitative results can be found in [Results Visualization](/doc/supply_qual.md).\n\n\u003cimg src=\"doc/qual.png\" width=\"80%\"\u003e\n\n\n#### b. Motion Segmentation\n\n\n\u003cimg src=\"doc/motion_seg.png\" width=\"80%\"\u003e\n\n## Installation\n\n\u003e Note: the code in this repo has been tested on Ubuntu 16.04/18.04 with Python 3.7, CUDA 11.1, PyTorch 1.7. It may work for other setups, but has not been tested.\n\nPlease follow the steps below to set up your environment. Make sure that the GPU driver and CUDA are correctly installed before proceeding.\n\n#### a. Clone the repository\n\n```\ngit clone https://github.com/Toytiny/RaFlow\n```\n\n#### b. Set up a new environment with Anaconda\n\n```\nconda create -n YOUR_ENV_NAME python=3.7\nsource activate YOUR_ENV_NAME\n```\n\n#### c. 
Install common dependencies\n\n```\nconda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch\npip install -r requirements.txt\n```\n\n#### d. Install the [PointNet++](https://github.com/sshaoshuai/Pointnet2.PyTorch) library for basic point cloud operations\n\n```\ncd lib\npython setup.py install\ncd ..\n```\n\n## Running\n\n### a. Inhouse data \n\nThe main experiments are conducted on our inhouse dataset. Our trained model can be found at `./checkpoints/raflow/models`. We also provide a few training, validation and testing samples under `./demo_data/` for users to try.\n\nFor evaluation on the inhouse test data, please run\n\n```\npython main.py --eval --vis --dataset_path ./demo_data/ --exp_name raflow\n```\n\nResult visualizations in bird's eye view (BEV) will be saved under `./checkpoints/raflow/test_vis_2d/`. The experiment configuration can be further modified in `./configs.yaml`.\n\nFor training a new model, please run\n\n```\npython main.py --dataset_path ./demo_data/ --exp_name raflow_new\n```\n\nSince only limited inhouse data is provided in this repository, we recommend that users collect their own data or use recent public datasets for large-scale training and testing.\n\n\n### b. VoD data\n\nWe also run our method on the public View-of-Delft (VoD) dataset. To start, please first request access from their [official website](https://github.com/tudelft-iv/view-of-delft-dataset) and download the data and annotations. Before running experiments, please put your preprocessed scene flow samples under `./vod_data/` and split them into training, validation and testing sets. \n\nWe provide our trained model on the VoD dataset under `./checkpoints/raflow_vod/models`. 
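The repository does not ship a script for splitting the preprocessed VoD samples, so the helper below is only a minimal sketch: the `train`/`val`/`test` folder names, the 80/10/10 ratio, and the flat-file layout under `./vod_data/` are all assumptions, not part of the official pipeline.\n\n```python
# Hypothetical split helper -- folder names and ratios are assumptions,
# adjust them to match your own preprocessing.
import os
import shutil

def assign_split(index, n_total, ratios=(0.8, 0.1, 0.1)):
    """Deterministically map a sample index to 'train', 'val' or 'test'."""
    n_train = int(n_total * ratios[0])
    n_val = int(n_total * ratios[1])
    if index < n_train:
        return "train"
    if index < n_train + n_val:
        return "val"
    return "test"

def split_vod_samples(root="./vod_data"):
    """Move flat sample files in `root` into train/val/test subfolders."""
    samples = sorted(f for f in os.listdir(root)
                     if os.path.isfile(os.path.join(root, f)))
    for split in ("train", "val", "test"):
        os.makedirs(os.path.join(root, split), exist_ok=True)
    for i, name in enumerate(samples):
        split = assign_split(i, len(samples))
        shutil.move(os.path.join(root, name), os.path.join(root, split, name))
```\n\nA sorted, index-based assignment keeps the split reproducible across runs; a random shuffle with a fixed seed would work equally well.\n\n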
For evaluating this model on VoD, please run the following command:\n\n```\npython main.py --eval --vis --dataset_path ./vod_data/ --model raflow_vod --exp_name raflow_vod --dataset vodDataset\n```\n\nFor training your own model, please run:\n\n```\npython main.py --dataset_path ./vod_data/ --model raflow_vod --exp_name raflow_vod_new --dataset vodDataset\n```\n\n**We provide instructions on how to run RaFlow on the VoD dataset in the [GETTING_STARTED](https://github.com/Toytiny/CMFlow/blob/master/src/GETTING_STARTED.md) guide of our CVPR'23 work. Please follow the steps there to run RaFlow or our new method CMFlow.** \n\n## Acknowledgments\nThis repository is based on the following codebases.\n\n* [PointPWC](https://github.com/DylanWusee/PointPWC)\n* [FlowNet3D_PyTorch](https://github.com/hyangwinter/flownet3d_pytorch)\n* [PointNet++_PyTorch](https://github.com/sshaoshuai/Pointnet2.PyTorch)\n\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FToytiny%2FRaFlow","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FToytiny%2FRaFlow","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FToytiny%2FRaFlow/lists"}