{"id":13441656,"url":"https://github.com/ZwwWayne/mmMOT","last_synced_at":"2025-03-20T12:32:05.748Z","repository":{"id":134612389,"uuid":"201268899","full_name":"ZwwWayne/mmMOT","owner":"ZwwWayne","description":"[ICCV2019] Robust Multi-Modality Multi-Object Tracking","archived":false,"fork":false,"pushed_at":"2019-12-07T05:14:40.000Z","size":2656,"stargazers_count":252,"open_issues_count":18,"forks_count":25,"subscribers_count":24,"default_branch":"master","last_synced_at":"2024-10-26T21:35:17.161Z","etag":null,"topics":["iccv2019","mot","multi-modality"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ZwwWayne.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-08-08T13:52:25.000Z","updated_at":"2024-08-11T13:31:25.000Z","dependencies_parsed_at":null,"dependency_job_id":"363fa423-08fb-4017-8f53-94579aeeacf9","html_url":"https://github.com/ZwwWayne/mmMOT","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZwwWayne%2FmmMOT","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZwwWayne%2FmmMOT/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZwwWayne%2FmmMOT/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZwwWayne%2FmmMOT/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ZwwWayne","download_url":"https://codeload.github.com/ZwwWayne/mmMOT/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221760094,"owners_count":16876351,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["iccv2019","mot","multi-modality"],"created_at":"2024-07-31T03:01:36.579Z","updated_at":"2024-10-28T01:31:45.800Z","avatar_url":"https://github.com/ZwwWayne.png","language":"Python","readme":"# Robust Multi-Modality Multi-Object Tracking\n\nThis is the project page for our ICCV2019 paper: **Robust Multi-Modality Multi-Object Tracking**.\n\n**Authors**: [Wenwei Zhang](http://zhangwenwei.cn), [Hui Zhou](https://scholar.google.com/citations?user=i35tdbMAAAAJ\u0026hl=zh-CN), [Shuyang Sun](https://kevin-ssy.github.io/), [Zhe Wang](https://wang-zhe.me/), [Jianping Shi](http://shijianping.me/), [Chen Change Loy](http://personal.ie.cuhk.edu.hk/~ccloy/)\n\n[[ArXiv]](https://arxiv.org/abs/1909.03850)\u0026nbsp;  [[Project Page]](#)\u0026nbsp;  [[Poster]](http://zhangwenwei.cn/files/mmMOT_poster_final.pdf)\n\n## Introduction\n\nIn this work, we design a generic sensor-agnostic multi-modality MOT framework (mmMOT), where each modality (i.e., sensors) is capable of performing its role independently to preserve reliability, and further improving its accuracy through a novel multi-modality fusion module. Our mmMOT can be trained in an end-to-end manner, enables joint optimization for the base feature extractor of each modality and an adjacency estimator for cross modality. Our mmMOT also makes the first attempt to encode deep representation of point cloud in data association process in MOT. \n\nFor more details, please refer our [paper](https://arxiv.org/abs/1909.03850).\n\n## Install\n\nThis project is based on pytorch\u003e=1.0, you can install it following the [official guide](https://pytorch.org/get-started/locally/).\n\nWe recommand you to build a new conda environment to run the projects as follows:\n```bash\nconda create -n mmmot python=3.7 cython\nconda activate mmmot\nconda install pytorch torchvision -c pytorch\nconda install numba\n```\n\nThen install packages from pip:\n```bash\npip install -r requirements.txt\n```\n\nYou can also follow the guide to install [SECOND](https://github.com/traveller59/second.pytorch), we use the same environment as that for SECOND.\n\n\n## Usage\n\nWe provide several configs and scripts in the `experiments` directory. \n\nTo evaluate the pretrained models or the reimplemented models you can run command\n```bash\npython -u eval_seq.py --config ${work_path}/config.yaml \\\n--load-path=${work_path}/${model} \\\n--result-path=${work_path}/results \\\n--result_sha=eval\n```\nThe `--result_sha` option is used to distinguish different evaluation attempts.\nYou can also simply run command like\n```\nsh ./experiments/pp_pv_40e_mul_A/eval.sh ${partition}\n```\n\nTo train the model on your own, you can run command\n```\npython -u main.py --config ${work_path}/config.yaml \\\n--result-path=${work_path}/results \n```\nYou can also simply run command like\n```\nsh ./experiments/pp_pv_40e_mul_A/train.sh ${partition}\n```\n\n**Note:** Both the train and eval scripts use srun as default, you can just comment them if you do not use srun.\n\n\n## Pretrain Model\n\nWe provide four models in the [google drive](https://drive.google.com/open?id=1IJ6rWSJw-BExQP-N25RNmQzUeTYSmwj6). \nThe corresponding configs can be found in the `experiments` directory.\n\nFollowing the usage you can directly inference the model and get results as follows:\n\n\n|    Name    |  Method  | MOTA |\n| :-----------: | :-----: | :--: |\n|pp_pv_40e_mul_A|Fusion Module A| 77.57|\n|pp_pv_40e_mul_B|Fusion Module B| 77.62|\n|pp_pv_40e_mul_C|Fusion Module C| 78.18|\n|pp_pv_40e_dualadd_subabs_C|Fusion Module C++| 80.08|\n\nThe results of Fusion Module A,B and C are the same as those in the Table 1 of the paper.\nThe Fusion Module C++ indicates that it uses `absolute subtraction` and `softmax with addition` to improve the results, and has the same MOTA as that in the last row of Table 3 of the [paper](https://arxiv.org/abs/1909.03850).\n\n\n## Data\n\nCurrently it supports [PointPillar](https://github.com/nutonomy/second.pytorch)/[SECOND](https://github.com/traveller59/second.pytorch) detector, and also support [RRC-Net](https://github.com/xiaohaoChen/rrc_detection) detector.\n\nIn the [paper](https://arxiv.org/abs/1909.03850), we train a PointPillars model to obtain the train/val detection results for ablation study, using the [official codebase](https://github.com/nutonomy/second.pytorch). The detection data are provided in the [google drive](https://drive.google.com/open?id=1IJ6rWSJw-BExQP-N25RNmQzUeTYSmwj6). Once you download the two pkl files, put them in the `data` directory.\n\nWe also provide the data split used in our paper in the `data` directory. You need to download and unzip the data from the [KITTI Tracking Benchmark](http://www.cvlibs.net/datasets/kitti/eval_tracking.php) and put them in the `kitti_t_o` directory or any path you like.\nDo remember to change the path in the configs.\n\nThe RRC detection are obtained from the [link](https://drive.google.com/file/d/1ZR1qEf2qjQYA9zALLl-ZXuWhqG9lxzsM/view) provided by [MOTBeyondPixels](https://github.com/JunaidCS032/MOTBeyondPixels). We use RRC detection for the [KITTI Tracking Benchmark](http://www.cvlibs.net/datasets/kitti/eval_tracking.php).\n\n\n## Citation\n\nIf you use this codebase or model in your research, please cite:\n```\n@InProceedings{mmMOT_2019_ICCV,\n    author = {Zhang, Wenwei and Zhou, Hui and Sun, Shuyang, and Wang, Zhe and Shi, Jianping and Loy, Chen Change},\n    title = {Robust Multi-Modality Multi-Object Tracking},\n    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},\n    month = {October},\n    year = {2019}\n}\n```\n\n## Acknowledgement\n\nThis code benefits a lot from [SECOND](https://github.com/traveller59/second.pytorch) and use the detection results provided by [MOTBeyondPixels](https://github.com/JunaidCS032/MOTBeyondPixels). The GHM loss implementation is from [GHM_Detection](https://github.com/libuyu/GHM_Detection).\n\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FZwwWayne%2FmmMOT","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FZwwWayne%2FmmMOT","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FZwwWayne%2FmmMOT/lists"}