Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

Collaborative_Perception

This repository is a paper digest of recent advances in collaborative / cooperative / multi-agent perception for V2I / V2V / V2X autonomous driving scenarios.
https://github.com/Little-Podi/Collaborative_Perception
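
The unifying idea behind the papers indexed below is that several agents (vehicles and roadside infrastructure) exchange what they perceive and fuse it into one view, so objects occluded for the ego vehicle can still be detected. As a purely illustrative sketch of the simplest variant, late fusion of detections, the snippet below merges boxes received from one cooperating agent into the ego frame; it is not taken from any paper in this list, and all function names, poses, and thresholds are assumptions.

```python
# Purely illustrative sketch of late-fusion cooperative perception
# (not code from any paper in this list): an ego vehicle merges 2D
# detection centers received from a cooperating agent into its own frame.
# All names, poses, and thresholds below are assumptions.
import numpy as np


def to_ego_frame(boxes_xy, rel_pose):
    """Transform box centers from the sender's frame into the ego frame.

    boxes_xy: (N, 2) array of box centers in the sender's coordinate frame.
    rel_pose: (x, y, yaw) of the sender expressed in the ego frame.
    """
    x, y, yaw = rel_pose
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return boxes_xy @ rot.T + np.array([x, y])


def merge_detections(ego_boxes, received_boxes, dist_thresh=2.0):
    """Late fusion by deduplication: keep all ego detections and add a
    received detection only if no ego detection lies within dist_thresh
    metres of it (a crude stand-in for matching/NMS)."""
    merged = list(ego_boxes)
    for box in received_boxes:
        if all(np.linalg.norm(box - ego) > dist_thresh for ego in ego_boxes):
            merged.append(box)
    return np.array(merged)


# Toy example: one object is seen by both agents, a second object is
# visible only to the cooperating agent (e.g. occluded for the ego vehicle).
ego_dets = np.array([[10.0, 2.0]])                  # ego frame
sender_dets = np.array([[5.0, 0.0], [25.0, -1.0]])  # sender frame
received = to_ego_frame(sender_dets, rel_pose=(5.0, 2.0, 0.0))
print(merge_detections(ego_dets, received))         # two fused objects
```

Intermediate-fusion methods replace this box-level merge with feature-level fusion inside the detector, which is where most of the papers below differ.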

  • [paper] Towards Vehicle-to-Everything Autonomous Driving: A Survey on Collaborative Perception [[paper](https://arxiv.org/abs/2308.16714)], Collaborative Perception in Autonomous Driving: Methods, Datasets and Challenges [[paper](https://arxiv.org/abs/2301.06262)], A Survey and Framework of Cooperative Perception: From Heterogeneous Singleton to Hierarchical Cooperation [[paper](https://arxiv.org/abs/2208.10590)]
  • [video] … Driving [[video](https://youtu.be/WUDxYn7QtJU)], When Vision Transformers Meet Cooperative Perception [[video](https://youtu.be/rLAU4eqoOIU)], Scene Understanding beyond the Visible [[video](https://youtu.be/oz0AnmJZCR4)], Robust Collaborative Perception against Communication Interruption [[video](https://youtu.be/3cIWpMrsyeE)], Collaborative and Adversarial 3D Perception for Autonomous Driving [[video](https://youtu.be/W-AONQMfGi0)], Vehicle-to-Vehicle Communication for Self-Driving [[video](https://youtu.be/oikdOpmIoc4)], Adversarial Robustness for Self-Driving [[video](https://youtu.be/8uBFXzyII5Y)], The Ultimate Form of L4 Perception Systems: Cooperative Driving [[video](https://youtu.be/NvixMEDHht4)], CoBEVFlow: Addressing Temporal Asynchrony in Vehicle-to-Vehicle/Infrastructure Cooperative Perception [[video](https://youtu.be/IBTgalAjye8)], Where2comm: A New Generation of Collaborative Perception That Reduces Communication Bandwidth by a Factor of 100,000 (see the illustrative sketch at the end of this list) [[video](https://youtu.be/i5coMk4hkuk)], From Task-Specific to Task-Agnostic Multi-Robot Collaborative Perception [[video](https://course.zhidx.com/c/MDlkZjcyZDgwZWI4ODBhOGQ4MzM=)], Cooperative Autonomous Driving: Simulation and Perception [[video](https://course.zhidx.com/c/MmQ1YWUyMzM1M2I3YzVlZjE1NzM=)], Beyond-Line-of-Sight Situational Awareness via Swarm Collaboration [[video](https://www.koushare.com/video/videodetail/33015)]
  • [code] OpenCDA [[code](https://github.com/ucla-mobility/OpenCDA)] [[doc](https://opencda-documentation.readthedocs.io/en/latest)]
  • [web - page.github.io)], Zixing Lei@SJTU [[web](https://chezacar.github.io)], Yifan Lu@SJTU [[web](https://yifanlu0227.github.io)], Siqi Fan@THU [[web](https://leofansq.github.io)], Hang Qiu@Waymo [[web](https://hangqiu.github.io)], Dian Chen@UT Austin [[web](https://www.cs.utexas.edu/~dchen)], Yen-Cheng Liu@GaTech [[web](https://ycliu93.github.io)], Tsun-Hsuan Wang@MIT [[web](https://zswang666.github.io)]
  • [video] …-Aware Communication for Multi-Agent Reinforcement Learning [[video](https://youtu.be/YBgW2oA_n3k)], A Survey of Multi-Agent Reinforcement Learning with Communication [[paper](https://arxiv.org/abs/2203.08975)]
  • [paper - THU/UniV2X)]
  • [paper - THU/V2X-Graph)]
  • [paper - traffic-dataset/coopdet3d)]
  • [paper - V2X)]
  • [paper - wang1996/DeepAccident)]
  • [paper - AutoDrive/BEVHeight)]
  • [paper - SJTU/CoCa3D)]
  • [paper - THU/DAIR-V2X-Seq)]
  • [paper&review - SJTU/CoBEVFlow)]
  • [paper&review - yu/FFNet-VIC3D)]
  • [paper - ViT)]
  • [paper - lab/UMC)]
  • [paper&review - Chen/CO3)]
  • [paper - Feature-Fusion-for-Cooperative-Perception)]
  • [paper - m-quantification)]
  • [paper - Austin-RPL/Coopernaut)]
  • [paper - THU/DAIR-V2X)]
  • [paper&review - SJTU/where2comm)]
  • [paper - SJTU/SyncNet)]
  • [paper - vit)]
  • [paper - RIPL/MultiAgentPerception)]
  • [paper - RIPL/MultiAgentPerception)]
  • [paper - THU/V2X-Graph)] [[project](https://thudair.baai.ac.cn/index)]
  • [paper - THU/DAIR-RCooper)] [[project](https://www.t3caic.com/qingzhen)]
  • [paper - traffic-dataset/tum-traffic-dataset-dev-kit)] [[project](https://tum-traffic-dataset.github.io/tumtraf-v2x)]
  • [paper&review - H)]
  • [paper - wang1996/DeepAccident)] [[project](https://deepaccident.github.io)]
  • [paper - SJTU/CoCa3D)] [[project](https://siheng-chen.github.io/dataset/CoPerception+)]
  • [paper - SJTU/CoCa3D)] [[project](https://siheng-chen.github.io/dataset/CoPerception+)]
  • [paper - mobility/V2V4Real)] [[project](https://mobility-lab.seas.ucla.edu/v2v4real)]
  • [paper - THU/DAIR-V2X-Seq)] [[project](https://thudair.baai.ac.cn/index)]
  • [paper - chen.github.io/dataset/dair-v2x-c-complemented)]
  • [paper - ADG/LiDARSimLib-and-Placement-Evaluation)] [~~project~~]
  • [paper - ASG)] [~~project~~]
  • [paper - dataset)]
  • [paper - THU/DAIR-V2X)] [[project](https://thudair.baai.ac.cn/index)]
  • [paper&review - SJTU/where2comm)] [[project](https://siheng-chen.github.io/dataset/coperception-uav)]
  • [paper - vit)] [[project](https://drive.google.com/drive/folders/1r5sPiBEvo8Xby-nMaWUTnJIPK6WhY1B6)]
  • [paper - lab.seas.ucla.edu/opv2v)]
  • [paper - dataset.net)]
  • [paper - Sim)] [[project](https://ai4ce.github.io/V2X-Sim)]
  • [paper - simulator/carla)] [[project](https://carla.org)]
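
To make the bandwidth claim in the talk list above concrete (the Where2comm entry), the core idea is to transmit only the regions of the bird's-eye-view feature map where the local detector is confident, instead of the full dense map. The sketch below illustrates that idea only; it is not the released Where2comm implementation, and the shapes, function names, threshold, and max-based fusion rule are all assumptions.

```python
# Illustrative sketch of confidence-masked feature sharing in the spirit of
# Where2comm (not the released code): an agent transmits only the BEV cells
# whose detection confidence exceeds a threshold, so bandwidth scales with
# the number of confident cells rather than with the full feature map.
# Shapes, names, and the fusion rule are assumptions for illustration.
import numpy as np


def select_cells_to_send(feature_map, confidence_map, threshold=0.5):
    """Return sparse (row, col, feature) triples for confident cells only.

    feature_map:    (H, W, C) bird's-eye-view features from the local detector.
    confidence_map: (H, W) per-cell objectness scores in [0, 1].
    """
    rows, cols = np.where(confidence_map > threshold)
    return [(int(r), int(c), feature_map[r, c]) for r, c in zip(rows, cols)]


def fuse_received(feature_map, packets):
    """Ego side: combine received features into the local map with an
    element-wise max (a simple placeholder for a learned fusion module)."""
    fused = feature_map.copy()
    for r, c, feat in packets:
        fused[r, c] = np.maximum(fused[r, c], feat)
    return fused


# Toy numbers: a 100x100 BEV grid with 64 channels; with a high threshold
# only a small fraction of cells is transmitted.
rng = np.random.default_rng(0)
features = rng.standard_normal((100, 100, 64)).astype(np.float32)
confidence = rng.random((100, 100))
packets = select_cells_to_send(features, confidence, threshold=0.98)
print(f"cells sent: {len(packets)} / {100 * 100}")
fused = fuse_received(features, packets)
```

With a dense (H, W, C) map, the transmitted payload grows with the number of confident cells rather than with H × W, which is where the reported bandwidth savings come from.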