Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/keithAND2020/awesome-Occupancy-research
Papers on occupancy prediction, including monocular and multi-view methods in autonomous driving scenarios.
List: awesome-Occupancy-research
- Host: GitHub
- URL: https://github.com/keithAND2020/awesome-Occupancy-research
- Owner: keithAND2020
- Created: 2024-01-04T11:20:58.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-04-24T07:27:27.000Z (8 months ago)
- Last Synced: 2024-05-23T02:10:11.248Z (7 months ago)
- Size: 65.4 KB
- Stars: 32
- Watchers: 1
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- ultimate-awesome - awesome-Occupancy-research - Papers on occupancy prediction, including monocular and multi-view methods in autonomous driving scenarios. (Other Lists / Monkey C Lists)
README
### Occupancy
## Origin
#### 1. First paper
Occupancy Networks: Learning 3D Reconstruction in Function Space
The first work in the occupancy line of research: in this paper, the authors propose Occupancy Networks, a learning-based 3D reconstruction method that represents a surface implicitly as the continuous decision boundary of a deep classifier. Currently cited【2275】
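As a minimal, hedged sketch of the core idea (not the paper's implementation): an occupancy field is just a function f: R³ → [0, 1] whose 0.5 level set is the reconstructed surface. Here an analytic sphere stands in for the learned, observation-conditioned network:

```python
import numpy as np

def occupancy(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Toy occupancy field: a soft indicator that a 3D point lies
    inside a sphere. In Occupancy Networks this function is an MLP
    conditioned on an observation (image, point cloud, ...)."""
    d = np.linalg.norm(np.asarray(points, dtype=float) - np.asarray(center), axis=-1)
    # Sigmoid step from ~1 (inside) to ~0 (outside) around the surface;
    # the surface itself is the 0.5 level set.
    return 1.0 / (1.0 + np.exp(20.0 * (d - radius)))

print(occupancy([0.0, 0.0, 0.0]))  # deep inside: close to 1
print(occupancy([3.0, 0.0, 0.0]))  # far outside: close to 0
```

Reconstruction then amounts to evaluating the function on a dense grid and extracting the 0.5 isosurface (e.g. with marching cubes).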
#### 2. Tesla AI Day
[1] [A Chat About Tesla's Occupancy Network - Zhihu (zhihu.com)](https://zhuanlan.zhihu.com/p/575953155)
[2] https://www.youtube.com/watch?v=ODSJsviD_SU&t=12s
[3] https://www.youtube.com/watch?v=KC8e0oTFUcw
## Development
#### Timeline
##### 【CVPR 2024】
> **1、** Collaborative Semantic Occupancy Prediction with Hybrid Feature Fusion in Connected Automated Vehicles[[paper]](https://openreview.net/forum?id=eBXM62SqKY)
>
> **2、** Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications[[paper]](https://arxiv.org/abs/2311.17663)[[code]](https://github.com/haomo-ai/Cam4DOcc)
>
> **3、** Accurate Training Data for Occupancy Map Prediction in Automated Driving using Evidence Theory
>
> **4、** OccupancyM3D: Learning Occupancy for Monocular 3D Object Detection[[paper]](https://arxiv.org/abs/2305.15694)[[code]](https://github.com/SPengLiang/OccupancyM3D)
>
> **5、** PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation[[paper]](https://arxiv.org/abs/2306.10013)[[code]](https://github.com/Robertwyq/PanoOcc)
>
> **6、** COTR: Compact Occupancy TRansformer for Vision-based 3D Occupancy Prediction[[paper]](https://arxiv.org/abs/2312.01919)
>
> **7、** SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction[[paper]](https://arxiv.org/abs/2311.12754)[[code]](https://github.com/huang-yh/SelfOcc)
>
> **8、** StreamingFlow: Streaming Occupancy Forecasting with Asynchronous Multi-modal Data Streams via Neural Ordinary Differential Equation
>
> **9、** SGC-Occ: Semantic-Geometry Consistent 3D Occupancy Prediction for Autonomous Driving
>
> **10、** SparseOcc: Rethinking Sparse Latent Representation for Vision-Based Semantic Occupancy Prediction
>
> **11、** Unsupervised Occupancy Learning from Sparse Point Cloud[[paper]](https://arxiv.org/abs/2404.02759)
>
> **12、** UnO: Unsupervised Occupancy Fields for Perception and Forecasting
>
> **13、** Diffusion-FOF: Single-view Clothed Human Reconstruction via Diffusion-based Fourier Occupancy Field
>
##### 【2024】
> **1、[2024-01-17][NIPS2023]** POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images[[paper]](https://openreview.net/forum?id=eBXM62SqKY)[[code]](https://github.com/vobecant/POP3D)
> **2、[2024-01-23]** InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction[[paper]](https://arxiv.org/pdf/2401.12422.pdf)
>
> **3、[2024-01-24]** S2TPVFormer: Spatio-Temporal Tri-Perspective View for temporally coherent 3D Semantic Occupancy Prediction[[paper]](https://arxiv.org/pdf/2401.13785.pdf)
>
> **4、[2024-02-20]** OccFlowNet: Towards Self-supervised Occupancy Estimation via Differentiable Rendering and Occupancy Flow[[paper]](https://arxiv.org/pdf/2402.12792.pdf)
>
> **5、[2024-03-03]** OccFusion: A Straightforward and Effective Multi-Sensor Fusion Framework for 3D Occupancy Prediction[[paper]](https://arxiv.org/pdf/2403.01644.pdf)[[code]](https://github.com/DanielMing123/OCCFusion)
>
> **6、[2024-03-05][ICRA2024]** FastOcc: Accelerating 3D Occupancy Prediction by Fusing the 2D Bird's-Eye View and Perspective View[[paper]](https://arxiv.org/pdf/2403.02710.pdf)
>
##### 【2023】
> **1+、[2023-12-28]** Fully Sparse 3D Occupancy Prediction[[paper]](https://arxiv.org/abs/2312.17118)[[code]](https://github.com/MCG-NJU/SparseOcc)
>
> **1、[2023-12-14]** OccNeRF: Self-Supervised Multi-Camera Occupancy Prediction with Neural Radiance Fields[[paper]](https://arxiv.org/abs/2312.09243)[[code]](https://github.com/LinShan-Bin/OccNeRF)
>
> **2、[2023-12-09]** OctreeOcc: Efficient and Multi-Granularity Occupancy Prediction Using Octree Queries[[paper]](https://arxiv.org/abs/2312.03774)
>
> **3、[2023-11-29]** SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction[[paper]](https://arxiv.org/abs/2311.12754)[[code]](https://github.com/huang-yh/SelfOcc)
>
> **4、[2023-11-19]** SOccDPT: Semi-Supervised 3D Semantic Occupancy from Dense Prediction Transformers trained under memory constraints[[paper]](https://arxiv.org/abs/2311.11371)
>
> **5、[2023-11-18]** FlashOcc: Fast and Memory-Efficient Occupancy Prediction via Channel-to-Height Plugin[[paper]](https://arxiv.org/abs/2311.12058)
>
> **6、[2023-11-16]** A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving[[paper]](https://arxiv.org/abs/2303.10076)[[code]](https://github.com/GANWANSHUI/SimpleOccupancy)
>
> **7、[2023-10-09]** Occupancy-MAE: Self-Supervised Pre-Training Large-Scale LiDAR Point Clouds With Masked Occupancy Autoencoders[[paper]](https://ieeexplore.ieee.org/abstract/document/10273603/?casa_token=RKAYiCgprDgAAAAA:eq6K85lc3pij8TkcyH8UXCVn2_iy-vNZ04ywQaDt6Nbk70Chbzx8SvZ5I1MPEgyc2K4BtEXW7Att)[[code]](https://github.com/chaytonmin/Occupancy-MAE)
>
> **8、[2023-survey]** Grid-Centric Traffic Scenario Perception for Autonomous Driving: A Comprehensive Review[[paper]](https://arxiv.org/abs/2303.01212)
>
> **9、[2023-10-09]** Occ-BEV: Multi-Camera Unified Pre-training via 3D Scene Reconstruction[[paper]](https://arxiv.org/abs/2305.18829)[[code]](https://github.com/chaytonmin/UniScene)
>
> **10、[2023-09-22]** OccupancyDETR: Making Semantic Scene Completion as Straightforward as Object Detection[[paper]](https://arxiv.org/abs/2309.08504)[[code]](https://github.com/jypjypjypjyp/OccupancyDETR)
>
> **11、[2023-09-18]** RenderOcc: Vision-Centric 3D Occupancy Prediction with 2D Rendering Supervision[[paper]](https://arxiv.org/abs/2309.09502)[[code]](https://github.com/pmj110119/RenderOcc)
>
> **12、[2023-08-13]** PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction[[paper]](https://arxiv.org/abs/2308.16896)[[code]](https://github.com/wzzheng/PointOcc)
>
> **13、[2023-08-27]** SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving[[paper]](http://openaccess.thecvf.com/content/ICCV2023/html/Wei_SurroundOcc_Multi-camera_3D_Occupancy_Prediction_for_Autonomous_Driving_ICCV_2023_paper.html)[[code]](https://github.com/weiyithu/SurroundOcc)
>
> **14、[2023-06-26][ICCV 2023]** OccNet: Scene as Occupancy[[paper]](http://openaccess.thecvf.com/content/ICCV2023/html/Tong_Scene_as_Occupancy_ICCV_2023_paper.html)[[code]](https://github.com/OpenDriveLab/OpenScene)
>
> **15、[2023-06-16]** PanoOcc: Unified Occupancy Representation for Camera-based 3D Panoptic Segmentation[[paper]](https://arxiv.org/abs/2306.10013)[[code]](https://github.com/Robertwyq/PanoOcc)
>
> **16、[2023-06-15]** UniOcc: Unifying Vision-Centric 3D Occupancy Prediction with Geometric and Semantic Rendering [[paper]](https://arxiv.org/abs/2306.09117)
>
> **17、[2023-05-25]** OccupancyM3D: Learning Occupancy for Monocular 3D Object Detection[[paper]](https://arxiv.org/abs/2305.15694)[[code]](https://github.com/SPengLiang/OccupancyM3D)
>
> **18、[2023-04-19][CVPR2023]** Behind the Scenes: Density Fields for Single View Reconstruction【cite:13】[[paper]](http://openaccess.thecvf.com/content/CVPR2023/html/Wimbauer_Behind_the_Scenes_Density_Fields_for_Single_View_Reconstruction_CVPR_2023_paper.html)[[code]](https://fwmb.github.io/bts/)
>
> **19、[2023-04-11]** OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction[[paper]](https://arxiv.org/abs/2304.05316)[[code]](https://github.com/zhangyp15/OccFormer)
>
> **20、[2023-03-25]** VoxFormer: Sparse Voxel Transformer for Camera-Based 3D Semantic Scene Completion[[paper]](http://openaccess.thecvf.com/content/CVPR2023/html/Li_VoxFormer_Sparse_Voxel_Transformer_for_Camera-Based_3D_Semantic_Scene_Completion_CVPR_2023_paper.html)[[code]](https://github.com/NVlabs/VoxFormer)
>
> **21、[2023-03-02]** TPVFormer: Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction[[paper]](http://openaccess.thecvf.com/content/CVPR2023/html/Huang_Tri-Perspective_View_for_Vision-Based_3D_Semantic_Occupancy_Prediction_CVPR_2023_paper.html)[[code]](https://github.com/wzzheng/TPVFormer)
>
> **22、[2023-02-27]** OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion[[paper]](https://arxiv.org/abs/2302.13540)[[code]](https://github.com/megvii-research/OccDepth)
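Several of the entries above (TPVFormer, S2TPVFormer, PointOcc) build on the tri-perspective-view idea: a 3D point is described by combining features sampled at its projections onto three orthogonal planes. A hedged numpy sketch with nearest-neighbor sampling (the papers use learned plane features and bilinear or attention-based sampling):

```python
import numpy as np

def tpv_feature(point, plane_xy, plane_xz, plane_yz, bounds=(-1.0, 1.0)):
    """Describe a 3D point by summing the features at its projections
    onto three orthogonal feature planes (nearest-neighbor sampling)."""
    lo, hi = bounds
    def sample(plane, a, b):
        h, w = plane.shape
        # Map the coordinate range [lo, hi] to grid indices, clipping
        # points that fall outside the scene bounds.
        i = int(np.clip((a - lo) / (hi - lo) * (h - 1), 0, h - 1))
        j = int(np.clip((b - lo) / (hi - lo) * (w - 1), 0, w - 1))
        return plane[i, j]
    x, y, z = point
    # Project onto the xy, xz and yz planes and sum the three samples.
    return sample(plane_xy, x, y) + sample(plane_xz, x, z) + sample(plane_yz, y, z)

# With all-ones 4x4 planes every point's descriptor is 1 + 1 + 1.
planes = [np.ones((4, 4)) for _ in range(3)]
print(tpv_feature((0.0, 0.0, 0.0), *planes))  # 3.0
```

The appeal of this representation is cost: three O(N²) planes stand in for an O(N³) voxel grid while still giving every 3D point a distinct descriptor.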
##### 【2022】
> **1、[CVPR 2022]** MonoScene: Monocular 3D Semantic Scene Completion[[paper]](http://openaccess.thecvf.com/content/CVPR2022/html/Cao_MonoScene_Monocular_3D_Semantic_Scene_Completion_CVPR_2022_paper.html)[[code]](https://github.com/cv-rits/MonoScene)
##### 【2021】
> **1、** Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion[[paper]](https://ojs.aaai.org/index.php/AAAI/article/view/16419)
#### Classification by input in autonomous driving (the numbers in 【】 are citation counts as of 2024-01-04)
> **Single image**: MonoScene【72】、OccupancyM3D【1】、VoxFormer【43】、A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving【5】、OccupancyDETR【1】、SOccDPT【0】
>
> **Depth image**: OccDepth【22】
>
> **Multi-view**: TPVFormer【59】、SurroundOcc【31】、OccFormer【16】、Occ-BEV【2】、PanoOcc【6】、UniOcc【4】、SelfOcc【1】、OccNeRF【0】
>
> **LiDAR**: PointOcc【2】

#### Pre-training
> Occupancy-MAE: Self-supervised Pre-training Large-scale LiDAR Point Clouds with Masked Occupancy Autoencoders【8】
>
> Occ-BEV【2】

#### Efficient Occupancy
> FlashOcc:Fast and Memory-Efficient Occupancy Prediction via Channel-to-Height Plugin【0】
>
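FlashOcc's channel-to-height plugin, listed above, keeps all computation in 2D BEV and recovers the 3D volume only at the end by reinterpreting the channel axis as a height axis. A minimal numpy sketch of that reshape, with made-up tensor sizes:

```python
import numpy as np

# BEV feature map from a purely 2D network: (batch, channels, H, W),
# where the channel axis packs per-height-bin class logits,
# i.e. channels = classes * height_bins.
B, classes, height_bins, H, W = 2, 4, 8, 16, 16
bev = np.zeros((B, classes * height_bins, H, W))

# Channel-to-height: a plain reshape yields a 3D occupancy volume
# (batch, classes, Z, H, W) without any 3D convolutions.
occ = bev.reshape(B, classes, height_bins, H, W)
print(occ.shape)  # (2, 4, 8, 16, 16)
```

The saving comes from everything before this reshape being ordinary 2D convolutions over (H, W), which is what makes the approach fast and memory-efficient.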
> SOccDPT: Semi-Supervised 3D Semantic Occupancy from Dense Prediction Transformers trained under memory constraints【0】
>
> OctreeOcc: Efficient and Multi-Granularity Occupancy Prediction Using Octree Queries

#### Occupancy with anything!
> RenderOcc: Vision-Centric 3D Occupancy Prediction with 2D Rendering Supervision【4】
>
> OccupancyDETR: Making Semantic Scene Completion as Straightforward as Object Detection【1】
>
> OccNeRF: Self-Supervised Multi-Camera Occupancy Prediction with Neural Radiance Fields【0】
>
> Behind the Scenes: Density Fields for Single View Reconstruction【13】

#### Occupancy competition
> Competition: The World's First 3D Occupancy Benchmark for Scene Perception in Autonomous Driving【https://github.com/CVPR2023-3D-Occupancy-Prediction/CVPR2023-3D-Occupancy-Prediction】

#### Occupancy datasets
> Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving 【32】
>
> OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception【15】
>
> SurroundOcc【31】
>
> SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving【5】
>
> Scene as Occupancy【15】

## Reference links
【2019-occupancy network】http://openaccess.thecvf.com/content_CVPR_2019/html/Mescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.html
【2022-MonoScene】http://openaccess.thecvf.com/content/CVPR2022/html/Cao_MonoScene_Monocular_3D_Semantic_Scene_Completion_CVPR_2022_paper.html
【2023-OccupancyM3D】https://arxiv.org/abs/2305.15694
【2023-VoxFormer】http://openaccess.thecvf.com/content/CVPR2023/html/Li_VoxFormer_Sparse_Voxel_Transformer_for_Camera-Based_3D_Semantic_Scene_Completion_CVPR_2023_paper.html
【2023-OccDepth】https://arxiv.org/abs/2302.13540
【2023-TPVFormer】http://openaccess.thecvf.com/content/CVPR2023/html/Huang_Tri-Perspective_View_for_Vision-Based_3D_Semantic_Occupancy_Prediction_CVPR_2023_paper.html
【2023-SurroundOcc】http://openaccess.thecvf.com/content/ICCV2023/html/Wei_SurroundOcc_Multi-camera_3D_Occupancy_Prediction_for_Autonomous_Driving_ICCV_2023_paper.html
【2023-OccFormer】https://arxiv.org/abs/2304.05316
【2023-OccBEV】https://arxiv.org/abs/2305.18829
【2023-PanoOcc】https://arxiv.org/abs/2306.10013
【2022-LOPR】https://arxiv.org/abs/2210.01249
【A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving】https://arxiv.org/abs/2303.10076
【2023-UniOcc】https://arxiv.org/abs/2306.09117
【2023-Occ3D】https://arxiv.org/abs/2304.14365
【2023-OpenOccupancy】https://arxiv.org/abs/2303.03991
【2023-SSCBench】https://arxiv.org/abs/2306.09001
【Occupancy-MAE】https://ieeexplore.ieee.org/abstract/document/10273603/?casa_token=RKAYiCgprDgAAAAA:eq6K85lc3pij8TkcyH8UXCVn2_iy-vNZ04ywQaDt6Nbk70Chbzx8SvZ5I1MPEgyc2K4BtEXW7Att
【2023-FlashOcc】https://arxiv.org/abs/2311.12058
【2023-OccupancyDETR】https://arxiv.org/abs/2309.08504
【2023-RenderOcc】https://arxiv.org/abs/2309.09502
【2023-PointOcc】https://arxiv.org/abs/2308.16896
【2023-OccNeRF】https://arxiv.org/abs/2312.09243
【2023-SelfOcc】https://arxiv.org/abs/2311.12754
【2023-Behind the Scenes】http://openaccess.thecvf.com/content/CVPR2023/html/Wimbauer_Behind_the_Scenes_Density_Fields_for_Single_View_Reconstruction_CVPR_2023_paper.html
【2023-SOccDPT】https://arxiv.org/abs/2311.11371
## Contributing
Jia Heng