An Efficient, Flexible, and General deep learning framework that retains minimal. Users can use EFG to explore any research topic by following the provided project templates.
# What's New
* 2023.08.22 Code release of ICCV2023 paper: [TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses](https://github.com/poodarchu/EFG/blob/master/playground/tracking.3d/waymo/trajectoryformer/README.md).
* 2023.04.13 Support COCO Panoptic Segmentation with Mask2Former.
* 2023.03.30 Support Pytorch 2.0.
* 2023.03.21 Code release of CVPR2023 **Highlight** paper: [ConQueR: Query Contrast Voxel-DETR for 3D Object Detection](https://github.com/poodarchu/EFG/blob/master/playground/detection.3d/waymo/conquer/README.md).
* 2023.03.21 Code release of the EFG codebase, with support for 2D object detection (MS COCO dataset) and 3D object detection (Waymo and nuScenes datasets).

# 0. Benchmarking
# 1. Installation
## 1.1 Prerequisites
* gcc 5 (c++11 or newer)
* python >= 3.6
* cuda >= 10.1
* pytorch >= 1.6

```shell
# spconv
spconv_cu11{X} (set X according to your cuda version)

# waymo_open_dataset
## python 3.6
waymo-open-dataset-tf-2-1-0==1.2.0
## python 3.7, 3.8
waymo-open-dataset-tf-2-4-0==1.3.1
```
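Before installing, it can help to sanity-check the local toolchain. The following minimal sketch (the checks are assumptions derived from the prerequisite list above) verifies the Python, PyTorch, and CUDA versions:

```python
# minimal sketch: sanity-check the prerequisites listed above
import sys

import torch

assert sys.version_info >= (3, 6), "EFG requires python >= 3.6"

# parse "major.minor" from strings like "1.13.1" or "2.0.1+cu117"
major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 6), "EFG requires pytorch >= 1.6"

# a CUDA-enabled pytorch build is expected for the 3D detection pipelines
assert torch.version.cuda is not None, "CUDA build of pytorch not found"
print("python", sys.version.split()[0], "| torch", torch.__version__, "| cuda", torch.version.cuda)
```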
## 1.2 Build from source

```shell
git clone https://github.com/poodarchu/EFG.git
cd EFG
pip install -v -e .
# set logging path to save model checkpoints, training logs, etc.
echo "export EFG_CACHE_DIR=/path/to/your/logs/dir" >> ~/.bashrc
```
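A quick import is a simple way to confirm the editable install succeeded. Note that the top-level package name `efg` is an assumption inferred from the `efg_run` command, not confirmed by the docs:

```python
# quick post-install check; the package name `efg` is an assumption
import efg

print("EFG imported from", efg.__file__)
```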
# 2. Data

## 2.1 Data Preparation - Waymo
```shell
# download waymo dataset v1.2.0 (or v1.3.2, etc.)
gsutil -m cp -r \
"gs://waymo_open_dataset_v_1_2_0_individual_files/testing" \
"gs://waymo_open_dataset_v_1_2_0_individual_files/training" \
"gs://waymo_open_dataset_v_1_2_0_individual_files/validation" \
  .

# extract frames from tfrecord to pkl
CUDA_VISIBLE_DEVICES=-1 python cli/data_preparation/waymo/waymo_converter.py --record_path "/path/to/waymo/training/*.tfrecord" --root_path "/path/to/waymo/train/"
CUDA_VISIBLE_DEVICES=-1 python cli/data_preparation/waymo/waymo_converter.py --record_path "/path/to/waymo/validation/*.tfrecord" --root_path "/path/to/waymo/val/"

# create softlink to datasets
cd /path/to/EFG/datasets; ln -s /path/to/waymo/dataset/root waymo; cd ..
# create data summary and gt database from extracted frames
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split train --nsweeps 1
python cli/data_preparation/waymo/create_data.py --root-path datasets/waymo --split val --nsweeps 1
```
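Each converted frame is stored as a pickle file; here is a minimal sketch for spot-checking one extracted frame (the exact path and the pickled object's structure are assumptions):

```python
# minimal sketch: spot-check one frame extracted by waymo_converter.py
import glob
import pickle

# path is an assumption; match the --root_path used above
frame_paths = sorted(glob.glob("/path/to/waymo/train/*.pkl"))
assert frame_paths, "no extracted frames found"
with open(frame_paths[0], "rb") as f:
    frame = pickle.load(f)
print(type(frame))
```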
## 2.2 Data Preparation - nuScenes
```
# nuScenes
dataset/nuscenes
├── can_bus
├── lidarseg
├── maps
├── occupancy
│ ├── annotations.json
│ └── gts
├── panoptic
├── samples
├── sweeps
├── v1.0-mini
├── v1.0-test
└── v1.0-trainval
```
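Before running the preparation script below, a minimal sketch (directory names taken from the tree above; the root path is an assumption) can verify the expected layout:

```python
# minimal sketch: verify the nuScenes layout shown above
import os

root = "/path/to/nuscenes/dataset/root"  # assumption; adjust to your setup
expected = ["can_bus", "maps", "samples", "sweeps", "v1.0-trainval", "occupancy"]
missing = [d for d in expected if not os.path.isdir(os.path.join(root, d))]
print("missing:", missing or "none")
```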
```shell
# create softlink to datasets
cd /path/to/EFG/datasets; ln -s /path/to/nuscenes/dataset/root nuscenes; cd ..
# assuming nuScenes/samples images are used; put gts and annotations.json under nuScenes/occupancy
python cli/data_preparation/nuscenes/create_data.py --root-path datasets/nuscenes --version v1.0-trainval --nsweeps 11 --occ --seg
```

# 3. Get Started
## 3.1 Training & Evaluation

```shell
# cd playground/path/to/experiment/directory
efg_run --num-gpus x # default 1
efg_run --num-gpus x task [train | val | test]
efg_run --num-gpus x --resume
efg_run --num-gpus x dataloader.num_workers 0 # dynamically change options in config.yaml (see the sketch below)
```
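Conceptually, trailing `key value` pairs such as `dataloader.num_workers 0` overwrite entries of the experiment's config.yaml by dotted path. A minimal sketch of that mechanism (illustrative only; EFG's actual override logic may differ):

```python
# sketch of dotted-path config overrides, e.g. "dataloader.num_workers 0"
def apply_override(config: dict, dotted_key: str, value) -> None:
    keys = dotted_key.split(".")
    node = config
    for k in keys[:-1]:
        node = node.setdefault(k, {})  # walk/create intermediate dicts
    node[keys[-1]] = value

config = {"dataloader": {"num_workers": 4, "batch_size": 2}}
apply_override(config, "dataloader.num_workers", 0)
print(config)  # {'dataloader': {'num_workers': 0, 'batch_size': 2}}
```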
Models are evaluated automatically at the end of training. To run evaluation on its own:
```shell
efg_run --num-gpus x task val
```

# 4. Model ZOO
All models are trained and evaluated on 8 x NVIDIA A100 GPUs.
## Waymo Open Dataset - 3D Object Detection (val - mAPH/L2)
| Methods | Frames | Schedule | VEHICLE | PEDESTRIAN | CYCLIST |
| :-----------: | :----: | :------: | :-------: | :--------: | :-------: |
| CenterPoint | 1 | 36 | 66.9/66.4 | 68.2/62.9 | 69.0/67.9 |
| CenterPoint | 4 | 36 | 70.0/69.5 | 72.8/69.7 | 72.6/71.8 |
| Voxel-DETR | 1 | 6 | 67.6/67.1 | 69.5/63.0 | 69.0/67.8 |
| ConQueR | 1 | 6 | 68.7/68.2 | 70.9/64.7 | 71.4/70.1 |

## nuScenes - 3D Object Detection (val)
| Methods | Schedule | mAP | NDS | Logs |
| :-----------: | :------: | :--: | :--: | :--: |
| CenterPoint | 20 | 59.0 | 66.7 | |

# 5. Call for contributions
EFG is currently at a relatively preliminary stage, and there is still a lot of work to do. If you are interested in contributing, you can email me at [email protected].

# 6. Citation
```bibtex
@article{chen2023trajectoryformer,
title={TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses},
author={Chen, Xuesong and Shi, Shaoshuai and Zhang, Chao and Zhu, Benjin and Wang, Qiang and Cheung, Ka Chun and See, Simon and Li, Hongsheng},
journal={arXiv preprint arXiv:2306.05888},
year={2023}
}

@inproceedings{zhu2023conquer,
title={ConQueR: Query Contrast Voxel-DETR for 3D Object Detection},
author={Zhu, Benjin and Wang, Zhe and Shi, Shaoshuai and Xu, Hang and Hong, Lanqing and Li, Hongsheng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={9296--9305},
year={2023}
}

@misc{zhu2023efg,
title={EFG: An Efficient, Flexible, and General deep learning framework that retains minimal},
author={EFG Contributors},
howpublished = {\url{https://github.com/poodarchu/efg}},
year={2023}
}
```