# FPConv
Yiqun Lin, Zizheng Yan, Haibin Huang, Dong Du, Ligang Liu, Shuguang Cui, Xiaoguang Han, "FPConv: Learning Local Flattening for Point Convolution", CVPR 2020 [[paper]](https://arxiv.org/abs/2002.10701)
```
@inproceedings{lin2020fpconv,
title={Fpconv: Learning local flattening for point convolution},
author={Lin, Yiqun and Yan, Zizheng and Huang, Haibin and Du, Dong and Liu, Ligang and Cui, Shuguang and Han, Xiaoguang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4293--4302},
year={2020}
}
```

## Introduction
We introduce FPConv, a novel surface-style convolution operator designed for 3D point cloud analysis. Unlike previous methods, FPConv does not require transforming the input to an intermediate representation such as a 3D grid or graph; it works directly on the surface geometry of the point cloud. More specifically, for each point, FPConv performs a local flattening by automatically learning a weight map that softly projects surrounding points onto a 2D grid, so that regular 2D convolution can be applied for efficient feature learning. FPConv can be easily integrated into various network architectures for tasks such as 3D object classification and 3D scene segmentation, achieving performance comparable to existing volumetric-type convolutions. More importantly, our experiments show that FPConv is complementary to volumetric convolutions, and jointly training both can further boost overall performance to state-of-the-art results.
![fig3](figures/fig3.jpg)
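To make the idea concrete, here is a minimal, self-contained PyTorch sketch of a surface-style convolution in the spirit of FPConv. It is an illustration only, not the authors' implementation: the weight-net architecture, the grid size, and the softmax normalization are all assumptions.

```python
import torch
import torch.nn as nn

class FPConvSketch(nn.Module):
    """Illustrative sketch of local flattening: learn a soft weight map that
    projects the K neighbors of each point onto a G x G grid, then apply a
    regular 2D convolution on the flattened feature map."""

    def __init__(self, in_channels, out_channels, grid=6):
        super().__init__()
        self.grid = grid
        # Small MLP predicts each neighbor's soft assignment to grid cells
        # from its local coordinates (architecture is an assumption).
        self.weight_net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, grid * grid),
        )
        self.conv2d = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.pool = nn.AdaptiveMaxPool2d(1)  # grid features -> one point feature

    def forward(self, rel_xyz, feats):
        # rel_xyz: (B, N, K, 3) neighbor offsets from each of N center points
        # feats:   (B, N, K, C) features of those neighbors
        B, N, K, C = feats.shape
        # Soft projection weights over grid cells for every neighbor.
        w = torch.softmax(self.weight_net(rel_xyz), dim=-1)   # (B, N, K, G*G)
        # Scatter neighbor features onto the grid: sum_k w[k, g] * f[k, c].
        g = torch.einsum('bnkg,bnkc->bncg', w, feats)         # (B, N, C, G*G)
        g = g.reshape(B * N, C, self.grid, self.grid)         # 2D feature maps
        out = self.pool(self.conv2d(g))                       # (B*N, C_out, 1, 1)
        return out.reshape(B, N, -1)                          # (B, N, C_out)

# Shape check with random data.
m = FPConvSketch(32, 64)
rel_xyz = torch.randn(2, 128, 16, 3)
feats = torch.randn(2, 128, 16, 32)
print(m(rel_xyz, feats).shape)  # torch.Size([2, 128, 64])
```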
## Installation
This code has been tested with Python 3.6, PyTorch 1.2.0, CUDA 10.0, and cuDNN 7.4 on Ubuntu 18.04.
First, install [pointnet2](https://github.com/sshaoshuai/Pointnet2.PyTorch) by running the following commands:
```shell
cd fpconv/pointnet2
python setup.py install
cd ../
```

You may also need to install `plyfile` for data preprocessing (`pickle` is part of the Python standard library and needs no installation).
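A quick way to confirm the environment before preprocessing is a short import check. This is a sketch: the exact module path of the compiled ops is an assumption based on the upstream Pointnet2.PyTorch layout and may differ in this repo.

```python
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())

try:
    # Module path is an assumption; adjust to match the installed package.
    from pointnet2 import pointnet2_utils  # noqa: F401
    print('pointnet2 ops imported successfully')
except ImportError as err:
    print('pointnet2 ops not available:', err)
```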
## Usage
Edit the global configuration file `config.json` before training.
```json
{
"version": "0.0",
"scannet_raw": "/dataset/scannet_v2",
"scannet_pickle": "/dataset/scannet_pickles",
"scene_list": "/utils/scannet_datalist",
"s3dis_aligned_raw": "/dataset/Stanford3dDataset_v1.2_Aligned_Version",
"s3dis_data_root": "/dataset/s3dis_aligned"
}
```
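The preprocessing and training scripts are driven by these paths. As a sketch (the repo's actual loading code may differ), the file can be read with the standard `json` module:

```python
import json

with open('config.json') as f:
    cfg = json.load(f)

# Keys as defined in config.json above.
print(cfg['scannet_raw'], cfg['scannet_pickle'])
print(cfg['s3dis_aligned_raw'], cfg['s3dis_data_root'])
```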
### Semantic Segmentation on ScanNet

__0. Baseline__
Tested on the eval split of ScanNet (see `./utils/scannet_datalist/scannetv2_eval.txt`).
| Model | mIoU | mA | oA | download |
| ------------- | ---- | ---- | ---- | ------------------------------------------------------------ |
| fpcnn_scannet | 64.4 | 76.4 | 85.8 | [ckpt-17.8M](https://drive.google.com/file/d/1jR-m3bx2tGo9oV4ULdaYr-woSiel659T/view?usp=sharing) |
__1. Preprocessing__

Download the ScanNet v2 dataset to `./dataset/scannet_v2`. Only the `_vh_clean_2.ply` and `_vh_clean_2.labels.ply` files need to be downloaded for each scene. Specify the dataset path and output path in `config.json`, then run the following commands for data pre-processing.
```shell
cd utils
python collect_scannet_pickle.py
```

It will generate 3 pickle files (`scannet_<split>_rgb21c_pointid.pickle`) for the 3 splits (train, eval, test) respectively. We also provide a pre-processed ScanNet v2 dataset for downloading: [Google Drive](https://drive.google.com/drive/folders/1xz59bKaIZbf0BU3oKSTs3qyV3gRf7aDW?usp=sharing). The `./dataset` folder should be organized as follows.
```
FPConv
├── dataset
│ ├── scannet_v2
│ │ ├── scans
│ │ ├── scans_test
│ ├── scannet_pickles
│ │ ├── scannet_train_rgb21c_pointid.pickle
│ │ ├── scannet_eval_rgb21c_pointid.pickle
│ │ ├── scannet_test_rgb21c_pointid.pickle
```
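Once the pickles are in place, a quick sanity check is to open one with Python's `pickle` module. This is a sketch only; the internal layout of the stored object (arrays, field names) is not documented here, so we just inspect the top-level type:

```python
import pickle

with open('dataset/scannet_pickles/scannet_eval_rgb21c_pointid.pickle', 'rb') as f:
    data = pickle.load(f)

# The structure of `data` is undocumented here; inspect before use.
print(type(data))
```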
__2. Training__

Run the following command to start the training. Output (logs) will be redirected to `./logs/fp_scannet/nohup.log`.
```shell
bash train_scannet.sh
```

We trained our model with 2 Titan Xp GPUs and a total batch size of 12. If you don't have enough GPUs for training, reduce `batch_size` to 6 for a single GPU.
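For reference, multi-GPU training in PyTorch of this era typically wraps the network in `nn.DataParallel`; a generic sketch (the repo's training script may handle this differently):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the actual segmentation network
if torch.cuda.device_count() > 1:
    # Splits each input batch across the visible GPUs.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()
```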
__3. Evaluation__
Run the following command to evaluate the model on the evaluation split (you may need to modify `epoch` in `./test_scannet.sh`). Output (logs) will be redirected to `./test/fp_scannet_240.log`.
```shell
bash test_scannet.sh
```

__Note__: Final evaluation (by running `./test_scannet.sh`) is conducted on the full point cloud, while evaluation during training is conducted on randomly sampled points in each block of the input scene.
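For reference, the mIoU, mA (mean per-class accuracy), and oA (overall accuracy) reported in the tables can be computed from a confusion matrix. This is a hypothetical helper, not the repo's evaluation code:

```python
import numpy as np

def segmentation_metrics(conf):
    """conf: (C, C) confusion matrix with conf[i, j] = number of points of
    ground-truth class i predicted as class j."""
    conf = conf.astype(np.float64)
    tp = np.diag(conf)
    denom_iou = np.maximum(conf.sum(0) + conf.sum(1) - tp, 1)
    denom_acc = np.maximum(conf.sum(1), 1)
    miou = (tp / denom_iou).mean()        # mean intersection-over-union (mIoU)
    macc = (tp / denom_acc).mean()        # mean per-class accuracy (mA)
    oacc = tp.sum() / max(conf.sum(), 1)  # overall accuracy (oA)
    return miou, macc, oacc
```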
### Semantic Segmentation on S3DIS
__0. Baseline__
Trained on Areas 1-4 and 6, tested on Area 5.
| Model | mIoU | mA | oA | download |
| ----------- | ---- | ---- | ---- | ------------------------------------------------------------ |
| fpcnn_s3dis | 62.7 | 70.3 | 87.5 | [ckpt-70.0M](https://drive.google.com/file/d/1v5FHDYPfcji3elUQJ-P6n618EZ7_2rpd/view?usp=sharing) |
__1. Preprocessing__

First, download the S3DIS dataset from [link](http://buildingparser.stanford.edu/dataset.html); the aligned version 1.2 of the dataset is used in this work. Unzip it into `./dataset/Stanford3dDataset_v1.2_Aligned_Version`, and specify the dataset path and output path in `config.json`. Then run the following commands for pre-processing.
```shell
cd utils
python collect_indoor3d_data.py
cd ..
```

It will generate a `.npy` file for each room. We also provide a pre-processed S3DIS dataset for downloading: [Google Drive](https://drive.google.com/file/d/1Pdf8x-Ayz8n5YxkouU1R9nD71LEctAtq/view?usp=sharing). The dataset folder should be organized as follows.
```
FPConv
├── dataset
│ ├── Stanford3dDataset_v1.2_Aligned_Version
│ │ ├── Area_1
│ │ ├── ...
│ │ ├── Area_6
│ ├── s3dis_aligned
│ │ ├── *.npy
```
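Each generated room file can be loaded directly with NumPy. The (N, 7) layout sketched below (XYZ, RGB, label) follows the pointnet-style preprocessing this repo borrows, but it is an assumption; check `room.shape` on your own files. The room filename is hypothetical:

```python
import numpy as np

room = np.load('dataset/s3dis_aligned/Area_5_office_1.npy')  # hypothetical room file
print(room.shape)  # expected (N, 7) under the assumed layout

xyz = room[:, 0:3]    # point coordinates
rgb = room[:, 3:6]    # colors
labels = room[:, 6]   # per-point semantic labels
```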
__2. Training__

Run the following command to start the training. Output (logs) will be redirected to `./logs/fp_s3dis/nohup.log`.
```shell
bash train_s3dis.sh
```

We trained our model on S3DIS with 4 Titan Xp GPUs, a total batch size of 8, and 100 epochs.
__3. Evaluation__
Run the following command to evaluate the model on the evaluation set (you may need to modify `epoch` in `./test_s3dis.sh`). Output (logs) will be redirected to `./test/fp_s3dis_60.log`.
```shell
bash test_s3dis.sh
```

## Acknowledgement
- [sshaoshuai/Pointnet2.PyTorch](https://github.com/sshaoshuai/Pointnet2.PyTorch): PyTorch implementation of PointNet++.
- [charlesq34/pointnet](https://github.com/charlesq34/pointnet): Data pre-processing for S3DIS.
- [DylanWusee/pointconv](https://github.com/DylanWusee/pointconv): Data pre-processing for ScanNet.

## License
This repository is released under the MIT License (see the LICENSE file for details).