Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/pqhieu/jsis3d
[CVPR'19] JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds
- Host: GitHub
- URL: https://github.com/pqhieu/jsis3d
- Owner: pqhieu
- License: MIT
- Created: 2019-04-06T13:58:48.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2020-07-08T07:49:31.000Z (over 4 years ago)
- Last Synced: 2024-08-01T03:45:52.907Z (6 months ago)
- Topics: cvpr, deep-learning, instance-segmentation, point-cloud, pytorch
- Language: Python
- Homepage: https://pqhieu.com/research/cvpr19/
- Size: 1.49 MB
- Stars: 175
- Watchers: 7
- Forks: 36
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# JSIS3D
This is the official PyTorch implementation of the following publication:
> **JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds with**
> **Multi-Task Pointwise Networks and Multi-Value Conditional Random Fields**
> Quang-Hieu Pham, Duc Thanh Nguyen, Binh-Son Hua, Gemma Roig, Sai-Kit
> Yeung
> *Conference on Computer Vision and Pattern Recognition (CVPR),
> 2019* (**Oral**)
> [Paper](https://arxiv.org/abs/1904.00699) |
> [Homepage](https://pqhieu.github.io/research/cvpr19/)

### Citation
If you find our work useful for your research, please consider citing:

```bibtex
@inproceedings{pham-jsis3d-cvpr19,
  title     = {{JSIS3D}: Joint semantic-instance segmentation of 3d point clouds with multi-task pointwise networks and multi-value conditional random fields},
  author    = {Pham, Quang-Hieu and Nguyen, Duc Thanh and Hua, Binh-Son and Roig, Gemma and Yeung, Sai-Kit},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2019}
}
```

## Usage
### Prerequisites
This code is tested on Manjaro Linux with CUDA 10.0 and PyTorch 1.0.

- Python 3.5+
- PyTorch 0.4.0+

### Installation
To use MV-CRF (optional), you first need to compile the code:

```shell
cd external/densecrf
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=Release ..
make
cd ../../..  # You should be at the root folder here
make
```

### Dataset
We have preprocessed the S3DIS dataset ([2.5GB](https://drive.google.com/open?id=1s1cFfb8cInM-SNHQoTGxN9BIyNpNQK6x))
in HDF5 format. After downloading the files, put them into the corresponding
`data/s3dis/h5` folder.

### Training & Evaluation
To train a model on the S3DIS dataset:

```shell
python train.py --config configs/s3dis.json --logdir logs/s3dis
```
Log files and network parameters will be saved to the `logs/s3dis` folder.
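The model trained above follows the paper's multi-task pointwise design: a shared per-point feature extractor feeding two output heads, one producing semantic class logits and one producing per-point instance embeddings. The following is only a shape-level NumPy sketch of that idea; the layer sizes and variable names are illustrative assumptions, not the repository's actual architecture.

```python
import numpy as np

# Sketch of a multi-task pointwise network: a shared per-point MLP followed
# by two heads (semantic logits + instance embedding). All layer sizes here
# are illustrative assumptions, not the paper's exact configuration.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def pointwise_linear(x, w, b):
    # Applies the same linear layer to every point: x is (num_points, in_dim).
    return x @ w + b

num_points, in_dim, hidden, num_classes, embed_dim = 1024, 9, 64, 13, 32

# Shared backbone weights and the two head weights (randomly initialized).
w1, b1 = rng.normal(size=(in_dim, hidden)), np.zeros(hidden)
w_sem, b_sem = rng.normal(size=(hidden, num_classes)), np.zeros(num_classes)
w_emb, b_emb = rng.normal(size=(hidden, embed_dim)), np.zeros(embed_dim)

points = rng.normal(size=(num_points, in_dim))        # e.g. XYZ + color + normals
shared = relu(pointwise_linear(points, w1, b1))       # shared per-point features

semantic_logits = pointwise_linear(shared, w_sem, b_sem)      # (num_points, num_classes)
instance_embedding = pointwise_linear(shared, w_emb, b_emb)   # (num_points, embed_dim)

print(semantic_logits.shape, instance_embedding.shape)
```

At inference time, semantic labels come from an argmax over the logits, while instance labels are obtained by clustering the embeddings; MV-CRF then refines the joint prediction.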
After training, we can use the model to predict semantic-instance segmentation
labels as follows:

```shell
python pred.py --logdir logs/s3dis --mvcrf
```
To evaluate the results, run the following command:

```shell
python eval.py --logdir logs/s3dis
```
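For intuition on what the semantic part of the evaluation measures, here is a minimal mean-IoU computation in NumPy. This is an illustrative sketch, not the repository's code: `eval.py` also reports instance-level metrics, and the function name below is assumed.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union over classes present in the ground truth.

    Illustrative sketch only; assumes integer class labels per point.
    """
    ious = []
    for c in range(num_classes):
        gt = (y_true == c)
        pr = (y_pred == c)
        if gt.sum() == 0:  # skip classes absent from the ground truth
            continue
        inter = np.logical_and(gt, pr).sum()
        union = np.logical_or(gt, pr).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: 4 points, 2 classes.
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
print(mean_iou(y_true, y_pred, num_classes=2))  # (1/2 + 2/3) / 2 ≈ 0.583
```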
For more details, you can use the `--help` option of every script.
> **Note**: The results on S3DIS in our paper are tested on Area 6 instead of Area 5.
> To reproduce the results, please change the split in `train.txt` and `test.txt` accordingly.
> Here I chose to keep the test set on Area 5 to make it easier to compare with other methods.

### Prepare your own dataset
Check out the `scripts` folder to see how we prepare the dataset for training.

## License
Our code is released under the MIT license (see LICENSE for more details).

**Contact**: Quang-Hieu Pham ([email protected])