Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Style-based Point Generator with Adversarial Rendering for Point Cloud Completion (CVPR 2021)
https://github.com/microsoft/sparenet
- Host: GitHub
- URL: https://github.com/microsoft/sparenet
- Owner: microsoft
- License: MIT
- Created: 2021-03-01T13:01:29.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2023-04-18T21:04:59.000Z (over 1 year ago)
- Last Synced: 2024-12-04T17:49:06.876Z (about 1 month ago)
- Topics: deep-learning, differentiable-renderer, generative-adversarial-network, neural-network, point-cloud, point-cloud-completion
- Language: Python
- Homepage: https://arxiv.org/abs/2103.02535
- Size: 58.9 MB
- Stars: 145
- Watchers: 9
- Forks: 20
- Open Issues: 16
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
  - Code of conduct: CODE_OF_CONDUCT.md
  - Security: SECURITY.md
Awesome Lists containing this project
README
# Style-based Point Generator with Adversarial Rendering for Point Cloud Completion (CVPR 2021)
An efficient PyTorch library for Point Cloud Completion.
### [Project page](https://alphapav.github.io/SpareNet/) | [Paper](https://arxiv.org/abs/2103.02535) | [Video]()
[Chulin Xie*](https://github.com/AlphaPav), [Chuxin Wang*](https://chuxwa.github.io/), [Bo Zhang](https://bo-zhang.me/), [Hao Yang](https://www.microsoft.com/en-us/research/people/haya/), [Dong Chen](https://www.microsoft.com/en-us/research/people/doch/), and [Fang Wen](https://www.microsoft.com/en-us/research/people/fangwen/). (\*Equal contribution)
## Abstract
>We propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion. Firstly, we present the channel-attentive EdgeConv to fully exploit the local structures as well as the global shape in point features. Secondly, we observe that the concatenation manner used by vanilla foldings limits its potential of generating a complex and faithful shape. Enlightened by the success of StyleGAN, we regard the shape feature as style code that modulates the normalization layers during the folding, which considerably enhances its capability. Thirdly, we realize that existing point supervisions, e.g., Chamfer Distance or Earth Mover’s Distance, cannot faithfully reflect the perceptual quality of the reconstructed points. To address this, we propose to project the completed points to depth maps with a differentiable renderer and apply adversarial training to advocate the perceptual realism under different viewpoints. Comprehensive experiments on ShapeNet and KITTI prove the effectiveness of our method, which achieves state-of-the-art quantitative performance while offering superior visual quality.
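The style-modulation idea described in the abstract (treating the shape feature as a style code that predicts the scale and bias of normalization layers inside the folding decoder) can be sketched in a few lines of PyTorch. This is a minimal illustration of the mechanism, not the repository's actual module; all names here (`StyleModulatedFold`, `style_dim`, etc.) are hypothetical.

```python
import torch
import torch.nn as nn

class StyleModulatedFold(nn.Module):
    """Folding layer whose normalization is modulated by a shape 'style' code."""

    def __init__(self, style_dim: int, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm = nn.InstanceNorm1d(channels, affine=False)
        # The style code predicts a per-channel scale and bias, as in AdaIN/StyleGAN.
        self.to_scale = nn.Linear(style_dim, channels)
        self.to_bias = nn.Linear(style_dim, channels)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: [batch, channels, num_points]; style: [batch, style_dim]
        h = self.norm(self.conv(x))
        scale = self.to_scale(style).unsqueeze(-1)  # broadcast over points
        bias = self.to_bias(style).unsqueeze(-1)
        return torch.relu(h * (1.0 + scale) + bias)

# Example: modulate 128-channel point features with a 1024-d shape code.
layer = StyleModulatedFold(style_dim=1024, channels=128)
out = layer(torch.randn(4, 128, 2048), torch.randn(4, 1024))  # -> [4, 128, 2048]
```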
## Installation

1. Create a virtual environment via `conda`.
```shell
conda create -n sparenet python=3.7
conda activate sparenet
```

2. Install `torch` and `torchvision`.
```shell
conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
```

3. Install requirements.
```shell
pip install -r requirements.txt
```

4. Install the CUDA extensions.
```shell
sh setup_env.sh
```

## Dataset
* Download [the processed ShapeNet dataset](https://gateway.infinitescript.com/?fileName=ShapeNetCompletion) (16384 points) generated by [GRNet](https://github.com/hzxie/GRNet), and the [KITTI dataset](https://drive.google.com/drive/folders/1fSu0_huWhticAlzLh3Ejpg8zxzqO1z-F).
* Update the file paths of the datasets in `configs/base_config.py`:
```
__C.DATASETS.shapenet.partial_points_path = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"
__C.DATASETS.shapenet.complete_points_path = "/path/to/datasets/ShapeNetCompletion/%s/complete/%s/%s.pcd"
__C.DATASETS.kitti.partial_points_path = "/path/to/datasets/KITTI/cars/%s.pcd"
__C.DATASETS.kitti.bounding_box_file_path = "/path/to/datasets/KITTI/bboxes/%s.txt"

# Dataset Options: ShapeNet, ShapeNetCars, KITTI
__C.DATASET.train_dataset = "ShapeNet"
__C.DATASET.test_dataset = "ShapeNet"
```
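For reference, these `%`-style templates are expanded at load time with the dataset split and the ShapeNet taxonomy/model IDs. Below is a minimal sketch of the expansion, assuming the field order split, category ID, model ID, view index (matching the layout of the GRNet-processed dataset); the IDs are placeholders.

```python
# Illustration only: how the '%' placeholders in the config templates expand.
# Assumed field order: split, category taxonomy ID, model ID, view index.
partial_tpl = "/path/to/datasets/ShapeNetCompletion/%s/partial/%s/%s/%02d.pcd"

path = partial_tpl % ("train", "03001627", "<model_id>", 0)
print(path)
# /path/to/datasets/ShapeNetCompletion/train/partial/03001627/<model_id>/00.pcd
```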
## Get Started

### Inference Using Pretrained Model
The pretrained models:
- [SpareNet for ShapeNet](https://drive.google.com/file/d/15PiH-bRlSlK4AUUnVwREzuAlMVJ9TfQG)
- [PCN for ShapeNet](https://drive.google.com/drive/folders/1ruN16MlJm4OeRMd41C19HyWqYOIrNrNh)
- [GRNet for ShapeNet](https://gateway.infinitescript.com/?fileName=GRNet-ShapeNet.pth)
- [GRNet for KITTI](https://gateway.infinitescript.com/?fileName=GRNet-KITTI.pth)
- [MSN for ShapeNet](https://drive.google.com/drive/folders/14UZXKqXIZ0gL3hhrV2ySll_pH2eLGFL5) (for 8192 points)
- run
```shell
python test.py --gpu ${GPUS} \
--workdir ${WORK_DIR} \
--model ${network} \
--weights ${path to checkpoint} \
--test_mode ${mode}
```

- example
```shell
python test.py --gpu 0 --workdir /path/to/logfiles --model sparenet --weights /path/to/checkpoint --test_mode default
```

### Train
All files produced during training, such as log messages and checkpoints, will be saved to the work directory.
- run
```shell
python train.py --gpu ${GPUS} \
--workdir ${WORK_DIR} \
--model ${network} \
--weights ${path to checkpoint}
```
- example
```shell
python train.py --gpu 0,1,2,3 --workdir /path/to/logfiles --model sparenet --weights /path/to/checkpoint
```

## Differentiable Renderer
We provide a fully differentiable point renderer that enables end-to-end rendering from a 3D point cloud to 2D depth maps. See the paper for details.
### Usage of Renderer
The renderer takes a point cloud, a view index, and a list of radii as inputs, and produces depth maps as output.
- example
```python
# `projection_mode`: a str with value "perspective" or "orthorgonal"
# `eyepos_scale`: a float that defines the distance of eyes to (0, 0, 0)
# `image_size`: an int defining the output image size
renderer = ComputeDepthMaps(projection_mode, eyepos_scale, image_size)

# `data`: a tensor with shape [batch_size, num_points, 3]
# `view_id`: the index of selected view satisfying 0 <= view_id < 8
# `radius_list`: a list of floats, defining the kernel radius to render each point
depthmaps = renderer(data, view_id, radius_list)
```
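Because the renderer is differentiable, its depth maps can be fed directly into an image-space adversarial loss, which is how the paper advocates perceptual realism across viewpoints. Below is a minimal sketch of a generator-side objective, not the repository's training code: `renderer` is the `ComputeDepthMaps` instance from the example above, while `D` is a hypothetical depth-map discriminator.

```python
import torch.nn.functional as F

# Sketch only: wire the rendered depth maps into a GAN loss.
# `completed`: a tensor with shape [batch_size, num_points, 3].
def generator_adv_loss(renderer, D, completed, radius_list):
    loss = 0.0
    for view_id in range(8):  # the renderer exposes 8 views (0 <= view_id < 8)
        depth = renderer(completed, view_id, radius_list)
        # Non-saturating objective: push D to score the rendered completion
        # as real; gradients flow through the renderer into the generator.
        loss = loss + F.softplus(-D(depth)).mean()
    return loss / 8
```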
### Test FPD on ShapeNet Dataset

* Run your model and save its results on the test dataset.
* Update the file paths of the results in `test_fpd.py` and run it:
```
parser.add_argument('--log_dir', default='/path/to/save/logs')
parser.add_argument('--data_dir', default='/path/to/test/dataset/pcds')
parser.add_argument('--fake_dir', default='/path/to/methods/pcds',
help='/path/to/results/shapenet_fc/pcds/')
```
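For example, with the arguments above filled in (all paths are placeholders):

```shell
python test_fpd.py --log_dir /path/to/save/logs \
                   --data_dir /path/to/test/dataset/pcds \
                   --fake_dir /path/to/results/shapenet_fc/pcds/
```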
## License

The code and the pretrained models in this repository are released under the MIT license, as specified in the LICENSE file.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## BibTeX
If you find our work useful and use the codebase or models in your research, please cite our work as follows.
```bibtex
@InProceedings{Xie_2021_CVPR,
author = {Xie, Chulin and Wang, Chuxin and Zhang, Bo and Yang, Hao and Chen, Dong and Wen, Fang},
title = {Style-Based Point Generator With Adversarial Rendering for Point Cloud Completion},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {4619-4628}
}
```