Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/orendv/learning_to_sample
A learned sampling approach for point clouds (CVPR 2019)
cvpr2019 deep-learning geometry-processing neural-network point-cloud sampling tensorflow
- Host: GitHub
- URL: https://github.com/orendv/learning_to_sample
- Owner: orendv
- License: other
- Created: 2018-12-04T16:35:43.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2024-05-05T12:13:03.000Z (9 months ago)
- Last Synced: 2024-08-01T03:45:26.636Z (6 months ago)
- Topics: cvpr2019, deep-learning, geometry-processing, neural-network, point-cloud, sampling, tensorflow
- Language: Python
- Homepage: https://arxiv.org/abs/1812.01659
- Size: 4.36 MB
- Stars: 173
- Watchers: 6
- Forks: 20
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Learning to Sample
Created by [Oren Dovrat*](https://www.linkedin.com/in/dovrat/), [Itai Lang*](https://itailang.github.io/), and [Shai Avidan](http://www.eng.tau.ac.il/~avidan/) from Tel-Aviv University.
*Equal contribution

![teaser](./doc/teaser.png)
## Introduction
We propose a learned sampling approach for point clouds. Please see our [arXiv tech report](https://arxiv.org/abs/1812.01659) (or the [official CVPR 2019 version](https://openaccess.thecvf.com/content_CVPR_2019/html/Dovrat_Learning_to_Sample_CVPR_2019_paper.html)).

Processing large point clouds is a challenging task. Therefore, the data is often sampled down to a size that can be processed more easily. The question is: how should the data be sampled? A popular sampling technique is Farthest Point Sampling (FPS). However, FPS is agnostic to the downstream application (classification, retrieval, etc.). The underlying assumption seems to be that minimizing the farthest point distance, as done by FPS, is a good proxy for other objective functions.
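For reference, FPS itself is a simple greedy procedure: repeatedly pick the point farthest from everything chosen so far. The sketch below (a hypothetical `farthest_point_sampling` helper in NumPy, not code from this repository) illustrates the idea:

```python
# A minimal NumPy sketch of Farthest Point Sampling (FPS), for illustration
# only; the repository's own pipeline is implemented in TensorFlow.
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k points, each maximizing its distance to the set chosen so far.

    points: (N, 3) array of xyz coordinates; assumes k <= N.
    Returns the indices of the k sampled points.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = np.empty(k, dtype=np.int64)
    selected[0] = rng.integers(n)             # arbitrary starting point
    # dist[i] = distance from point i to its nearest already-selected point
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for i in range(1, k):
        selected[i] = np.argmax(dist)         # farthest from the current set
        new_dist = np.linalg.norm(points - points[selected[i]], axis=1)
        dist = np.minimum(dist, new_dist)     # update nearest-selected distances
    return selected
```

Note that nothing in this loop depends on the downstream task, which is exactly the limitation the paper addresses.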
We show that it is better to learn how to sample. To do that, we propose a generative deep network to simplify 3D point clouds. The network, termed S-NET, takes a point cloud and generates a smaller point cloud that is optimized for a particular task. The simplified point cloud is not guaranteed to be a subset of the original point cloud, so we match it to a subset of the original points in a post-processing step. We contrast our approach with FPS by experimenting on two standard datasets and show significantly better results for a variety of applications.

![poster](./doc/poster.png)
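As one illustration of the matching step, the hypothetical `match_to_subset` helper below snaps each generated point to its nearest neighbor in the original cloud; this is a minimal sketch of the idea, and the paper's actual matching procedure may differ in detail:

```python
# Hedged sketch: convert S-NET's generated points into a true subset of the
# input cloud via nearest-neighbor matching (assumed simplification).
import numpy as np

def match_to_subset(generated, original):
    """Replace each generated point with its nearest original point.

    generated: (k, 3) network output; original: (N, 3) input cloud.
    Returns indices into `original`, deduplicated, so the result is a
    true subset of the input points.
    """
    # Pairwise squared distances between generated and original points.
    d2 = ((generated[:, None, :] - original[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)       # nearest original point per generated point
    return np.unique(idx)         # enforce subset semantics (may shrink k)
```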
## Citation
If you find our work useful in your research, please consider citing:

    @InProceedings{dovrat2019learning_to_sample,
      author    = {Dovrat, Oren and Lang, Itai and Avidan, Shai},
      title     = {{Learning to Sample}},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      pages     = {2760--2769},
      year      = {2019}
    }

## Installation and usage
This project contains two sub-directories, each of which is a stand-alone project with its own instructions.
Please see `classification/README.md` and `reconstruction/README.md`.

## License
This project is licensed under the terms of the MIT license (see `LICENSE` for details).

## Selected projects that use "Learning to Sample"
* SampleNet: Differentiable Point Cloud Sampling by Lang *et al*. (CVPR 2020 Oral). This work extends "Learning to Sample" and proposes a novel differentiable relaxation for point cloud sampling.
* Multi-Stage Point Completion Network with Critical Set Supervision by Zhang *et al*. (submitted to CAGD; Special Issue of GMP 2020). This work evaluates our learned sampling as a supervision signal for a point cloud completion network.
* MOPS-Net: A Matrix Optimization-driven Network for Task-Oriented 3D Point Cloud Downsampling by Qian *et al*. (arXiv preprint). This work suggests an alternative network architecture for learned point cloud sampling. To train their network, the authors use our proposed losses for S-NET and ProgressiveNet.