Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Self-Supervised Learning for Domain Adaptation on Point-Clouds
- Host: GitHub
- URL: https://github.com/IdanAchituve/DefRec_and_PCM
- Owner: IdanAchituve
- Created: 2020-04-30T20:33:35.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2022-12-27T15:44:54.000Z (almost 2 years ago)
- Last Synced: 2024-06-29T05:32:59.705Z (3 months ago)
- Topics: domain-adaptation, mixup-inference, point-clouds, self-supervised-learning
- Language: Python
- Homepage:
- Size: 886 KB
- Stars: 87
- Watchers: 4
- Forks: 15
- Open Issues: 13
Metadata Files:
- Readme: README.md
README
# Self-Supervised Learning for Domain Adaptation on Point-Clouds
### Introduction
Self-supervised learning (SSL) makes it possible to learn useful representations from unlabeled data and has been applied effectively to domain adaptation (DA) on images. Whether and how it can be leveraged for domain adaptation in 3D perception has remained unknown. Here we describe the first study of SSL for DA on point clouds. We introduce a new family of pretext tasks, Deformation Reconstruction, motivated by the deformations encountered in sim-to-real transfer. The key idea is to deform regions of the input shape and train a neural network to reconstruct them. We design three types of shape deformation: (1) Volume-based: deformation based on proximity in the input space; (2) Feature-based: deforming regions of the shape that are semantically similar; and (3) Sampling-based: deformation based on three simple sampling schemes. As a separate contribution, we also develop a new method based on the Mixup training procedure for point clouds. Evaluations on six domain adaptations across synthetic and real furniture data demonstrate large improvements over previous work. [[Paper]](https://arxiv.org/pdf/2003.12641.pdf)
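The two core ideas can be illustrated with a minimal NumPy sketch. This is not the repository's actual implementation: the function names, the Gaussian-noise deformation, and the proportional sampling scheme for mixup are assumptions chosen to convey the concepts, not the paper's exact procedures.

```python
# Illustrative sketch (NOT the repo's implementation) of:
# (1) a volume-based deformation: points inside a chosen region are replaced
#     with noise, and a network would be trained to reconstruct the originals;
# (2) a Mixup-style combination of two point clouds.
import numpy as np

def volume_deform(points, center, radius, rng):
    """Replace points within `radius` of `center` with Gaussian noise.

    points: (N, 3) array. Returns (deformed, mask), where `mask` marks the
    deformed points -- these originals are the reconstruction target.
    """
    deformed = points.copy()
    mask = np.linalg.norm(points - center, axis=1) < radius
    # The selected region collapses to small noise around its center.
    deformed[mask] = center + 0.05 * rng.standard_normal((mask.sum(), 3))
    return deformed, mask

def mixup_point_clouds(pc_a, pc_b, lam, rng):
    """Mix two clouds by sampling a lam / (1 - lam) split of their points."""
    n = pc_a.shape[0]
    n_a = int(round(lam * n))
    idx_a = rng.choice(pc_a.shape[0], n_a, replace=False)
    idx_b = rng.choice(pc_b.shape[0], n - n_a, replace=False)
    return np.concatenate([pc_a[idx_a], pc_b[idx_b]], axis=0)

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))
deformed, mask = volume_deform(cloud, center=cloud[0], radius=0.5, rng=rng)
mixed = mixup_point_clouds(cloud, rng.standard_normal((1024, 3)), lam=0.7, rng=rng)
```

In this sketch the self-supervised objective would be a reconstruction loss between `cloud[mask]` and a network's prediction for the deformed region.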
### Instructions
Clone the repo and install it:
```bash
git clone https://github.com/IdanAchituve/DefRec_and_PCM.git
cd DefRec_and_PCM
pip install -e .
```
Download data:
```bash
cd ./xxx/data
python download.py
```
where `xxx` is the dataset (either PointDA or PointSegDA); for example, `cd ./PointDA/data`.

### Citation
Please cite this paper if you use this code in your work:
```
@inproceedings{achituve2021self,
title={Self-Supervised Learning for Domain Adaptation on Point Clouds},
author={Achituve, Idan and Maron, Haggai and Chechik, Gal},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={123--133},
year={2021}
}
```
### PointSegDA dataset
### Shape Reconstruction
### Acknowledgement
Some of the code in this repository was taken (and modified as needed) from the following sources:
[[PointNet]](https://github.com/charlesq34/pointnet), [[PointNet++]](https://github.com/charlesq34/pointnet2), [[DGCNN]](https://github.com/WangYueFt/dgcnn), [[PointDAN]](https://github.com/canqin001/PointDAN), [[Reconstructing_space]](http://papers.nips.cc/paper/9455-self-supervised-deep-learning-on-point-clouds-by-reconstructing-space), [[Mixup]](https://github.com/facebookresearch/mixup-cifar10)