https://github.com/pvnieo/vader
Pytorch code for "Generalizable Local Feature Pre-training for Deformable Shape Analysis" - CVPR 2023
- Host: GitHub
- URL: https://github.com/pvnieo/vader
- Owner: pvnieo
- License: gpl-2.0
- Created: 2023-03-01T06:36:47.000Z (over 2 years ago)
- Default Branch: master
- Last Pushed: 2024-04-28T17:21:03.000Z (about 1 year ago)
- Last Synced: 2025-04-13T04:51:46.016Z (about 2 months ago)
- Topics: pytorch
- Language: Python
- Homepage: https://arxiv.org/abs/2303.15104
- Size: 65.4 MB
- Stars: 9
- Watchers: 6
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# :space_invader: VADER :space_invader:
PyTorch code for "Generalizable Local Feature Pre-training for Deformable Shape Analysis" - CVPR 2023 ([arXiv](https://arxiv.org/abs/2303.15104))
> You underestimate the power of the local side!
---
## :construction_worker: Installation
- Install Dependencies: This implementation requires Python 3.7 or newer. Install dependencies using pip:
```bash
pip install -r requirements.txt
```
- Install DiffVoxel: Navigate to the `diffvoxel` folder and execute:
```bash
python setup.py bdist_wheel
pip install --upgrade dist/diffvoxel-0.0.1-*.whl
```
- Install PointNet2: Navigate to the `Pointnet2_PyTorch/pointnet2_ops_lib` folder and execute:
```bash
python setup.py bdist_wheel
pip install --upgrade dist/pointnet2_ops-3.0.0-*.whl  # or the version you have
```

## :book: Usage
This repository provides the code for pre-training our network to learn local features that generalize across different shape categories, as well as the code for extracting the VADER features used in downstream tasks.
Our paper presents new insights into the transferability of features from networks trained on non-deformable shapes. Once the network is pretrained (we provide pretrained weights), VADER features can be extracted and used as replacements for traditional input features (like XYZ or HKS) in any downstream task.
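For context, one of the traditional per-vertex descriptors mentioned above, the Heat Kernel Signature (HKS), can be sketched in a few lines once a Laplace-Beltrami eigendecomposition of the shape is available. This is the generic textbook formulation, not code from this repository:

```python
import numpy as np

def heat_kernel_signature(evals, evecs, times):
    """Heat Kernel Signature: hks(x, t) = sum_i exp(-t * lam_i) * phi_i(x)**2.

    evals : (K,)   Laplace-Beltrami eigenvalues (ascending)
    evecs : (V, K) corresponding eigenfunctions sampled at the V vertices
    times : (T,)   diffusion time scales
    Returns a (V, T) per-vertex descriptor.
    """
    # exp(-t * lam) for every (eigenvalue, time) pair -> shape (K, T)
    coefs = np.exp(-np.outer(evals, times))
    # weight each squared eigenfunction by its decay coefficient
    return (evecs ** 2) @ coefs
```

VADER features are used exactly where a descriptor like this (or raw XYZ) would otherwise be fed to the downstream network.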
For all experiments, we adapted the code from [Diffusion-Net](https://github.com/nmwsharp/diffusion-net/tree/master/experiments/functional_correspondence), by substituting their input features with our VADER features. Visit their repository for detailed usage instructions.
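As a minimal sketch of what this substitution looks like in practice: instead of handing the network per-vertex XYZ coordinates, the data-loading step reads a precomputed VADER feature array for the shape. The function name and the assumption that features are stored as one `.npy` array per shape are illustrative, not the repository's actual API:

```python
from typing import Optional
import numpy as np

def load_input_features(xyz: np.ndarray, vader_path: Optional[str] = None) -> np.ndarray:
    """Return per-vertex input features for a DiffusionNet-style model.

    xyz        : (V, 3) vertex coordinates (the traditional input features).
    vader_path : optional path to a precomputed (V, C) VADER feature array;
                 when given, these replace the XYZ features entirely.
    """
    if vader_path is None:
        return xyz.astype(np.float32)  # baseline: raw coordinates
    feats = np.load(vader_path).astype(np.float32)  # hypothetical .npy layout
    assert feats.shape[0] == xyz.shape[0], "expected one feature row per vertex"
    return feats
```

The rest of the training pipeline is unchanged; only the input channel count of the first layer needs to match the feature dimension `C`.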
- **Architecture Code**: Located in the UPDesc folder.
- **Pretrained Models**: Two models pretrained on the 3DMatch dataset are provided in the `UPDesc/demo/trained_models` folder, one using supervised NCE loss and the other using unsupervised cycle loss.
- **Extracting VADER Features**: Use the `extract_vader.py` script in `UPDesc/demo/` as follows:

```bash
python3 extract_vader.py --model UPDescUniScale --ckpt ./trained_models/name_of_pretrained_model/weights.ckpt --hparams ./trained_models/name_of_pretrained_model/hparams.yaml --data_root ./path/to/data --scale 6.0 --out_root ./path/to/save
```
where the scale parameter is the factor by which the network's receptive field is multiplied. It can be found either with the MMD-loss optimization described in the paper, or empirically (we found that scales between 5 and 6.5 work better for area-normalized human shapes, and scales between 4 and 6 work better for L2-normalized RNA shapes).

## :chart_with_upwards_trend: Results
If you wish to report our results, we have summarized them below. Our method is referred to as **VADER**. `X on Y` indicates that the method was trained on dataset `X` and tested on dataset `Y`.

- **Near-Isometric Shape Matching**: We provide results on the FAUST (F), SCAPE (S) and SHREC (SH) datasets, using their remeshed versions. We report the mean geodesic error, following the protocol used in all deep functional map papers. Our method is **unsupervised**.
| Method | F on F | S on S | F on S | S on F | F on SH | S on SH |
| --- | --- | --- | --- | --- | --- | --- |
| **VADER** | 3.9 | 4.2 | 4.1 | 3.9 | 6.4 | 6.9 |

- **Molecular Surface Segmentation**: We provide results on the RNA molecules dataset. We report the mean accuracy, following the same protocol as the original paper. Our method is **supervised**. We provide results for training on the full dataset, on only 50 shapes, and on only 100 shapes.
| Method | Full Dataset | 50 Shapes | 100 Shapes |
| --- | --- | --- | --- |
| **VADER** | 92.6 ± 0.02% | 83.2 ± 0.20% | 86.8 ± 0.09% |

- **Partial Animal Matching**: We provide results on the SHREC16’ Cuts dataset. We report the mean geodesic error, following the same protocol as all deep functional map papers. Our method is **supervised**.

| Method | SHREC16’ Cuts |
| --- | --- |
| **VADER** | 3.7 |

## :mortar_board: Citation
If you find this work useful in your research, please consider citing:
```bibtex
@inproceedings{attaiki2023vader,
title={Generalizable Local Feature Pre-training for Deformable Shape Analysis},
author={Souhaib Attaiki and Lei Li and Maks Ovsjanikov},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}
```