# 3DCoMPaT++: An improved Large-scale 3D Vision Dataset for Compositional Recognition

[![Paper](https://img.shields.io/badge/Paper-red?logo=arxiv&logoWidth=15)](https://arxiv.org/abs/2310.18511)
[![Jupyter Quickstart](https://img.shields.io/badge/Quickstart-orange?logo=google-colab&logoWidth=15)](https://colab.research.google.com/drive/1OpgYL_cxekAqZF8B8zuQZkPQxUIxzV0K?usp=sharing)
[![Documentation](https://img.shields.io/badge/📚%20Documentation-blue?logoColor=white&logoWidth=20)](https://3dcompat-dataset.org/doc/)
[![Download](https://img.shields.io/badge/📦%20Download-grey?logoColor=white&logoWidth=20)](https://3dcompat-dataset.org/doc/dl-dataset.html)
[![Website](https://img.shields.io/badge/🌐%20Website-green?logoColor=white&logoWidth=20)](https://3dcompat-dataset.org/)
[![Workshop](https://img.shields.io/badge/🔨%20Workshop-purple?logoColor=white&logoWidth=20)](https://3dcompat-dataset.org/workshop/)
[![Challenge](https://img.shields.io/badge/🏆%20Challenge-critical?logoColor=white&logoWidth=20)](https://eval.ai/web/challenges/challenge-page/2031)

## 📰 News

- **19/08/2023**: As our CVPR23 challenge has finished (congratulations to [Cattalyya Nuengsikapian](https://3dcompat-dataset.org/workshop/#main-section)!), our test set has now been made public. Dataloaders have been updated accordingly: using the `EvalLoader` classes is no longer necessary 😊

- **18/06/2023**: The 3DCoMPaT++ CVPR23 challenge has concluded. We would like to congratulate [Cattalyya Nuengsikapian](https://3dcompat-dataset.org/workshop/#main-section), winner of both the **coarse** and **fine-grained** tracks, on her excellent performance 🎉

## Summary

- [Introduction](#📚-introduction)
- [Getting started](#🚀-getting-started)
- [Baselines](#📊-baselines)
- [Challenge](#🏆-challenge)
- [Acknowledgments](#🙏-acknowledgments)
- [Citation](#citation)


![3DCoMPaT models view](img/header_gif.gif)


## 📚 Introduction

3DCoMPaT++ is a multimodal 2D/3D dataset of 16 million rendered views of more than 10 million stylized 3D shapes carefully annotated at **part-instance** level, alongside matching **RGB pointclouds**, **3D textured meshes**, **depth maps** and **segmentation masks**. This work builds upon [3DCoMPaT](https://3dcompat-dataset.org/), the first version of this dataset.
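
To make the listed modalities concrete, here is a minimal sketch of what a single stylized-shape sample could look like. All field names, array shapes, and image resolutions below are illustrative assumptions, not the actual 3DCoMPaT++ schema; see the documentation for the real data layout.

```python
# Illustrative sketch only: field names, shapes and resolutions are assumptions,
# not the actual 3DCoMPaT++ data layout.
import numpy as np

sample = {
    "shape_id": "chair_0001",                                   # stylized shape identifier (made up)
    "pointcloud": np.zeros((2048, 6), dtype=np.float32),        # XYZ + RGB per point
    "part_labels": np.zeros(2048, dtype=np.int64),              # part-instance label per point
    "material_labels": np.zeros(2048, dtype=np.int64),          # material label per point
    "rgb_view": np.zeros((256, 256, 3), dtype=np.uint8),        # one rendered RGB view
    "depth_map": np.zeros((256, 256), dtype=np.float32),        # matching depth map
    "segmentation_mask": np.zeros((256, 256), dtype=np.int64),  # part segmentation mask
}
```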

**We plan to further extend the dataset: stay tuned!** 🔥


## 🔍 Browser

To explore our dataset, please check out our integrated web browser:



*3DCoMPaT Browser preview*


For more information about the shape browser, please check out [our dedicated Wiki page](https://3dcompat-dataset.org/doc/browser.html).


## 🚀 Getting started

To get started straight away, here is a Jupyter notebook (no downloads required, just **run and play**!):

[![Jupyter Quickstart](https://img.shields.io/badge/Quickstart-orange?logo=google-colab&logoWidth=15)](https://colab.research.google.com/drive/1OpgYL_cxekAqZF8B8zuQZkPQxUIxzV0K?usp=sharing)

For a deeper dive into our dataset, please check our online documentation:

[![Documentation](https://img.shields.io/badge/📚%20Documentation-blue?logoColor=white)](https://3dcompat-dataset.org/doc/)
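
For readers who want to prototype a training loop before downloading anything, the self-contained sketch below wraps stand-in tensors in a standard PyTorch `DataLoader`. The point count, class counts, and per-sample layout are placeholders; the Colab notebook and documentation above show the actual loaders.

```python
# Self-contained sketch using random stand-in tensors; point count, class
# counts and field layout are placeholders, not 3DCoMPaT++ specifics.
import torch
from torch.utils.data import DataLoader, TensorDataset

N_POINTS = 2048           # points per cloud (placeholder)
N_SHAPE_CLASSES = 42      # placeholder class count
N_PART_CLASSES = 200      # placeholder part-label count

points = torch.rand(128, N_POINTS, 6)                            # XYZ + RGB pointclouds
shape_labels = torch.randint(0, N_SHAPE_CLASSES, (128,))         # shape-class targets
part_labels = torch.randint(0, N_PART_CLASSES, (128, N_POINTS))  # per-point part targets

loader = DataLoader(
    TensorDataset(points, shape_labels, part_labels),
    batch_size=32, shuffle=True,
)

for pts, y_shape, y_part in loader:
    # pts: (32, 2048, 6), y_shape: (32,), y_part: (32, 2048)
    break
```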


## 📊 Baselines

We provide baseline models for 2D and 3D tasks, following the structure below:

- **2D Experiments**
  - [2D Shape Classifier](./models/2D/shape_classifier/): ResNet50
  - [2D Part and Material Segmentation](./models/2D/segmentation/): SegFormer
- **3D Experiments**
  - [3D Shape Classification](./models/3D/): DGCNN, PCT, PointNet++, PointStack, CurveNet, PointNeXt, PointMLP
  - [3D Part Segmentation](./models/3D/): PCT, PointNet++, PointStack, CurveNet, PointNeXt
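
As a rough illustration of the 2D shape-classification baseline above, the sketch below puts a new classification head on a torchvision ResNet50 and runs one training step on random data. The class count and input batch are placeholders; the actual training code lives in the linked model directories.

```python
# Sketch of a ResNet50-based 2D shape classifier, in the spirit of the
# baseline above; NUM_CLASSES and the random batch are placeholders, and
# the real training scripts live in ./models/2D/shape_classifier/.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 42  # placeholder: set to the number of 3DCoMPaT++ shape classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)           # a batch of rendered views (stand-in)
labels = torch.randint(0, NUM_CLASSES, (8,))  # shape-class targets (stand-in)

logits = model(images)                        # forward pass
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```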


## 🏆 Challenge

As part of the [C3DV CVPR 2023 workshop](https://3dcompat-dataset.org/workshop/), we are organizing a modelling challenge based on 3DCoMPaT++.
To learn more about the challenge, check out the link below:

[![Challenge](https://img.shields.io/badge/🏆%20Challenge-critical?logoColor=white&logoWidth=20)](https://eval.ai/web/challenges/challenge-page/2031)


## 🙏 Acknowledgments

⚙️ For computer time, this research used the resources of the Supercomputing Laboratory at [King Abdullah University of Science & Technology (KAUST)](https://www.kaust.edu.sa/).
We extend our sincere gratitude to the [KAUST HPC Team](https://www.hpc.kaust.edu.sa) for their invaluable assistance and support throughout this research project. Their expertise and dedication continue to play a crucial role in the success of our work.

💾 We also thank the [Amazon Open Data](https://aws.amazon.com/opendata) program for providing us with free storage of our large-scale data on their servers. Their generosity and commitment to making research data widely accessible have greatly facilitated our research efforts.

## Citation

If you use our dataset, please cite the following two references:

```bibtex
@article{slim2023_3dcompatplus,
    title={3DCoMPaT++: An improved Large-scale 3D Vision Dataset
           for Compositional Recognition},
    author={Habib Slim and Xiang Li and Yuchen Li and
            Mahmoud Ahmed and Mohamed Ayman and Ujjwal Upadhyay and
            Ahmed Abdelreheem and Arpit Prajapati and
            Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
    journal={arXiv preprint arXiv:2310.18511},
    year={2023}
}
```

```bibtex
@article{li2022_3dcompat,
    title={3D CoMPaT: Composition of Materials on Parts of 3D Things},
    author={Yuchen Li and Ujjwal Upadhyay and Habib Slim and
            Ahmed Abdelreheem and Arpit Prajapati and
            Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
    journal={ECCV},
    year={2022}
}
```

This repository is owned and maintained by Habib Slim, Xiang Li, Mahmoud Ahmed and Mohamed Ayman, from the Vision-CAIR group.

## References

1. _[Li et al., 2022]_ - 3DCoMPaT: Composition of Materials on Parts of 3D Things.
2. _[Xie et al., 2021]_ - SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.
3. _[He et al., 2015]_ - Deep Residual Learning for Image Recognition.