- 19/08/2023: As our CVPR23 challenge has finished (congratulations to Cattalyya Nuengsikapian!), our test set has now been made public. The dataloaders have been updated accordingly: using the `EvalLoader` classes is no longer necessary 😊
- 18/06/2023: The 3DCoMPaT++ CVPR23 challenge has concluded. We would like to congratulate Cattalyya Nuengsikapian, winner of both the coarse-grained and fine-grained tracks, for her excellent performance in our challenge 🎉
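Since the test set is public, all splits can be iterated with the regular dataloader classes. The snippet below is only an illustrative sketch of that workflow: the names (`ShapeSample`, `iter_split`) are placeholders, not the actual 3DCoMPaT++ API.

```python
# Illustrative only: class and function names are placeholders,
# not the actual 3DCoMPaT++ dataloader API.
from dataclasses import dataclass


@dataclass
class ShapeSample:
    shape_id: str
    split: str  # "train", "valid" or "test" -- test is now public


def iter_split(samples, split):
    """Yield samples from one split; no special eval-only loader needed."""
    for s in samples:
        if s.split == split:
            yield s


samples = [ShapeSample("chair_0001", "train"), ShapeSample("lamp_0042", "test")]
print([s.shape_id for s in iter_split(samples, "test")])  # -> ['lamp_0042']
```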
3DCoMPaT++ is a multimodal 2D/3D dataset of 16 million rendered views of more than 10 million stylized 3D shapes, carefully annotated at the part-instance level, with matching RGB pointclouds, textured 3D meshes, depth maps, and segmentation masks. This work builds upon 3DCoMPaT, the first version of this dataset.
We plan to further extend the dataset: stay tuned! 🔥
To explore our dataset, please check out our integrated web browser:
For more information about the shape browser, please check out our dedicated Wiki page.
To get started straight away, here is a Jupyter notebook (no downloads required, just run and play!):
For a deeper dive into our dataset, please check our online documentation:
We provide baseline models for 2D and 3D tasks, following the structure below:
- 2D Experiments
- 2D Shape Classifier: ResNet50
- 2D Part and Material Segmentation: SegFormer
- 3D Experiments
- 3D Shape Classification: DGCNN - PCT - PointNet++ - PointStack - CurveNet - PointNeXt - PointMLP
- 3D Part Segmentation: PCT - PointNet++ - PointStack - CurveNet - PointNeXt
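As a rough illustration of what the 3D part-segmentation baselines consume, the sketch below builds a dummy sample (N xyz points with per-point part labels) and applies unit-sphere normalization, a common preprocessing step for PointNet++-style models. The shapes and names here are illustrative assumptions, not the dataset's actual on-disk format.

```python
# Illustrative sketch only: sizes and label counts are assumptions,
# not the actual 3DCoMPaT++ data format.
import numpy as np


def normalize_pointcloud(points):
    """Center a pointcloud and scale it into the unit sphere,
    as commonly done before feeding PointNet++-style models."""
    centered = points - points.mean(axis=0)
    scale = np.max(np.linalg.norm(centered, axis=1))
    return centered / scale


N_POINTS = 2048
rng = np.random.default_rng(0)

# One dummy sample: N xyz coordinates with a per-point part label.
points = rng.uniform(-1.0, 2.0, size=(N_POINTS, 3))
part_labels = rng.integers(0, 4, size=N_POINTS)  # e.g. 4 parts on a chair

points = normalize_pointcloud(points)
print(points.shape, part_labels.shape)  # (2048, 3) (2048,)
```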
As part of the C3DV CVPR 2023 workshop, we organized a modelling challenge based on 3DCoMPaT++. To learn more about the challenge, check out this link:
⚙️ For computer time, this research used the resources of the Supercomputing Laboratory at King Abdullah University of Science & Technology (KAUST). We extend our sincere gratitude to the KAUST HPC Team for their invaluable assistance and support during the course of this research project. Their expertise and dedication continue to play a crucial role in the success of our work.
💾 We also thank the Amazon Open Data program for providing us with free storage of our large-scale data on their servers. Their generosity and commitment to making research data widely accessible have greatly facilitated our research efforts.
If you use our dataset, please cite the following two references:
@article{slim2023_3dcompatplus,
  title={3DCoMPaT++: An Improved Large-scale 3D Vision Dataset for Compositional Recognition},
  author={Habib Slim and Xiang Li and Yuchen Li and Mahmoud Ahmed and Mohamed Ayman and Ujjwal Upadhyay and Ahmed Abdelreheem and Arpit Prajapati and Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
  year={2023}
}
@article{li2022_3dcompat,
  title={3D CoMPaT: Composition of Materials on Parts of 3D Things},
  author={Yuchen Li and Ujjwal Upadhyay and Habib Slim and Ahmed Abdelreheem and Arpit Prajapati and Suhail Pothigara and Peter Wonka and Mohamed Elhoseiny},
  journal={ECCV},
  year={2022}
}
This repository is owned and maintained by Habib Slim, Xiang Li, Mahmoud Ahmed and Mohamed Ayman, from the Vision-CAIR group.
- [Li et al., 2022] - 3DCoMPaT: Composition of Materials on Parts of 3D Things.
- [Xie et al., 2021] - SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.
- [He et al., 2015] - Deep Residual Learning for Image Recognition.