Towards Large-scale 3D Representation Learning with Multi-dataset Point Prompt Training

The rapid advancement of deep learning models is often attributed to their ability to leverage massive training data. In contrast, 3D deep learning has not yet fully benefited from this advantage, mainly due to the limited availability of large-scale 3D datasets. Merging multiple available data sources and training a single model on them collaboratively is a potential solution. However, owing to the large domain gap between 3D point cloud datasets, such mixed supervision can adversely affect the model and lead to degraded performance (i.e., negative transfer) compared to single-dataset training. In view of this challenge, we introduce Point Prompt Training (PPT), a novel framework for multi-dataset synergistic learning in the context of 3D representation learning that supports multiple pre-training paradigms. Building on this framework, we propose Prompt-driven Normalization, which adapts the model to different datasets with domain-specific prompts, and Language-guided Categorical Alignment, which unifies the label spaces of multiple datasets by leveraging the relationships between label texts. Extensive experiments verify that PPT can overcome the negative transfer associated with synergistic learning and produce generalizable representations. Notably, it achieves state-of-the-art performance on each dataset using a single weight-shared model with supervised multi-dataset training. Moreover, when used as a pre-training framework, it outperforms other pre-training approaches in representation quality and attains remarkable state-of-the-art performance across more than ten diverse downstream tasks spanning both indoor and outdoor 3D scenarios.
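To make the Prompt-driven Normalization idea concrete, here is a minimal NumPy sketch. All names (`PromptDrivenNorm`, the dataset keys, the additive modulation of the affine scale) are illustrative assumptions, not the paper's actual API: the intent is only to show how a shared normalization layer can be conditioned on a learnable, dataset-specific prompt so one weight-shared backbone serves multiple datasets.

```python
import numpy as np

class PromptDrivenNorm:
    """Hypothetical sketch: shared normalization whose affine parameters
    are modulated by a per-dataset prompt vector (illustrative only)."""

    def __init__(self, num_channels, dataset_names, eps=1e-5):
        self.eps = eps
        # Shared affine parameters, as in a standard normalization layer.
        self.gamma = np.ones(num_channels)
        self.beta = np.zeros(num_channels)
        # One learnable prompt per dataset; zero-initialized so the layer
        # starts out identical to plain normalization.
        self.prompts = {name: np.zeros(num_channels) for name in dataset_names}

    def __call__(self, x, dataset):
        # x: (num_points, num_channels) point features from one dataset.
        mean = x.mean(axis=0, keepdims=True)
        var = x.var(axis=0, keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        # The domain-specific prompt shifts the shared scale, letting the
        # same backbone adapt its statistics per dataset.
        p = self.prompts[dataset]
        return (self.gamma + p) * x_hat + self.beta

norm = PromptDrivenNorm(4, ["ScanNet", "S3DIS", "Structured3D"])
feats = np.random.randn(100, 4)
out = norm(feats, "ScanNet")
```

During multi-dataset training, each batch would be routed through the prompt of its source dataset, while `gamma`, `beta`, and the backbone weights stay shared across all datasets.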

CVPR 2024 (PDF / Abstract)

Results from the Paper


Ranked #3 on 3D Semantic Segmentation on SemanticKITTI (val mIoU metric, using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| LIDAR Semantic Segmentation | nuScenes | PPT + SparseUNet | val mIoU | 0.786 | #7 | |
| Semantic Segmentation | S3DIS | PPT + SparseUNet | Mean IoU | 78.1 | #5 | |
| | | | mAcc | 85.4 | #7 | |
| | | | oAcc | 92.2 | #4 | |
| | | | Number of params | N/A | #1 | |
| Semantic Segmentation | S3DIS Area5 | PPT + SparseUNet | mIoU | 72.7 | #9 | |
| | | | oAcc | 91.5 | #9 | |
| | | | mAcc | 78.2 | #9 | |
| | | | Number of params | N/A | #2 | |
| Semantic Segmentation | ScanNet | PPT + SparseUNet | test mIoU | 76.6 | #5 | |
| | | | val mIoU | 76.4 | #6 | |
| 3D Semantic Segmentation | ScanNet200 | PPT + SparseUNet | val mIoU | 31.9 | #6 | |
| | | | test mIoU | 33.2 | #4 | |
| 3D Semantic Segmentation | SemanticKITTI | PPT + SparseUNet | val mIoU | 71.4% | #3 | Yes |

Methods


No methods listed for this paper.