ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding

Recent advances in multimodal pre-training have shown promising efficacy in 3D representation learning by aligning multimodal features across 3D shapes, their 2D counterparts, and language descriptions. However, the methods existing frameworks use to gather multimodal data for 3D applications lack scalability and comprehensiveness, potentially constraining the full potential of multimodal learning. The main bottleneck lies in the limited scalability and comprehensiveness of the language modality. To address this, we introduce ULIP-2, a tri-modal pre-training framework that leverages state-of-the-art large multimodal models to automatically generate holistic language descriptions for 3D objects. It requires no 3D annotations and is therefore scalable to large datasets. We conduct experiments on two large-scale 3D datasets, Objaverse and ShapeNet, and build tri-modal datasets of 3D point clouds, images, and language from them to train ULIP-2. ULIP-2 achieves significant improvements in downstream zero-shot classification on ModelNet40 (74.0% top-1 accuracy); on the real-world ScanObjectNN benchmark, it reaches 91.5% overall accuracy with only 1.4 million parameters, marking a breakthrough in scalable multimodal 3D representation learning without human 3D annotations. The code, along with the generated tri-modal datasets, is available at https://github.com/salesforce/ULIP.
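The caption-generation step can be illustrated with an off-the-shelf vision-language captioner. The sketch below is a minimal, unofficial example that uses the BLIP-2 checkpoint `Salesforce/blip2-opt-2.7b` from Hugging Face `transformers` as a stand-in for the large multimodal model; the `caption_views` helper and its inputs are illustrative assumptions, not the repository's API.

```python
# Minimal sketch (not ULIP-2's official code): caption each rendered 2D view
# of a 3D object with an off-the-shelf BLIP-2 captioner.
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

def caption_views(view_images):
    """Return one generated caption per rendered view (illustrative helper)."""
    captions = []
    for image in view_images:  # each view is a PIL image of the 3D object
        inputs = processor(images=image, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=30)
        captions.append(processor.decode(out[0], skip_special_tokens=True))
    return captions
```

With captions in hand, pre-training aligns a 3D encoder to the image and text embeddings of a pre-trained vision-language model, and the same aligned space supports zero-shot classification by matching a point cloud against encoded class names. The following sketch assumes pre-computed, batch-aligned embeddings from hypothetical point-cloud, image, and text encoders; all function names are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

def tri_modal_loss(pc_emb, img_emb, txt_emb):
    """Pull each point-cloud embedding toward its rendered image and its
    generated caption; image-text alignment is inherited from the frozen
    vision-language encoders."""
    return contrastive_loss(pc_emb, img_emb) + contrastive_loss(pc_emb, txt_emb)

@torch.no_grad()
def zero_shot_classify(pc_emb, class_text_emb):
    """Assign each point cloud the class whose text embedding is closest."""
    pc_emb = F.normalize(pc_emb, dim=-1)
    class_text_emb = F.normalize(class_text_emb, dim=-1)
    return (pc_emb @ class_text_emb.t()).argmax(dim=-1)
```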


Results from the Paper


Ranked #4 on 3D Point Cloud Classification on ScanObjectNN (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| 3D Point Cloud Classification | ScanObjectNN | ULIP-2 + PointNeXt | Overall Accuracy | 91.5 | #4 | Yes |
| | | | Mean Accuracy | 91.2 | #1 | |
| | | | Number of params | 1.4M | #51 | |
| 3D Point Cloud Classification | ScanObjectNN | ULIP-2 + Point-BERT | Overall Accuracy | 89.0 | #18 | Yes |
| 3D Point Cloud Classification | ScanObjectNN | ULIP-2 + PointNeXt (no voting) | Overall Accuracy | 90.8 | #6 | Yes |
| | | | Mean Accuracy | 90.3 | #2 | |
| | | | Number of params | 1.4M | #51 | |

Methods


No methods listed for this paper.