Zero-shot 3D classification
12 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
ShapeLLM: Universal 3D Object Understanding for Embodied Interaction
This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and language.
PointCLIP: Point Cloud Understanding by CLIP
On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in 2D.
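As a rough, self-contained illustration of this style of pipeline (not the authors' code): project a point cloud into multi-view depth maps, encode each view, fuse the views into one global feature, and score it against class embeddings. In the sketch below the "CLIP" image tower and the class prompt embeddings are random placeholders, and the inter-view fusion is a plain mean rather than a learned adapter.

```python
# Sketch of zero-shot point-cloud classification via multi-view depth projection.
# Placeholder encoders stand in for frozen, pre-trained CLIP towers.
import math
import torch
import torch.nn.functional as F

def depth_maps(points, n_views=6, res=64):
    """Orthographically project a point cloud (N, 3) in [-1, 1]^3 into crude depth maps."""
    maps = []
    for v in range(n_views):
        angle = 2 * math.pi * v / n_views
        rot = torch.tensor([[math.cos(angle), -math.sin(angle), 0.0],
                            [math.sin(angle),  math.cos(angle), 0.0],
                            [0.0, 0.0, 1.0]], dtype=points.dtype)
        p = points @ rot.T
        xy = ((p[:, :2] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
        depth = torch.zeros(res, res)
        depth[xy[:, 1], xy[:, 0]] = p[:, 2]           # last write wins: a crude z-buffer
        maps.append(depth)
    return torch.stack(maps)                          # (n_views, res, res)

image_encoder = torch.nn.Linear(64 * 64, 512)         # placeholder for CLIP's image tower
class_embeddings = torch.rand(3, 512)                 # stand-in for encoded class prompts

points = torch.rand(1024, 3) * 2 - 1                  # dummy point cloud
view_feats = image_encoder(depth_maps(points).flatten(1))   # (n_views, 512)
global_feat = view_feats.mean(0)                      # inter-view fusion (here: a plain mean)
scores = F.cosine_similarity(global_feat[None], class_embeddings, dim=-1)
print("predicted class index:", scores.argmax().item())
```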
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
In this paper, we first combine CLIP and GPT into a unified 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection.
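The prompting side of this idea can be sketched roughly as follows (an assumed, simplified stand-in, not the paper's code): each class gets a few 3D-aware prompt templates, and the averaged text embeddings of those prompts become the zero-shot classifier weights. In the paper the descriptions come from GPT and the encoder is CLIP's text tower; both are replaced by placeholders here.

```python
# Minimal sketch of building 3D-aware prompt ensembles for zero-shot classification.
# The hash-based "tokenizer" and EmbeddingBag stand in for CLIP's text encoder.
import torch

classes = ["chair", "airplane", "lamp"]
templates = [
    "a depth map of a {}.",
    "a silhouette photo of a {} projected from a point cloud.",
]

text_encoder = torch.nn.EmbeddingBag(10000, 512)       # placeholder text tower

def encode(prompt):
    ids = torch.tensor([[hash(w) % 10000 for w in prompt.split()]])
    return text_encoder(ids).squeeze(0)                # (512,)

# One classifier weight per class: the mean embedding of its prompt variants.
class_weights = torch.stack([
    torch.stack([encode(t.format(c)) for t in templates]).mean(0)
    for c in classes
])
print(class_weights.shape)                             # torch.Size([3, 512])
```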
Uni3D: Exploring Unified 3D Representation at Scale
Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language.
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training
To address this issue, we propose CLIP2Point, an image-depth pre-training method by contrastive learning to transfer CLIP to the 3D domain, and adapt it to point cloud classification.
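A minimal sketch of the contrastive image-depth alignment this describes might look like the following. Assumptions: the encoders are random placeholder layers (in the paper the image side is a frozen CLIP encoder and the depth side is trained to match it), and the symmetric InfoNCE loss is a generic formulation rather than the paper's exact objective.

```python
# Contrastive alignment of paired depth/image embeddings (generic InfoNCE sketch).
import torch
import torch.nn.functional as F

depth_encoder = torch.nn.Linear(512, 256)   # trainable depth-map encoder (placeholder)
image_encoder = torch.nn.Linear(512, 256)   # frozen image encoder (placeholder)

def info_nce(depth_feats, image_feats, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired depth/image embeddings."""
    d = F.normalize(depth_feats, dim=-1)
    i = F.normalize(image_feats, dim=-1)
    logits = d @ i.T / temperature                     # (B, B) similarity matrix
    targets = torch.arange(len(d))                     # positives lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

batch = torch.rand(8, 512)                             # dummy paired inputs
loss = info_nce(depth_encoder(batch), image_encoder(batch))
loss.backward()                                        # would update the depth encoder
```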
ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets.
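As a loose illustration of that alignment step (assumed structure, not the released ULIP code), the sketch below trains a placeholder point-cloud encoder so its embedding of an object moves toward the frozen image and text embeddings of the same object; the real method uses CLIP features and a contrastive objective, whereas simple cosine losses are used here.

```python
# Tri-modal alignment sketch: pull a 3D embedding toward paired image/text embeddings.
import torch
import torch.nn.functional as F

point_encoder = torch.nn.Linear(3 * 1024, 512)    # placeholder point-cloud encoder
image_feat = torch.rand(4, 512)                   # stand-in for frozen CLIP image features
text_feat = torch.rand(4, 512)                    # stand-in for frozen CLIP text features

points = torch.rand(4, 1024, 3)                   # batch of dummy point clouds
p = F.normalize(point_encoder(points.flatten(1)), dim=-1)

loss = (1 - F.cosine_similarity(p, F.normalize(image_feat, dim=-1)).mean()) + \
       (1 - F.cosine_similarity(p, F.normalize(text_feat, dim=-1)).mean())
loss.backward()                                   # only the point encoder would be updated
```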
ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding
It achieves a new SOTA of 50.6% (top-1) on Objaverse-LVIS and 84.7% (top-1) on ModelNet40 in zero-shot classification.
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
Beyond First Impressions: Integrating Joint Multi-modal Cues for Comprehensive 3D Representation
This insufficient synergy overlooks the idea that a robust 3D representation should be aligned with the joint vision-language space, rather than with each modality independently.
ViT-Lens: Initiating Omni-Modal Exploration through 3D Insights
A well-trained lens with a ViT backbone has the potential to serve as one of these foundation models, supervising the learning of subsequent modalities.