Training-free 3D Point Cloud Classification
7 papers with code • 2 benchmarks • 2 datasets
Evaluation on target datasets for 3D Point Cloud Classification without any training on the target data.
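To make the protocol concrete, here is a minimal sketch of training-free evaluation: a frozen encoder is scored on a target dataset purely by feature similarity, with no gradient updates or fine-tuning. The names `encoder`, `loader`, and `class_feats` are hypothetical stand-ins for a real model and benchmark.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def training_free_accuracy(encoder, loader, class_feats):
    # Score a frozen encoder directly on the target dataset: no gradient
    # updates, no fine-tuning; predictions come from feature similarity alone.
    correct = total = 0
    for points, labels in loader:
        feats = F.normalize(encoder(points), dim=-1)
        pred = (feats @ F.normalize(class_feats, dim=-1).t()).argmax(-1)
        correct += (pred == labels).sum().item()
        total += labels.numel()
    return correct / total

# toy demo: a stand-in "encoder" and random data in place of a real benchmark
encoder = lambda pts: pts.mean(dim=1)
loader = [(torch.randn(4, 128, 512), torch.randint(0, 10, (4,))) for _ in range(3)]
print(training_free_accuracy(encoder, loader, torch.randn(10, 512)))
```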
Most implemented papers
PointCLIP: Point Cloud Understanding by CLIP
On top of that, we design an inter-view adapter to better extract the global feature and adaptively fuse few-shot knowledge learned in 3D into the 2D-pre-trained CLIP.
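As a rough illustration of the adapter idea, the PyTorch sketch below fuses per-view CLIP features into a global descriptor with a small MLP and residually blends it back into each view. The layer sizes, view count, and blend ratio are placeholder assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InterViewAdapter(nn.Module):
    """Illustrative inter-view adapter: fuses per-view CLIP features into a
    global descriptor and blends it back into each view.
    (Hypothetical layout; see the PointCLIP paper for the exact design.)"""

    def __init__(self, num_views: int = 6, feat_dim: int = 512, ratio: float = 0.6):
        super().__init__()
        self.ratio = ratio  # residual blend between adapted and original features
        self.fuse = nn.Sequential(
            nn.Linear(num_views * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, num_views * feat_dim),
        )
        self.num_views, self.feat_dim = num_views, feat_dim

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, feat_dim) from the frozen CLIP visual encoder
        b = view_feats.shape[0]
        fused = self.fuse(view_feats.flatten(1)).view(b, self.num_views, self.feat_dim)
        # blend adapted global knowledge with the original zero-shot features
        return self.ratio * fused + (1 - self.ratio) * view_feats

adapter = InterViewAdapter()
print(adapter(torch.randn(2, 6, 512)).shape)  # torch.Size([2, 6, 512])
```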
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
In this paper, we first combine CLIP and GPT into a unified 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection.
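To make the zero-shot pipeline concrete, here is a minimal sketch of the final classification step, assuming per-view CLIP features of projected depth maps and CLIP text features of GPT-generated class descriptions have already been computed. The prompt strings and feature shapes are illustrative only.

```python
import torch
import torch.nn.functional as F

# Hypothetical GPT-style class descriptions; in the paper these come from
# querying a language model for 3D-specific prompts.
prompts = {
    "airplane": "a depth map of an airplane with wings and a fuselage",
    "chair": "a depth map of a chair with four legs and a backrest",
}

def zero_shot_classify(view_feats, text_feats, class_names):
    # view_feats: (num_views, dim) CLIP features of projected depth maps
    # text_feats: (num_classes, dim) CLIP text features of the prompts
    view_feats = F.normalize(view_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = view_feats @ text_feats.t()                # per-view similarity
    return class_names[logits.mean(0).argmax().item()]  # average over views

# stand-in random features; real ones come from CLIP's frozen encoders
print(zero_shot_classify(torch.randn(10, 512), torch.randn(2, 512), list(prompts)))
```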
Parameter is Not All You Need: Starting from Non-Parametric Networks for 3D Point Cloud Analysis
We present a Non-parametric Network for 3D point cloud analysis, Point-NN, which consists of purely non-learnable components: farthest point sampling (FPS), k-nearest neighbors (k-NN), and pooling operations, with trigonometric functions.
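Because every component is non-learnable, the whole pipeline fits in a few lines of plain PyTorch. The sketch below implements trigonometric embedding, FPS, and k-NN max-pooling in simplified form; the dimensions and frequency constants are illustrative, not Point-NN's actual hyperparameters.

```python
import torch

def trig_embed(xyz, dim=48, alpha=100.0, beta=1000.0):
    # sinusoidal encoding of raw coordinates: no learned weights
    freqs = beta ** (torch.arange(dim // 6, dtype=torch.float32) / (dim // 6))
    x = alpha * xyz.unsqueeze(-1) * freqs                     # (N, 3, dim//6)
    return torch.cat([x.sin(), x.cos()], dim=-1).flatten(1)   # (N, dim)

def farthest_point_sample(xyz, m):
    # iteratively pick the point farthest from the current sample set
    n = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    for i in range(1, m):
        dist = torch.minimum(dist, ((xyz - xyz[idx[i - 1]]) ** 2).sum(-1))
        idx[i] = dist.argmax()
    return idx

def knn_group_pool(xyz, feats, k=8):
    # group each point with its k nearest neighbors, then max-pool
    nn_idx = torch.cdist(xyz, xyz).topk(k, largest=False).indices  # (N, k)
    return feats[nn_idx].max(dim=1).values                         # (N, dim)

pts = torch.rand(1024, 3)
feats = trig_embed(pts)
sampled = farthest_point_sample(pts, 256)
print(knn_group_pool(pts[sampled], feats[sampled]).shape)  # torch.Size([256, 48])
```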
CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention
Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual representations with great transferability, achieving promising accuracy for zero-shot classification.
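A minimal sketch of the parameter-free attention idea, assuming precomputed CLIP spatial visual features and textual class features: attention weights come from raw feature similarity, so nothing is trained and the method stays strictly zero-shot. The shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def parameter_free_attention(visual, textual):
    # visual: (hw, d) spatial CLIP features; textual: (c, d) class text features.
    # Attention weights come from raw cosine similarity, so no parameters
    # are learned and the enhancement remains zero-shot.
    v = F.normalize(visual, dim=-1)
    t = F.normalize(textual, dim=-1)
    attn = v @ t.t()                             # (hw, c) cross-modal similarity
    v_updated = F.softmax(attn, dim=-1) @ t      # text-aware visual features
    t_updated = F.softmax(attn.t(), dim=-1) @ v  # visual-aware text features
    return v_updated, t_updated

v2, t2 = parameter_free_attention(torch.randn(49, 512), torch.randn(10, 512))
print(v2.shape, t2.shape)  # torch.Size([49, 512]) torch.Size([10, 512])
```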
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training
To address this issue, we propose CLIP2Point, an image-depth pre-training method based on contrastive learning, which transfers CLIP to the 3D domain and adapts it to point cloud classification.
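Here is a sketch of the kind of contrastive objective such image-depth pre-training can use, assuming paired image and depth-map embeddings of the same object within a batch; this is a generic symmetric InfoNCE, not necessarily CLIP2Point's exact loss.

```python
import torch
import torch.nn.functional as F

def image_depth_contrastive_loss(img_emb, depth_emb, temperature=0.07):
    # Symmetric InfoNCE: paired image/depth renders of the same object are
    # positives; all other pairs in the batch are negatives.
    img = F.normalize(img_emb, dim=-1)
    dep = F.normalize(depth_emb, dim=-1)
    logits = img @ dep.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(img.shape[0])    # matching pairs sit on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

print(image_depth_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)).item())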
ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets.
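A sketch of how such triplet alignment can be expressed, assuming a learnable point-cloud encoder output and frozen CLIP image/text embeddings from the same synthesized triplet; the symmetric InfoNCE form and temperature here are assumptions, not ULIP's verbatim objective.

```python
import torch
import torch.nn.functional as F

def ulip_alignment_loss(pc_emb, img_emb, txt_emb, temperature=0.07):
    # Align the learnable 3D encoder's output with frozen CLIP image and text
    # features from the same (point cloud, image, text) triplet; the shared
    # image-text space itself is left untouched.
    pc = F.normalize(pc_emb, dim=-1)
    labels = torch.arange(pc.shape[0])
    loss = 0.0
    for anchor in (F.normalize(img_emb, dim=-1), F.normalize(txt_emb, dim=-1)):
        logits = pc @ anchor.t() / temperature
        loss = loss + (F.cross_entropy(logits, labels) +
                       F.cross_entropy(logits.t(), labels)) / 2
    return loss / 2

print(ulip_alignment_loss(torch.randn(8, 512),
                          torch.randn(8, 512),
                          torch.randn(8, 512)).item())
```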
ViT-Lens: Initiating Omni-Modal Exploration through 3D Insights
A well-trained lens paired with a ViT backbone can itself serve as such a foundation model, supervising the learning of subsequent modalities.
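As a rough sketch of the lens concept, the toy module below maps grouped point-cloud features into the token space of a pretrained ViT; the projection layout and dimensions are hypothetical, and the real ViT-Lens design is more involved.

```python
import torch
import torch.nn as nn

class PointLens(nn.Module):
    """Illustrative 'lens': a small trainable module that maps point-cloud
    patches into the token space of a frozen, pretrained ViT backbone.
    (Hypothetical layout; the actual ViT-Lens design differs in detail.)"""

    def __init__(self, in_dim: int = 3 * 32, vit_dim: int = 768):
        super().__init__()
        self.project = nn.Linear(in_dim, vit_dim)  # the lens itself

    def forward(self, point_patches: torch.Tensor) -> torch.Tensor:
        # point_patches: (batch, num_patches, in_dim) grouped point features
        return self.project(point_patches)  # tokens consumable by the frozen ViT

lens = PointLens()
print(lens(torch.randn(2, 64, 96)).shape)  # torch.Size([2, 64, 768])
```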