Few-Shot 3D Point Cloud Classification

25 papers with code • 8 benchmarks • 1 dataset

Few-shot 3D point cloud classification evaluates how well a model can recognize 3D shapes represented as point clouds when only a handful of labeled examples per class are available, typically by transferring representations learned during pre-training.


Most implemented papers

Attention Is All You Need

tensorflow/tensor2tensor NeurIPS 2017

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
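The Transformer replaces recurrence entirely with attention. Its core operation, scaled dot-product attention, can be sketched in a few lines of NumPy (random inputs here are placeholders, not the paper's actual data):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```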

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

yanx27/Pointnet_Pointnet2_pytorch NeurIPS 2017

By exploiting metric space distances, our network is able to learn local features with increasing contextual scales.
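PointNet++ builds those contextual scales by repeatedly sampling centroids and grouping nearby points by metric-space distance. A minimal NumPy sketch of the two grouping primitives (farthest point sampling and ball query; the sizes and radius below are illustrative, not the paper's settings):

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Pick a well-spread subset of points, used to place the centroids
    of the local regions PointNet++ abstracts at each level."""
    n = points.shape[0]
    chosen = [0]
    dist = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        # track each point's distance to its nearest already-chosen centroid
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))  # next centroid: farthest remaining point
    return points[chosen]

def ball_query(points, centroid, radius):
    """Gather one centroid's neighborhood within a metric-space radius."""
    d = np.linalg.norm(points - centroid, axis=1)
    return points[d < radius]

rng = np.random.default_rng(0)
cloud = rng.uniform(size=(1024, 3))                 # toy point cloud
centroids = farthest_point_sampling(cloud, 32)
group = ball_query(cloud, centroids[0], radius=0.2)
print(centroids.shape)  # (32, 3)
```

Each group is then fed through a shared pointwise network and pooled, and the process repeats on the centroids to grow the receptive field hierarchically.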

Dynamic Graph CNN for Learning on Point Clouds

WangYueFt/dgcnn 24 Jan 2018

Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices.
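DGCNN's EdgeConv operates on a k-nearest-neighbor graph that is recomputed in feature space at every layer (hence "dynamic"). A small NumPy sketch of its edge-feature construction, with toy sizes:

```python
import numpy as np

def edge_features(x, k):
    """For each point x_i, find its k nearest neighbors and build the
    edge feature (x_i, x_j - x_i) consumed by DGCNN's EdgeConv."""
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)             # exclude self-loops
    idx = np.argsort(dists, axis=1)[:, :k]      # (n, k) neighbor indices
    neighbors = x[idx]                          # (n, k, d)
    center = np.repeat(x[:, None, :], k, axis=1)
    # concatenate absolute position with relative offset to each neighbor
    return np.concatenate([center, neighbors - center], axis=-1)  # (n, k, 2d)

rng = np.random.default_rng(0)
feats = edge_features(rng.normal(size=(128, 3)), k=20)
print(feats.shape)  # (128, 20, 6)
```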

PointCNN: Convolution On $\mathcal{X}$-Transformed Points

yangyanli/PointCNN NeurIPS 2018

The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus we call it PointCNN.
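The key idea is learning an X-transform from each point's neighborhood coordinates, which weights and reorders the unordered neighbors into a canonical form before an ordinary convolution is applied. A rough NumPy sketch of that shape of computation (the weights here are random stand-ins for PointCNN's learned MLP, and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 8, 16                                  # K neighbors, d-dim features
neighbor_coords = rng.normal(size=(K, 3))     # local coordinates of neighbors
neighbor_feats = rng.normal(size=(K, d))      # their input features

# A small network (random weights below, learned in PointCNN) maps the K
# neighbor coordinates to a K-by-K transform applied to the neighbor features.
W1 = rng.normal(size=(3, K))
W2 = rng.normal(size=(K, K))
hidden = np.tanh(neighbor_coords @ W1)        # (K, K)
X = hidden @ W2                               # (K, K) learned X-transform
canonical = X @ neighbor_feats                # (K, d) transformed neighbor features
print(canonical.shape)  # (8, 16)
```

A standard convolution kernel is then applied to `canonical`, which is what makes the method a generalization of grid CNNs to point sets.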

Masked Autoencoders for Point Cloud Self-supervised Learning

Pang-Yatian/Point-MAE 13 Mar 2022

Then, a standard Transformer based autoencoder, with an asymmetric design and a shifting mask tokens operation, learns high-level latent features from unmasked point patches, aiming to reconstruct the masked point patches.
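The asymmetric design means only the unmasked (visible) patches enter the encoder, while the decoder reconstructs the masked ones. The random patch masking at the heart of the scheme can be sketched as follows (toy patch counts; the 0.6 ratio is illustrative, not necessarily the paper's):

```python
import numpy as np

def mask_patches(patches, mask_ratio, rng):
    """Randomly split point patches into visible and masked sets: the
    encoder sees only the visible patches, the masked ones become
    reconstruction targets."""
    n = patches.shape[0]
    n_mask = int(n * mask_ratio)
    perm = rng.permutation(n)
    mask_idx, keep_idx = perm[:n_mask], perm[n_mask:]
    return patches[keep_idx], patches[mask_idx]

rng = np.random.default_rng(0)
patches = rng.normal(size=(64, 32, 3))   # 64 patches of 32 points each
visible, masked = mask_patches(patches, mask_ratio=0.6, rng=rng)
print(visible.shape[0], masked.shape[0])  # 26 38
```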

Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training

zrrskywalker/point-m2ae 28 May 2022

By fine-tuning on downstream tasks, Point-M2AE achieves 86.43% accuracy on ScanObjectNN, +3.36% over the second-best, and largely benefits few-shot classification, part segmentation and 3D object detection with the hierarchical pre-training scheme.

Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?

runpeidong/act 16 Dec 2022

The success of deep learning heavily relies on large-scale data with comprehensive labels, which is more expensive and time-consuming to fetch in 3D compared to 2D images or natural languages.

Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining

qizekun/ReCon 5 Feb 2023

This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between the two paradigms.

Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models

zyh16143998882/iccv23-idpt ICCV 2023

To conquer this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
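Where static prompt tuning prepends the same learnable tokens to every input, an instance-aware variant generates the prompts from the instance's own features. A hedged NumPy sketch of that difference (a random linear head stands in for IDPT's learned prompt generator; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, n_prompts = 64, 32, 4            # n point tokens of dim d, 4 prompt tokens
tokens = rng.normal(size=(n, d))       # tokens from a frozen pre-trained encoder

# Instance-aware prompts: pool the instance's tokens, then map the pooled
# feature to the prompt tokens (random weights here stand in for the
# learned prompt generator).
W = rng.normal(size=(d, n_prompts * d)) * 0.01
instance_summary = tokens.mean(axis=0)                  # pooled instance feature
prompts = (instance_summary @ W).reshape(n_prompts, d)  # per-instance prompts
augmented = np.concatenate([prompts, tokens], axis=0)   # fed to the frozen model
print(augmented.shape)  # (68, 32)
```

Only the prompt generator is trained, so the pre-trained point cloud model itself stays frozen.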