3D Object Retrieval
9 papers with code • 2 benchmarks • 2 datasets
Source: He et al.
Latest papers (no code)
Diffusion Handles: Enabling 3D Edits for Diffusion Models by Lifting Activations to 3D
Our key insight is to lift diffusion activations for an object to 3D using a proxy depth, 3D-transform the depth and associated activations, and project them back to image space.
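The lift-transform-project idea can be sketched in a few lines of numpy. This is a minimal, hedged illustration of the geometry only (unproject with a proxy depth, rigidly transform, splat back to the image plane); the function name and nearest-neighbour splatting are my assumptions, not the paper's implementation.

```python
import numpy as np

def lift_transform_project(depth, activations, K, T):
    """Unproject each pixel to 3D using a proxy depth map, apply a rigid
    transform T to the points, and splat the associated activation back
    onto the image plane (nearest-neighbour, no occlusion handling).
    depth:       (H, W) proxy depth per pixel
    activations: (H, W, C) per-pixel activations
    K:           (3, 3) camera intrinsics
    T:           (4, 4) rigid 3D transform applied to the object
    """
    H, W, C = activations.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    # Lift: X = depth * K^-1 [u, v, 1]^T
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
    pts_t = (T @ pts_h.T).T[:, :3]            # 3D-transform the depth points
    # Project back to pixel coordinates
    proj = (K @ pts_t.T).T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    out = np.zeros_like(activations)
    acts = activations.reshape(-1, C)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    out[uv[valid, 1], uv[valid, 0]] = acts[valid]  # nearest-neighbour splat
    return out
```

With an identity transform this round-trips the activations unchanged; a non-identity `T` moves (edits) the object in image space.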
SCA-PVNet: Self-and-Cross Attention Based Aggregation of Point Cloud and Multi-View for 3D Object Retrieval
With deep features extracted from point clouds and multi-view images, we design two types of feature aggregation modules, namely the In-Modality Aggregation Module (IMAM) and the Cross-Modality Aggregation Module (CMAM), for effective feature fusion.
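The module names IMAM and CMAM come from the abstract; the sketch below is an assumption about their overall shape, not the paper's architecture: plain scaled dot-product attention within each modality, then across modalities, then mean-pooling into one fused descriptor.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: (Nq, d), (Nk, d), (Nk, d) -> (Nq, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def aggregate(point_feats, view_feats):
    """Self-attention within each modality (IMAM-like), cross-attention
    between modalities (CMAM-like), then pooling to a global descriptor."""
    p = attention(point_feats, point_feats, point_feats)   # in-modality: points
    v = attention(view_feats, view_feats, view_feats)      # in-modality: views
    p2v = attention(p, v, v)                               # points attend to views
    v2p = attention(v, p, p)                               # views attend to points
    return np.concatenate([p2v.mean(0), v2p.mean(0)])      # fused descriptor
```

The fused descriptor (here of size 2d) would then be compared by cosine or Euclidean distance for retrieval.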
SketchANIMAR: Sketch-based 3D Animal Fine-Grained Retrieval
To this end, we introduce a novel SHREC challenge track focused on retrieving relevant 3D animal models from a dataset using sketch queries, expediting access to 3D models through available sketches.
TextANIMAR: Text-based 3D Animal Fine-Grained Retrieval
Unlike previous SHREC challenge tracks, the proposed task is considerably more challenging, requiring participants to develop innovative approaches to tackle the problem of text-based retrieval.
SHREC'22 Track: Sketch-Based 3D Shape Retrieval in the Wild
We define two SBSR tasks and construct two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches in total.
LATFormer: Locality-Aware Point-View Fusion Transformer for 3D Shape Recognition
To investigate this, we propose a novel Locality-Aware Point-View Fusion Transformer (LATFormer) for 3D shape retrieval and classification.
Gram Regularization for Multi-view 3D Shape Retrieval
To bridge this gap, we propose a novel regularization term, Gram regularization, which strengthens the network's learning ability by encouraging the weight kernels to extract different information from the corresponding feature maps.
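One plausible form of such a diversity penalty can be sketched as follows. The exact formulation is an assumption on my part: flatten each weight kernel, build the Gram matrix of the normalized kernels, and penalize off-diagonal entries so that kernels are pushed toward extracting different information.

```python
import numpy as np

def gram_regularization(W):
    """Gram-style diversity penalty on a stack of K kernels.
    W: (K, ...) array of K weight kernels (e.g. conv filters).
    Returns the sum of squared off-diagonal cosine similarities."""
    F = W.reshape(W.shape[0], -1)                            # flatten each kernel
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    G = F @ F.T                                              # K x K Gram matrix
    off_diag = G - np.diag(np.diag(G))                       # ignore self-similarity
    return np.sum(off_diag ** 2)
```

Orthogonal kernels incur zero penalty, while identical kernels are maximally penalized; the term would be added to the task loss with a small weight.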
Method for the generation of depth images for view-based shape retrieval of 3D CAD model from partial point cloud
In this paper, we propose a viewpoint and image-resolution estimation method for view-based 3D shape retrieval from a point cloud query.
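The depth-image generation step that this pipeline relies on can be sketched with a minimal z-buffer render. This takes the viewpoint and resolution as given (the paper's contribution is estimating them), and the function name is a placeholder of mine.

```python
import numpy as np

def render_depth_image(points, K, res):
    """Minimal z-buffer depth rendering of a point cloud.
    points: (N, 3) points already in camera coordinates, z > 0
    K:      (3, 3) camera intrinsics
    res:    (H, W) output image resolution
    """
    H, W = res
    depth = np.full((H, W), np.inf)
    proj = (K @ points.T).T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    z = points[:, 2]
    for (u, v), zi in zip(uv, z):
        if 0 <= u < W and 0 <= v < H and zi < depth[v, u]:
            depth[v, u] = zi           # keep the nearest point per pixel
    depth[np.isinf(depth)] = 0.0       # empty pixels become background
    return depth
```

The resulting depth images can then be matched against the rendered views of CAD models in a view-based retrieval system.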
Multiple Discrimination and Pairwise CNN for View-based 3D Object Retrieval
However, most existing networks do not take into account the impact of multi-view image selection on network training, and using contrastive loss alone only forces same-class samples to be as close as possible.
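To make the limitation concrete, here is the classic pairwise contrastive loss the abstract alludes to (a standard formulation, not this paper's method): it only pulls same-class pairs together and pushes different-class pairs apart up to a margin, with no notion of which views were selected for training.

```python
import numpy as np

def contrastive_loss(f1, f2, same_class, margin=1.0):
    """Classic pairwise contrastive loss on two embeddings.
    f1, f2:     (d,) embedding vectors
    same_class: True if the pair shares a class label
    """
    d = np.linalg.norm(f1 - f2)
    if same_class:
        return 0.5 * d ** 2                    # pull positives together
    return 0.5 * max(0.0, margin - d) ** 2     # push negatives beyond the margin
```

Note that negatives already farther apart than the margin contribute nothing, and positives are driven to zero distance regardless of view quality, which motivates combining it with additional discrimination terms.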
Geometric Disentanglement for Generative Latent Shape Models
Representing 3D shape is a fundamental problem in artificial intelligence, which has numerous applications within computer vision and graphics.