DeepShape: Deep Learned Shape Descriptor for 3D Shape Matching and Retrieval

CVPR 2015 · Jin Xie, Yi Fang, Fan Zhu, Edward Wong

Complex geometric structural variations of 3D models usually pose great challenges for 3D shape matching and retrieval. In this paper, we propose a high-level shape feature learning scheme that extracts deformation-insensitive features via a novel discriminative deep auto-encoder. First, we develop a multiscale shape distribution to concisely describe the entire shape of a 3D object. Then, by imposing the Fisher discrimination criterion on the neurons in the hidden layer, we develop a discriminative deep auto-encoder for shape feature learning. Finally, the hidden-layer neurons from multiple discriminative auto-encoders are concatenated to form a shape descriptor for 3D shape matching and retrieval. The proposed method is evaluated on representative datasets with large geometric variations, i.e., the McGill and SHREC'10 ShapeGoogle datasets. Experimental results on these benchmarks demonstrate the effectiveness of the proposed method for 3D shape matching and retrieval.
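
The core idea of the discriminative auto-encoder can be illustrated with a short sketch: reconstruct the input shape distribution while regularizing the hidden codes so that within-class scatter is penalized and between-class scatter is encouraged (a Fisher-style criterion). This is a minimal illustration, not the authors' implementation; the layer sizes, the loss weight `alpha`, and the toy data are assumptions for demonstration only.

```python
# Minimal sketch (assumed, not the authors' code) of a single-layer
# auto-encoder with a Fisher-style discrimination penalty on its hidden codes.
import torch
import torch.nn as nn

class DiscriminativeAutoEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)           # hidden code (candidate shape feature)
        return h, self.decoder(h)     # reconstruction of the shape distribution


def fisher_penalty(h, labels):
    """Within-class scatter minus between-class scatter of the hidden codes."""
    mu = h.mean(dim=0)
    within, between = 0.0, 0.0
    for c in labels.unique():
        hc = h[labels == c]
        mu_c = hc.mean(dim=0)
        within = within + ((hc - mu_c) ** 2).sum()
        between = between + hc.shape[0] * ((mu_c - mu) ** 2).sum()
    return within - between


# Toy usage: x holds multiscale shape distributions, y holds class labels.
x = torch.rand(64, 128)                  # 64 shapes, 128-bin distributions (assumed)
y = torch.randint(0, 4, (64,))           # 4 hypothetical shape classes
model = DiscriminativeAutoEncoder(128, 32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 1e-3                             # weight of the Fisher term (assumed)

for _ in range(10):
    h, x_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + alpha * fisher_penalty(h, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Per the abstract, the final descriptor concatenates hidden-layer activations from several such auto-encoders (one per scale of the shape distribution); the sketch above trains only a single one.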


Datasets


McGill · SHREC'10 ShapeGoogle
