3D Shape Representation
39 papers with code • 0 benchmarks • 4 datasets
Benchmarks
These leaderboards are used to track progress in 3D Shape Representation.
Latest papers with no code
Adaptive Wavelet Transformer Network for 3D Shape Representation Learning
We present a novel method for 3D shape representation learning using multi-scale wavelet decomposition.
HybridSDF: Combining Deep Implicit Shapes and Geometric Primitives for 3D Shape Representation and Manipulation
Deep implicit surfaces excel at modeling generic shapes but do not always capture the regularities present in manufactured objects, a kind of structure that simple geometric primitives capture particularly well.
Multiresolution Deep Implicit Functions for 3D Shape Representation
To the best of our knowledge, MDIF is the first deep implicit function model that can simultaneously (1) represent different levels of detail and allow progressive decoding; (2) support both encoder-decoder inference and decoder-only latent optimization, serving multiple applications; and (3) perform detailed decoder-only shape completion.
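Deep implicit function models like MDIF represent a shape as a function that maps any 3D query point (optionally conditioned on a latent code) to a signed distance or occupancy value, so the surface can be queried at arbitrary resolution. A minimal, non-neural sketch of the idea, using an analytic sphere SDF as a stand-in for a learned decoder (all function names here are hypothetical illustrations, not from any of the listed papers):

```python
import math

def sphere_sdf(point, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside.
    Stands in for a learned implicit decoder f(point, latent) -> distance."""
    return math.dist(point, center) - radius

def occupancy(point, sdf=sphere_sdf):
    """Binary occupancy derived from the signed distance:
    a point is 'occupied' when it lies inside the surface (sdf < 0)."""
    return sdf(point) < 0.0

# The implicit representation can be queried at any continuous location:
print(sphere_sdf((0.0, 0.0, 0.0)))  # -1.0 (deep inside the unit sphere)
print(occupancy((2.0, 0.0, 0.0)))   # False (outside)
```

The surface itself is the zero level set {x : f(x) = 0}; in the learned setting the analytic formula above is replaced by a neural network, and meshes are extracted with marching cubes over a grid of such queries.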
A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction
Fusing medical images with the corresponding 3D shape representation can provide complementary information and microstructure details that improve operational performance and accuracy in brain surgery.
GarmentNets: Category-Level Pose Estimation for Garments via Canonical Space Shape Completion
By mapping the observed partial surface to the canonical space and completing it in this space, the output representation describes the garment's full configuration using a complete 3D mesh with the per-vertex canonical coordinate label.
Dual Mesh Convolutional Networks for Human Shape Correspondence
Convolutional networks have been extremely successful for regular data structures such as 2D images and 3D voxel grids.
DUDE: Deep Unsigned Distance Embeddings for Hi-Fidelity Representation of Complex 3D Surfaces
Several implicit 3D shape representation approaches using deep neural networks have been proposed, leading to significant improvements in both the quality of representations and their impact on downstream applications.
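Unsigned distance embeddings target surfaces where a signed distance is ill-defined: an open surface has no interior, so only the magnitude of the distance is meaningful. A small sketch of the distinction, using an open planar patch instead of a learned embedding (the function and its parameters are illustrative assumptions, not DUDE's actual model):

```python
import math

def unsigned_distance_to_patch(p, half=1.0):
    """Unsigned distance from point p to the open square patch
    {z = 0, |x| <= half, |y| <= half}. An open surface has no
    inside/outside, so no sign can be assigned -- the setting that
    unsigned distance representations are designed to handle."""
    x, y, z = p
    # Closest point on the patch: clamp (x, y) into the square, z = 0.
    cx = max(-half, min(half, x))
    cy = max(-half, min(half, y))
    return math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + z ** 2)

# Points on opposite sides of the patch get the same (positive) value:
print(unsigned_distance_to_patch((0.0, 0.0, 0.5)))   # 0.5
print(unsigned_distance_to_patch((0.0, 0.0, -0.5)))  # 0.5
```

Because the field is non-negative everywhere, surface extraction cannot rely on a sign change; methods in this family instead locate the surface near the minima of the unsigned field.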
Learning Occupancy Function from Point Clouds for Surface Reconstruction
Unlike the previous methods, which predict point occupancy with fully-connected multi-layer networks, we adapt the point cloud deep learning architecture, Point Convolution Neural Network (PCNN), to build our learning model.
Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization
We combine training data generating networks with bi-level optimization algorithms to obtain a complete framework for which all components can be jointly trained.
3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications
In the field of computer vision, unsupervised learning for 2D object generation has advanced rapidly in the past few years.