3D Shape Representation
37 papers with code • 0 benchmarks • 4 datasets
Benchmarks
No benchmark leaderboards currently track progress in 3D Shape Representation.
Latest papers
EXIM: A Hybrid Explicit-Implicit Representation for Text-Guided 3D Shape Generation
This paper presents a new text-guided technique for generating 3D shapes.
RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency
In this paper, we study the problem of continuous 3D shape representations.
ASUR3D: Arbitrary Scale Upsampling and Refinement of 3D Point Clouds using Local Occupancy Fields
Our proposed implicit occupancy representation enables efficient point classification, distinguishing surface points from non-surface points.
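The idea of using an occupancy field to separate surface points from non-surface points can be sketched with a toy analytic field; in the paper this field is a learned network rather than the closed-form sphere used here.

```python
import numpy as np

def occupancy(points, radius=1.0):
    """Toy occupancy field: 1 inside a sphere of the given radius,
    0 outside. A learned occupancy network would replace this rule."""
    return (np.linalg.norm(points, axis=-1) <= radius).astype(np.float32)

def classify_surface(points, radius=1.0, eps=0.05):
    """Label points whose distance to the implicit surface is below eps
    as surface points; everything else is treated as non-surface."""
    d = np.abs(np.linalg.norm(points, axis=-1) - radius)
    return d < eps

pts = np.array([[1.0, 0.0, 0.0],   # exactly on the surface
                [0.5, 0.0, 0.0],   # interior point
                [2.0, 0.0, 0.0]])  # exterior point
print(occupancy(pts))          # [1. 1. 0.]
print(classify_surface(pts))   # [ True False False]
```

The `eps` band around the level set is an illustrative choice; in practice the threshold depends on point-cloud density and noise.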
On the Localization of Ultrasound Image Slices within Point Distribution Models
We demonstrate that our multi-modal registration framework can localize images on the 3D surface topology of a patient-specific organ and the mean shape of a statistical shape model (SSM).
Unpaired Multi-domain Attribute Translation of 3D Facial Shapes with a Square and Symmetric Geometric Map
We propose a learning framework for 3D facial attribute translation to relieve these limitations.
3D Semantic Subspace Traverser: Empowering 3D Generative Model with Shape Editing Capability
Our method utilizes implicit functions as the 3D shape representation and combines a novel latent-space GAN with a linear subspace model to discover semantic dimensions in the local latent space of 3D shapes.
3D VR Sketch Guided 3D Shape Prototyping and Exploration
3D shape modeling is labor-intensive, time-consuming, and requires years of expertise.
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
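Aligning shape embeddings with CLIP's text embeddings enables zero-shot use: a shape is labeled by the text prompt with the highest cosine similarity. A minimal sketch, with random vectors standing in for a point-cloud encoder and CLIP's text encoder:

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two embedding matrices."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Stand-ins for real encoders: OpenShape trains a 3D encoder whose
# outputs live in the same space as CLIP text embeddings.
rng = np.random.default_rng(0)
shape_emb = rng.normal(size=(4, 512))  # embeddings of 4 point clouds
text_emb = rng.normal(size=(3, 512))   # embeddings of 3 label prompts

scores = cosine_sim(shape_emb, text_emb)  # (4, 3) similarity matrix
pred = scores.argmax(axis=1)              # zero-shot label per shape
print(scores.shape, pred.shape)
```

The same similarity matrix supports the downstream uses mentioned above, e.g. retrieving shapes for a caption or conditioning a CLIP-based image generator on a shape embedding.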
3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models
We introduce 3DShape2VecSet, a novel shape representation for neural fields designed for generative diffusion models.
SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
We further complement the evaluation metrics of 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes.
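The FID between two sets of image features is the Fréchet distance between Gaussians fitted to each set. A simplified sketch under a diagonal-covariance assumption (the standard metric uses full covariances of Inception features, here extracted from shading images of the shapes):

```python
import numpy as np

def fid_diagonal(feat_a, feat_b):
    """Fréchet distance between Gaussians fitted to two feature sets,
    simplified by assuming diagonal covariances. With full covariances
    the trace term becomes Tr(S_a + S_b - 2*(S_a S_b)^(1/2))."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    var_a, var_b = feat_a.var(axis=0), feat_b.var(axis=0)
    mean_term = np.sum((mu_a - mu_b) ** 2)
    cov_term = np.sum(var_a + var_b - 2.0 * np.sqrt(var_a * var_b))
    return float(mean_term + cov_term)

rng = np.random.default_rng(1)
real_feats = rng.normal(size=(256, 64))
fake_feats = rng.normal(loc=0.5, size=(256, 64))
print(fid_diagonal(real_feats, real_feats))  # identical sets -> 0.0
print(fid_diagonal(real_feats, fake_feats))  # positive for a shifted set
```

Lower scores indicate that the rendered shading images of generated shapes are statistically closer to those of real shapes.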