We present a prior for manifold-structured data, such as surfaces of 3D shapes, in which a deep neural network is optimized by gradient descent from a random initialization to reconstruct a target shape.
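The idea of fitting a randomly initialized network to a single target by gradient descent can be illustrated with a minimal sketch. This is not the paper's architecture: the tiny MLP, its sizes, the random per-point codes, and the 1D sine "target shape" are all stand-in assumptions chosen only to show the optimization loop.

```python
import numpy as np

# Deep-prior-style fitting sketch (illustrative assumptions, not the paper's model):
# a small randomly initialized network is optimized so its output matches a target.
rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 2 * np.pi, 32))   # stand-in for a target shape

z = rng.normal(size=(32, 8))                     # fixed random input codes
W1 = rng.normal(scale=0.1, size=(8, 16))         # random initialization
W2 = rng.normal(scale=0.1, size=(16, 1))

def forward():
    h = np.tanh(z @ W1)                          # hidden activations
    return h, (h @ W2).ravel()                   # predicted signal

_, out0 = forward()
mse_init = float(np.mean((out0 - target) ** 2))  # error before fitting

lr = 0.05
for _ in range(500):
    h, out = forward()
    err = out - target
    # manual backprop of the mean squared reconstruction error
    g_out = 2 * err[:, None] / len(target)
    gW2 = h.T @ g_out
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    gW1 = z.T @ g_h
    W1 -= lr * gW1                               # gradient-descent updates
    W2 -= lr * gW2

_, out = forward()
mse_final = float(np.mean((out - target) ** 2))  # error after fitting
```

The reconstruction error after optimization is lower than at the random initialization, which is the mechanism by which the network itself acts as a prior on the fitted signal.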
We present a generative model to synthesize 3D shapes as sets of handles -- lightweight proxies that approximate the original 3D shape -- for applications in interactive editing, shape parsing, and building compact 3D representations.
The problems of shape classification and part segmentation from 3D point clouds have garnered increasing attention in the last few years.
We investigate the problem of reconstructing shapes from noisy and incomplete projections in the presence of viewpoint uncertainties.
To this end, we present new differentiable projection operators that can be used by PrGAN to learn better 3D generative models.
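A differentiable projection operator in this spirit can be sketched as follows. The specific operator below (the probability that a ray passes through at least one occupied voxel) is an assumed illustrative choice, not necessarily the exact operator used by PrGAN; what matters is that it is smooth in the voxel occupancies, so gradients can flow from 2D projections back to the 3D representation.

```python
import numpy as np

def soft_silhouette(V, axis=2):
    """Project soft occupancies V (H, W, D) in [0, 1] to a 2D silhouette.

    Each output pixel is the probability that a ray along `axis` hits at
    least one occupied voxel: 1 - prod(1 - v). This is differentiable in V,
    unlike a hard max/threshold projection.
    """
    return 1.0 - np.prod(1.0 - V, axis=axis)

# A partially occupied slab projects to a soft silhouette.
V = np.zeros((4, 4, 4))
V[1:3, 1:3, 2] = 0.9
sil = soft_silhouette(V)
```

Here `sil[1, 1]` equals 0.9 (one voxel of occupancy 0.9 along that ray) while empty rays project to 0, and small changes in any voxel produce small changes in the silhouette.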
We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations.
The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints.
We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework.
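The generator side of such a framework can be sketched in a few lines. All names and sizes below are assumptions for illustration (a linear shape basis, a one-layer generator); in the adversarial setup, the generator's weights would be trained against a discriminator so that the sampled coefficients match the data distribution.

```python
import numpy as np

# Hedged sketch (assumed architecture, not the paper's): a generator maps a
# latent code to shape coefficients, which weight a fixed linear basis.
rng = np.random.default_rng(1)

basis = rng.normal(size=(100, 10))   # assumed linear shape basis: 100-dim shapes, 10 coefficients
Wg = rng.normal(scale=0.1, size=(4, 10))

def generator(z):
    """Map a latent code z (4,) to 10 bounded shape coefficients."""
    return np.tanh(z @ Wg)

z = rng.normal(size=4)               # sample a latent code
coeffs = generator(z)                # generated shape coefficients
shape = basis @ coeffs               # corresponding shape sample
```

Sampling different latent codes yields different coefficient vectors, and hence different shapes, which is the distribution the adversarial objective shapes during training.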
In this paper, we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints.