We present As-Plausible-as-Possible (APAP), a mesh deformation technique that leverages 2D diffusion priors to preserve the plausibility of a mesh under user-controlled deformation.
Deep generative models of 3D shapes often feature continuous latent spaces that can, in principle, be used to explore potential variations starting from a set of input shapes.
The recent proliferation of 3D content that can be consumed on hand-held devices necessitates efficient tools for transmitting large geometric data, e.g., 3D meshes, over the Internet.
Our key observation is that Jacobians are a representation that favors smoother, large deformations, leading to a global relation between vertices and pixels, and avoiding localized noisy gradients.
We present a neural technique for learning to select a local sub-region around a point which can be used for mesh parameterization.
Assisting design with data-driven machine learning methods is hampered by a lack of labeled data in CAD's native format: the parametric boundary representation (B-Rep).
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We demonstrate that this improves the quality of the learned surface representation, as well as its consistency in a collection of related shapes.
This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network, enabling training and evaluating over heterogeneous collections of meshes that do not share a triangulation, as well as producing highly detail-preserving maps whose accuracy exceeds current state of the art.
Our pipeline and architecture are designed so that disentanglement of global geometry from local details is accomplished through optimization, in a completely unsupervised manner.
We present an end-to-end method to learn the proximal operator of a family of training problems so that multiple local minima can be quickly obtained from initial guesses by iterating the learned operator, emulating the proximal-point algorithm that has fast convergence.
Möbius transformations play an important role in both geometry and spherical image processing: they are the group of conformal automorphisms of 2D surfaces and the spherical equivalent of homographies.
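As a self-contained illustration (a minimal sketch, not code from the paper), a Möbius transformation of the extended complex plane is f(z) = (az + b)/(cz + d) with ad − bc ≠ 0, and composing two such transformations corresponds to multiplying their 2×2 coefficient matrices:

```python
import numpy as np

def mobius(M, z):
    """Apply the Möbius transformation encoded by the 2x2 matrix
    M = [[a, b], [c, d]] to a complex number z: f(z) = (a*z + b) / (c*z + d)."""
    a, b = M[0]
    c, d = M[1]
    return (a * z + b) / (c * z + d)

# Two transformations; applying one after the other equals applying their matrix product.
M1 = np.array([[1, 1j], [0, 1]], dtype=complex)   # translation z -> z + i
M2 = np.array([[0, 1], [-1, 0]], dtype=complex)   # inversion z -> -1/z

z = 0.5 + 0.25j
composed = mobius(M1, mobius(M2, z))
via_product = mobius(M1 @ M2, z)
assert np.isclose(composed, via_product)
```

The matrix view is what makes the group structure of Möbius transformations convenient to work with in practice.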
The key to making these correspondences semantically meaningful is to guarantee that the metric tensors computed at corresponding points are as similar as possible.
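To make the notion of "similar metric tensors" concrete (a hypothetical sketch, not the paper's implementation), the first fundamental form of a triangle can be computed from its Jacobian, and the Frobenius distance between the metrics of two corresponding triangles measures how far the correspondence is from a local isometry:

```python
import numpy as np

def metric_tensor(p0, p1, p2):
    """First fundamental form of a triangle embedded in 3D: with Jacobian
    J = [p1 - p0, p2 - p0], the metric is G = J^T J (encodes edge lengths and angle)."""
    J = np.column_stack([p1 - p0, p2 - p0])
    return J.T @ J

def metric_distortion(tri_a, tri_b):
    """Frobenius distance between the metric tensors of two corresponding
    triangles; zero exactly when the correspondence preserves the metric."""
    return np.linalg.norm(metric_tensor(*tri_a) - metric_tensor(*tri_b))

# A rigidly moved copy of a triangle has an identical metric (distortion 0).
tri = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 2., 0.])]
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90-degree rotation
moved = [R @ p + np.array([3., 1., 0.]) for p in tri]
assert np.isclose(metric_distortion(tri, moved), 0.0)
```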
To train our system, we compiled the first large scale dataset of BREP CAD assemblies, which we are releasing along with benchmark mate prediction tasks.
We propose a method for the unsupervised reconstruction of a temporally-coherent sequence of surfaces from a sequence of time-evolving point clouds, yielding dense, semantically meaningful correspondences between all keyframes.
We present a novel surface convolution operator acting on vector fields that is based on a simple observation: instead of combining neighboring features with respect to a single coordinate parameterization defined at a given point, we have every neighbor describe the position of the point within its own coordinate frame.
In fact, we use the embedding space to guide the shape pairs used to train the deformation module, so that it invests its capacity in learning deformations between meaningful shape pairs.
We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections.
A point cloud can be rotated in infinitely many ways, which provides a rich label-free source for self-supervision.
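A minimal sketch of this self-supervision signal (assumed setup, not the paper's pipeline): sample a random rotation, apply it to the cloud, and treat the rotation itself as a free label a network could be trained to predict:

```python
import numpy as np

def random_rotation(rng):
    """Random 3D rotation via QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))          # fix column signs
    if np.linalg.det(Q) < 0:          # ensure det = +1 (rotation, not reflection)
        Q[:, 0] *= -1
    return Q

def make_self_supervised_pair(points, rng):
    """Rotate an (N, 3) point cloud by a random rotation; the rotation is the
    label, obtained with no human annotation."""
    R = random_rotation(rng)
    return points @ R.T, R

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))
rotated, R = make_self_supervised_pair(cloud, rng)

# On clean data the label is exactly recoverable (least-squares / Procrustes),
# which is what makes it a well-posed pretext task.
R_est = np.linalg.lstsq(cloud, rotated, rcond=None)[0].T
assert np.allclose(R_est, R)
```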
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations: (i) an explicit representation via an atlas, i.e., embeddings of 2D domains into 3D; (ii) an implicit-function representation, i.e., a scalar function over the 3D volume, with its levels denoting surfaces.
During inference, our method takes a coarse triangle mesh as input and recursively subdivides it to a finer geometry by applying the fixed topological updates of Loop Subdivision, but predicting vertex positions using a neural network conditioned on the local geometry of a patch.
Affinity graphs are widely used in deep architectures, including graph convolutional neural networks and attention networks.
The goal of our method is to warp a source shape to match the general structure of a target shape, while preserving the surface details of the source.
We capture these subtle changes by applying an image translation network to refine the mesh rendering, providing an end-to-end model to generate new animations of a character with high visual quality.
We propose to represent shapes as the deformation and combination of learnable elementary 3D structures, which are primitives resulting from training over a collection of shapes.
Many tasks in graphics and vision demand machinery for converting shapes into consistent representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage.
In this paper, we address the problem of 3D object mesh reconstruction from RGB videos.
Unfortunately, only a small fraction of shapes in 3D repositories are labeled with physical materials, posing a challenge for learning methods.
Modeling relations between components of 3D objects is essential for many geometry editing tasks.
By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template.
We introduce a method for learning to generate the surface of 3D shapes.
We test our method on segmentation benchmarks and show that even with weak supervision of whole shape tags, our method can infer meaningful semantic regions, without ever observing shape segmentations.
The combinatorial nature of part arrangements poses another challenge, since the retrieval network is not a function: several complements can be appropriate for the same input.
We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching.