Neural implicit representations, which encode a surface as the level set of a neural network applied to spatial coordinates, have proven to be remarkably effective for optimizing, compressing, and generating 3D geometry.
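The level-set idea above can be sketched in a few lines. This is a minimal illustration with random, untrained weights (not any specific paper's architecture): an MLP maps a 3D coordinate to a scalar, and the encoded surface is the zero level set f(x) = 0.

```python
import numpy as np

# Illustrative only: a tiny random-weight MLP mapping a 3D point to a scalar.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def implicit_f(xyz):
    """Evaluate the implicit function at points of shape (N, 3)."""
    h = np.maximum(xyz @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2                   # scalar field value per point

points = rng.uniform(-1, 1, size=(8, 3))
values = implicit_f(points)              # shape (8, 1)
near_surface = np.abs(values) < 1e-3     # samples close to the zero level set
```

In a trained network the scalar would approximate a signed distance, so `near_surface` would pick out points lying on the encoded geometry.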
To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape assembly data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset of 19,226 shape assembly pairs covering numerous object meshes and diverse cut types.
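The core of this self-supervised recipe can be illustrated with a planar cut over surface samples (a simplification — the pipeline described above cuts meshes and uses diverse cut types, not just planes): splitting a shape's points with a random plane yields two parts whose ground-truth assembly is the original shape.

```python
import numpy as np

# Hedged sketch of the data-generation idea: random planar cut through the origin.
rng = np.random.default_rng(1)
points = rng.uniform(-1, 1, size=(1000, 3))   # stand-in for mesh surface samples

normal = rng.normal(size=3)
normal /= np.linalg.norm(normal)               # random cutting-plane normal
side = points @ normal > 0.0                   # which side of the plane?

part_a, part_b = points[side], points[~side]   # the two pieces of the mating pair
```

Every pair generated this way comes with a free supervisory signal: the relative pose that reassembles `part_a` and `part_b` back into the original object.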
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality.
Our method does not require a particular type of rig and adds secondary effects to skeletal animations, cage-based deformations, wire deformers, motion capture data, and rigid-body simulations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
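The parameterization above can be sketched in miniature (simplified, with placeholder values rather than network predictions): geometry is a fixed-topology tetrahedral grid whose vertex positions are deformed and whose tetrahedra carry occupancy values; in DefTet both quantities are predicted by a network.

```python
import numpy as np

# Simplified stand-in for a deformable tetrahedral representation:
# one tetrahedron, a small deformation, and a per-tet occupancy.
rng = np.random.default_rng(3)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tets = np.array([[0, 1, 2, 3]])                 # fixed connectivity

offsets = 0.05 * rng.normal(size=verts.shape)   # would be network-predicted
occupancy = np.array([0.9])                     # would be network-predicted

deformed = verts + offsets                      # deformed vertex positions
inside = occupancy > 0.5                        # tets kept in the final shape
```

Keeping the connectivity fixed while deforming vertices and thresholding occupancy is what lets the representation capture both shape and topology with a regular, grid-friendly structure.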
Many prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input.
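The latent-conditioning pattern just described amounts to concatenating a per-shape code with the spatial coordinate before feeding the network, so a single network can represent many shapes. A minimal sketch with illustrative (untrained, linear) weights:

```python
import numpy as np

# Illustrative latent-encoded implicit: input is [coordinate ; shape code z].
rng = np.random.default_rng(2)
latent_dim = 16
W, b = rng.normal(size=(3 + latent_dim, 1)), np.zeros(1)

def latent_f(xyz, z):
    """Evaluate f([x; z]) for points (N, 3) and one shape code z of shape (latent_dim,)."""
    z_tiled = np.broadcast_to(z, (xyz.shape[0], latent_dim))
    return np.concatenate([xyz, z_tiled], axis=1) @ W + b

z_shape = rng.normal(size=latent_dim)                    # code for one shape
vals = latent_f(rng.uniform(size=(4, 3)), z_shape)       # shape (4, 1)
```

Changing `z_shape` changes the encoded geometry without changing the network weights.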
During inference, our method takes a coarse triangle mesh as input and recursively subdivides it into finer geometry by applying the fixed topological updates of Loop subdivision, while predicting vertex positions with a neural network conditioned on the local geometry of a patch.
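The recursive pattern above separates a fixed topological rule from a learned positional one. The sketch below implements Loop subdivision's topology (each triangle splits into four via edge midpoints) with a placeholder predictor; in the method described, a network conditioned on the local patch would replace `predict_position`, which here simply returns the midpoint.

```python
import numpy as np

def predict_position(v0, v1):
    # A learned, patch-conditioned model would go here; midpoint is the stand-in.
    return 0.5 * (v0 + v1)

def subdivide(verts, faces):
    """One subdivision step: split every triangle into four."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    midpoint = {}                      # edge -> new vertex index
    new_faces = []
    for a, b, c in faces:
        ids = []
        for u, v in ((a, b), (b, c), (c, a)):
            key = (min(u, v), max(u, v))
            if key not in midpoint:    # create each edge vertex once
                midpoint[key] = len(verts)
                verts.append(predict_position(verts[u], verts[v]))
            ids.append(midpoint[key])
        ab, bc, ca = ids
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_faces

verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
verts, faces = subdivide(verts, [(0, 1, 2)])   # 1 triangle -> 4, 3 verts -> 6
```

Applying `subdivide` repeatedly gives the recursive coarse-to-fine refinement, with all geometric detail coming from the position predictor.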
In this technical report, we investigate efficient representations of articulated objects (e.g., human bodies), which is an important problem in computer vision and graphics.
Many machine learning models operate on images, but ignore the fact that images are 2D projections formed by 3D geometry interacting with light, in a process called rendering.
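The projection this sentence refers to can be made concrete with the simplest camera model — a pinhole camera mapping a 3D point to 2D image coordinates by perspective division (a generic sketch, not any particular paper's renderer):

```python
import numpy as np

def project(points_3d, focal=1.0):
    """Project (N, 3) camera-space points (z > 0) to (N, 2) image coordinates."""
    xy = points_3d[:, :2]
    z = points_3d[:, 2:3]
    return focal * xy / z   # perspective division: farther points shrink

pts = np.array([[0.5, 0.5, 1.0], [1.0, 2.0, 2.0]])
uv = project(pts)           # [[0.5, 0.5], [0.5, 1.0]]
```

Rendering composes this geometric projection with light transport, and it is exactly this composition that most image-based models ignore.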
As such, we propose the direct perturbation of the physical parameters that underlie image formation: lighting and geometry.
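Perturbing a lighting parameter can be illustrated with Lambertian shading (a generic example, not the specific perturbation scheme proposed above): shade a fixed surface normal under a light direction, then nudge the light and observe the intensity change.

```python
import numpy as np

def shade(normal, light):
    """Diffuse (Lambertian) intensity of a unit normal under a light direction."""
    light = light / np.linalg.norm(light)
    return max(float(normal @ light), 0.0)

normal = np.array([0.0, 0.0, 1.0])
base = shade(normal, np.array([0.0, 0.0, 1.0]))        # light head-on
perturbed = shade(normal, np.array([0.3, 0.0, 1.0]))   # light tilted: dimmer
```

Because intensity is a differentiable function of the light direction, such physically grounded perturbations can probe or attack an image model along meaningful axes rather than raw pixels.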