3D Shape Representation
37 papers with code • 0 benchmarks • 4 datasets
Benchmarks
These leaderboards are used to track progress in 3D Shape Representation
Latest papers
Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds
However, the local codes are constrained to discrete, regular positions such as grid points, which makes the code positions difficult to optimize and limits their representation ability.
PolyNet: Polynomial Neural Network for 3D Shape Recognition with PolyShape Representation
3D shape representation and its processing have substantial effects on 3D shape recognition.
3DIAS: 3D Shape Reconstruction with Implicit Algebraic Surfaces
Our experiments demonstrate the superiority of our method in representation power compared to state-of-the-art methods for 3D shape reconstruction from a single RGB image.
Learning Canonical View Representation for 3D Shape Recognition with Arbitrary Views
In this way, each 3D shape with arbitrary views is represented by a fixed number of canonical view features, which are further aggregated to generate a rich and robust 3D shape representation for shape recognition.
ACORN: Adaptive Coordinate Networks for Neural Scene Representation
Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest.
Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations
3D object detection has recently become popular due to many applications in robotics, augmented reality, autonomy, and image retrieval.
Deep Implicit Templates for 3D Shape Representation
Deep implicit functions (DIFs), a kind of 3D shape representation, are gaining popularity in the 3D vision community due to their compactness and strong representation power.
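The core idea behind implicit shape representations can be sketched in a few lines: a shape is encoded as the zero level set of a continuous function f(x) that can be queried at arbitrary 3D points, with no mesh or voxel grid. A minimal illustration, using an analytic sphere SDF as a stand-in for a learned network:

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance to a sphere centred at the origin.
    Negative inside, positive outside; the shape itself is the
    zero level set {x : f(x) = 0}."""
    return np.linalg.norm(points, axis=-1) - radius

# Query arbitrary 3D points -- no discretisation needed.
pts = np.array([[0.0, 0.0, 0.0],   # centre (inside)
                [2.0, 0.0, 0.0],   # outside
                [1.0, 0.0, 0.0]])  # exactly on the surface
d = sphere_sdf(pts)        # -> [-1.  1.  0.]
inside = d < 0             # -> [ True False False]
```

In a DIF, the analytic function above is replaced by a neural network whose output approximates the signed distance (or occupancy) of a learned shape.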
On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes
Many prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input.
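The distinction between the two encodings can be sketched with toy networks (hypothetical random-weight MLPs for illustration only, not the paper's architecture): a latent-encoded implicit shares one network across shapes and conditions it on a per-shape latent vector, while a weight-encoded implicit stores the shape entirely in a dedicated network's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- latent-encoded implicit (toy sketch) ---
# One shared network; a per-shape latent code z is concatenated
# with the query point (x, y, z) at the input.
W1 = rng.normal(size=(8 + 3, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1));     b2 = np.zeros(1)

def latent_implicit(z, x):
    h = np.tanh(np.concatenate([z, x]) @ W1 + b1)
    return (h @ W2 + b2)[0]

# --- weight-encoded implicit (toy sketch) ---
# The shape lives entirely in this network's weights;
# the input is just the query point.
V1 = rng.normal(size=(3, 32)); c1 = np.zeros(32)
V2 = rng.normal(size=(32, 1)); c2 = np.zeros(1)

def weight_implicit(x):
    h = np.tanh(x @ V1 + c1)
    return (h @ V2 + c2)[0]

z = rng.normal(size=8)          # latent code for one specific shape
x = np.array([0.1, -0.2, 0.3])  # 3D query point
d_latent = latent_implicit(z, x)
d_weight = weight_implicit(x)
```

The trade-off the paper examines: latent encoding amortizes one network over many shapes, while weight encoding dedicates all network capacity to a single shape.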
Novel View Synthesis from Single Images via Point Cloud Transformation
In this paper, the argument is made that for true novel view synthesis of objects, where the object can be synthesized from any viewpoint, an explicit 3D shape representation is desired.
GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision
GSNet utilizes a unique four-way feature extraction and fusion scheme and directly regresses 6DoF poses and shapes in a single forward pass.