3D Shape Reconstruction from Videos
5 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in 3D Shape Reconstruction from Videos.
Most implemented papers
Reconstructing Animatable Categories from Videos
Building animatable 3D models is challenging due to the need for 3D scans, laborious registration, and manual rigging, which are difficult to scale to arbitrary categories.
LASR: Learning Articulated Shape Reconstruction from a Monocular Video
Remarkable progress has been made in 3D reconstruction of rigid structures from a video or a collection of images.
ViSER: Video-Specific Surface Embeddings for Articulated 3D Shape Reconstruction
The surface embeddings are implemented as coordinate-based MLPs that are fit to each video via consistency and contrastive reconstruction losses. Experimental results show that ViSER compares favorably against prior work on challenging videos of humans with loose clothing and unusual poses, as well as animal videos from DAVIS and YTVOS.
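A coordinate-based MLP, as used for ViSER's surface embeddings, is just a small network that maps 3D surface coordinates to feature vectors, which can then be matched against pixel features. The minimal NumPy sketch below shows that mapping only (the architecture sizes, initialization, and L2-normalization are illustrative assumptions, not ViSER's actual configuration, and no training losses are implemented):

```python
import numpy as np

def init_mlp(sizes, seed=0):
    # Random He-style init for a small MLP, e.g. sizes = [3, 64, 64, 16]
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def embed(params, xyz):
    # Map 3D surface coordinates (N, 3) to embedding vectors (N, D)
    h = xyz
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    # L2-normalize so correspondences can be scored by cosine similarity
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

params = init_mlp([3, 64, 64, 16])
pts = np.random.default_rng(1).normal(size=(100, 3))  # sampled surface points
emb = embed(params, pts)
print(emb.shape)  # (100, 16)
```

In the actual method these embeddings are optimized per video so that reprojected surface points agree with per-pixel features across frames.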
BANMo: Building Animatable 3D Neural Models from Many Casual Videos
Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels and an articulated model.
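The first ingredient, blend skinning over articulated bones, can be sketched concretely: each canonical point is deformed by a weighted mixture of per-bone rigid transforms. This is a generic linear-blend-skinning sketch, not BANMo's exact formulation (BANMo additionally learns the bones and skinning weights within a neural field):

```python
import numpy as np

def blend_skinning(points, bone_R, bone_t, weights):
    """Linear blend skinning: each point is moved by a weighted mix of
    per-bone rigid transforms (weights sum to 1 over the bones).
    points: (N, 3), bone_R: (B, 3, 3), bone_t: (B, 3), weights: (N, B)"""
    # Apply every bone transform to every point: result (N, B, 3)
    per_bone = np.einsum('bij,nj->nbi', bone_R, points) + bone_t[None]
    # Blend the per-bone results with the skinning weights: (N, 3)
    return np.einsum('nb,nbi->ni', weights, per_bone)

R = np.stack([np.eye(3), np.eye(3)])        # two bones, no rotation
t = np.array([[0., 0., 0.], [1., 0., 0.]])  # bone 1 translates along x
pts = np.zeros((1, 3))                      # one canonical point at origin
w = np.array([[0.5, 0.5]])                  # equal influence from both bones
deformed = blend_skinning(pts, R, t, w)     # point moves to [0.5, 0, 0]
```

Because the whole operation is differentiable, it composes cleanly with the NeRF-style volumetric rendering named in (2) for gradient-based optimization.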
DeciWatch: A Simple Baseline for 10x Efficient 2D and 3D Pose Estimation
This paper proposes DeciWatch, a simple baseline framework for video-based 2D/3D human pose estimation that achieves a 10x efficiency improvement over existing works without any performance degradation.
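The efficiency gain comes from a sample-then-recover pipeline: run the expensive pose estimator on only a sparse subset of frames (e.g. every 10th), then reconstruct the full sequence. A minimal NumPy sketch of that pipeline is below; note that DeciWatch uses learned transformer networks to denoise and recover the dense poses, whereas the linear interpolation here is just an illustrative stand-in:

```python
import numpy as np

def recover_dense_poses(poses_sampled, sample_idx, total_frames):
    """Recover per-frame poses from sparse estimates.
    poses_sampled: (S, D) poses at the S sampled frames
    sample_idx: (S,) frame indices that were actually estimated
    Returns (total_frames, D) dense poses via linear interpolation
    (a stand-in for DeciWatch's learned recovery network)."""
    frames = np.arange(total_frames)
    return np.stack(
        [np.interp(frames, sample_idx, poses_sampled[:, d])
         for d in range(poses_sampled.shape[1])], axis=1)

# Estimate only 3 of 21 frames (every 10th), then fill in the rest
sample_idx = np.array([0, 10, 20])
poses_sampled = np.array([[0.0], [10.0], [20.0]])  # 1-D toy joint coordinate
dense = recover_dense_poses(poses_sampled, sample_idx, 21)
print(dense.shape)  # (21, 1)
```

Since human motion is temporally smooth, the recovered in-between poses can match per-frame estimation quality at a fraction of the compute.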