Surface Normal Estimation
45 papers with code • 2 benchmarks • 5 datasets
Most implemented papers
Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture
In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling.
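As a rough illustration of the shared-backbone idea, the sketch below pairs one encoder with separate depth, normal, and semantic heads; the layer sizes and module names are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: a shared feature extractor feeding three task heads
# (depth, surface normals, semantics). Sizes are assumptions for illustration.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        # Shared encoder (stand-in for the paper's multi-scale stack).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One lightweight head per task.
        self.depth_head = nn.Conv2d(128, 1, 3, padding=1)                # per-pixel depth
        self.normal_head = nn.Conv2d(128, 3, 3, padding=1)               # per-pixel (nx, ny, nz)
        self.semantic_head = nn.Conv2d(128, num_classes, 3, padding=1)   # per-pixel class logits

    def forward(self, x):
        feats = self.encoder(x)
        normals = nn.functional.normalize(self.normal_head(feats), dim=1)  # unit-length normals
        return self.depth_head(feats), normals, self.semantic_head(feats)
```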
Maintaining Natural Image Statistics with the Contextual Loss
Maintaining natural image statistics is a crucial factor in the restoration and generation of realistic-looking images.
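A minimal sketch of a contextual-style loss, assuming flattened deep features as input; the bandwidth and the matching direction are simplifications rather than the paper's exact formulation.

```python
# Hedged sketch: match each feature to its most similar counterpart via
# softmax-normalized cosine affinities, then penalize weak best matches.
import torch

def contextual_loss(x: torch.Tensor, y: torch.Tensor, h: float = 0.5, eps: float = 1e-5):
    # x: (N, C), y: (M, C) feature vectors, e.g. flattened VGG activations (assumption).
    x = torch.nn.functional.normalize(x, dim=1)
    y = torch.nn.functional.normalize(y, dim=1)
    d = 1.0 - x @ y.t()                                      # cosine distances, (N, M)
    d_rel = d / (d.min(dim=1, keepdim=True).values + eps)    # distances relative to best match
    w = torch.exp((1.0 - d_rel) / h)                         # bandwidth h is an assumption
    cx = w / w.sum(dim=1, keepdim=True)                      # row-wise affinities
    return -torch.log(cx.max(dim=1).values.mean() + eps)
```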
Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres
We observe that many continuous-output problems in computer vision are naturally contained in closed geometrical manifolds, such as the Euler angles in viewpoint estimation or the normals in surface normal estimation.
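The core constraint can be illustrated as projecting unconstrained predictions onto the unit sphere; this plain L2 normalization is a stand-in for the paper's spherical regression mapping, not a reproduction of it.

```python
# Hedged sketch: keep regression outputs (e.g. surface normals) on S^(n-1)
# by rescaling each prediction to unit length.
import torch

def project_to_sphere(raw: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale each row vector to unit length, i.e. onto the unit n-sphere."""
    return raw / raw.norm(dim=-1, keepdim=True).clamp_min(eps)

raw = torch.randn(4, 3)            # unconstrained 3-D predictions
normals = project_to_sphere(raw)   # valid unit surface normals
print(normals.norm(dim=-1))        # all ones
```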
Deep Iterative Surface Normal Estimation
This results in a state-of-the-art surface normal estimator that is robust to noise, outliers, and point-density variation, preserves sharp features through anisotropic kernels, and achieves equivariance through a local quaternion-based spatial transformer.
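A hedged sketch of the classical step such a method iterates on: weighted plane fitting over a point's neighbours, with a simple Gaussian reweighting standing in for the learned, graph-based weights of the paper.

```python
# Hedged sketch: iteratively reweighted plane fitting for one query point.
# The normal is the eigenvector of the smallest eigenvalue of the weighted
# covariance; points far from the current plane are down-weighted.
import numpy as np

def iterative_normal(neighbors: np.ndarray, iters: int = 5, sigma: float = 0.05) -> np.ndarray:
    """neighbors: (k, 3) points around the query point."""
    w = np.ones(len(neighbors))
    normal = None
    for _ in range(iters):
        centroid = (w[:, None] * neighbors).sum(0) / w.sum()
        centered = neighbors - centroid
        cov = (w[:, None] * centered).T @ centered
        eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        normal = eigvecs[:, 0]                     # direction of least variance
        dist = np.abs(centered @ normal)           # distance to the fitted plane
        w = np.exp(-(dist / sigma) ** 2)           # Gaussian reweighting (assumption)
    return normal
```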
Scaling and Benchmarking Self-Supervised Visual Representation Learning
Self-supervised learning aims to learn representations from the data itself without explicit manual supervision.
GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation
Note that GeoNet++ is generic and can be used in other depth/normal prediction frameworks to improve the quality of 3D reconstruction and pixel-wise accuracy of depth and surface normals.
DenseMTL: Cross-task Attention Mechanism for Dense Multi-task Learning
Multi-task learning has recently emerged as a promising solution for a comprehensive understanding of complex scenes.
A Large Scale Homography Benchmark
We present a large-scale dataset of Planes in 3D (Pi3D), of roughly 1,000 planes observed in 10,000 images from the 1DSfM dataset, and HEB, a large-scale homography estimation benchmark leveraging Pi3D.
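For context, a standard RANSAC homography fit of the kind such a benchmark evaluates can be written with OpenCV as below; the correspondences here are synthetic placeholders.

```python
# Hedged sketch: RANSAC homography estimation from point correspondences.
# Only the cv2.findHomography call is the established API; the data is made up.
import numpy as np
import cv2

# Matched keypoints between two views of the same plane, shape (N, 2) each.
src_pts = np.random.rand(50, 2).astype(np.float32) * 640
dst_pts = src_pts + np.random.randn(50, 2).astype(np.float32)   # stand-in correspondences

H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=3.0)
print(H)                        # estimated 3x3 homography
print(int(inlier_mask.sum()))   # number of RANSAC inliers
```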
iDisc: Internal Discretization for Monocular Depth Estimation
Our method sets the new state of the art with significant improvements on NYU-Depth v2 and KITTI, outperforming all published methods on the official KITTI benchmark.
Independent Component Alignment for Multi-Task Learning
In this work, we propose using the condition number of a linear system of task gradients as a stability criterion for MTL optimization.
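A minimal sketch of the criterion as stated: stack the per-task gradients into a matrix and compute its condition number; anything beyond that (thresholds, any balancing step) is outside this illustration.

```python
# Hedged sketch: condition number of stacked task gradients as a stability signal.
# A large value indicates poorly aligned or poorly scaled task gradients.
import numpy as np

def gradient_condition_number(task_grads: list) -> float:
    """task_grads: one flattened gradient vector per task, all the same length."""
    G = np.stack([g.ravel() for g in task_grads])   # (num_tasks, num_params)
    return np.linalg.cond(G)                        # ratio of largest to smallest singular value

g_depth = np.random.randn(10_000)
g_normals = 0.9 * g_depth + 0.1 * np.random.randn(10_000)   # nearly parallel -> ill-conditioned
print(gradient_condition_number([g_depth, g_normals]))
```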