UniK3D: Universal Camera Monocular 3D Estimation

Monocular 3D estimation is crucial for visual perception. However, current methods fall short by relying on oversimplified assumptions, such as pinhole camera models or rectified images. These limitations severely restrict their general applicability, causing poor performance in real-world scenarios with fisheye or panoramic images and resulting in substantial context loss. To address this, we present UniK3D, the first generalizable method for monocular 3D estimation able to model any camera. Our method introduces a spherical 3D representation which allows for better disentanglement of camera and scene geometry and enables accurate metric 3D reconstruction for unconstrained camera models. Our camera component features a novel, model-independent representation of the pencil of rays, achieved through a learned superposition of spherical harmonics. We also introduce an angular loss, which, together with the camera module design, prevents the contraction of the 3D outputs for wide-view cameras. A comprehensive zero-shot evaluation on 13 diverse datasets demonstrates the state-of-the-art performance of UniK3D across 3D, depth, and camera metrics, with substantial gains in challenging large-field-of-view and panoramic settings, while maintaining top accuracy in conventional pinhole small-field-of-view domains. Code and models are available at github.com/lpiccinelli-eth/unik3d.
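To make the camera component concrete, the sketch below illustrates the two ideas named in the abstract: per-pixel ray directions expressed as a learned superposition of a spherical-harmonics basis, and an angular loss between predicted and ground-truth unit rays. This is a minimal illustration, not the paper's implementation: the basis is truncated at degree l=1, the coefficient layout and the mapping from basis outputs to (azimuth, polar) angles are assumptions for clarity, and the actual method learns the coefficients with a network.

```python
import numpy as np

def real_sph_harmonics_l01(theta, phi):
    """Real spherical harmonics up to degree l=1 at polar angle
    theta and azimuth phi (truncated basis, for illustration only)."""
    Y00  = 0.5 * np.sqrt(1.0 / np.pi) * np.ones_like(theta)
    Y1m1 = np.sqrt(3.0 / (4 * np.pi)) * np.sin(theta) * np.sin(phi)
    Y10  = np.sqrt(3.0 / (4 * np.pi)) * np.cos(theta)
    Y11  = np.sqrt(3.0 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)
    return np.stack([Y00, Y1m1, Y10, Y11], axis=-1)  # (..., 4)

def rays_from_coefficients(coeffs, theta, phi):
    """Superpose the SH basis with (hypothetically learned) coefficients
    to get per-pixel azimuth/polar angles, then unit ray directions."""
    basis = real_sph_harmonics_l01(theta, phi)  # (..., 4)
    az  = basis @ coeffs[0]                     # predicted azimuth
    pol = basis @ coeffs[1]                     # predicted polar angle
    rays = np.stack([np.sin(pol) * np.cos(az),
                     np.sin(pol) * np.sin(az),
                     np.cos(pol)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def angular_loss(pred_rays, gt_rays):
    """Mean angle (radians) between predicted and ground-truth unit rays;
    penalizing angles directly avoids the contraction that plain 3D
    regression suffers on wide-FoV cameras."""
    cos = np.clip(np.sum(pred_rays * gt_rays, axis=-1), -1.0, 1.0)
    return np.arccos(cos).mean()
```

Because the basis is model-independent, the same coefficient set can describe pinhole, fisheye, or panoramic ray bundles; only the learned weights change, not the parameterization.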

Benchmarks

Task: Monocular Depth Estimation
Dataset: KITTI Eigen split
Model: UniK3D (FT, metric)

Metric                    Value   Global Rank
absolute relative error   0.037   # 2
RMSE                      1.68    # 2
RMSE log                  0.060   # 3
Delta < 1.25              0.990   # 1
Delta < 1.25^2            0.998   # 2
Delta < 1.25^3            0.999   # 14

Task: Monocular Depth Estimation
Dataset: NYU-Depth V2
Model: UniK3D (FT, metric)

Metric                    Value   Global Rank
RMSE                      0.173   # 2
absolute relative error   0.044   # 3
Delta < 1.25              0.989   # 1
Delta < 1.25^2            0.998   # 2
Delta < 1.25^3            1.000   # 1
log 10                    0.019   # 1
