UASOL, a large-scale high-resolution outdoor stereo dataset

In this paper, we propose a new dataset for outdoor depth estimation from single and stereo RGB images, acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep learning techniques, which have proven to outperform traditional state-of-the-art computer vision methods. However, these methods require large amounts of reliable ground-truth data. Although several datasets that could be used for depth estimation already exist, almost none of them are outdoor-oriented and captured from an egocentric point of view. Our dataset introduces a large number of high-definition pairs of color frames and corresponding depth maps from a human perspective. In addition, the proposed dataset features human interaction and great variability of data, as shown in this work.
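As a rough illustration of the kind of processing a rectified stereo dataset like this enables, the sketch below estimates a depth map from a stereo pair using OpenCV's semi-global block matching. The file names, matcher parameters, focal length, and baseline are placeholders, not values taken from UASOL.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair; UASOL's actual file layout may differ.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative, not tuned for UASOL.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity under the pinhole model: Z = f * B / d.
# Focal length (pixels) and baseline (meters) below are placeholders.
focal_px, baseline_m = 1400.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```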


Datasets


Introduced in the Paper:

UASOL

Used in the Paper:

SYNTHIA, Make3D, ETH3D

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Monocular Depth Estimation | UASOL | FCRN-DepthPrediction (Laina et al., 2016) | RMSE | 8.119 | #1 |
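For context, RMSE for depth estimation is typically computed only over pixels with valid ground truth; a minimal sketch is shown below (the convention that missing depth is encoded as 0 is an assumption, as it varies between papers).

```python
import numpy as np

def depth_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between predicted and ground-truth depth over valid pixels."""
    valid = gt > 0  # assumes invalid/missing ground-truth depth is encoded as 0
    return float(np.sqrt(np.mean((pred[valid] - gt[valid]) ** 2)))
```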

Methods


No methods listed for this paper.