CutDepth: Edge-aware Data Augmentation in Depth Estimation

16 Jul 2021  ·  Yasunori Ishii, Takayoshi Yamashita ·

It is difficult to collect data on a large scale for monocular depth estimation because the task requires the simultaneous acquisition of RGB images and depths. Data augmentation is thus important to this task. However, there has been little research on data augmentation for tasks such as monocular depth estimation, where the transformation is performed pixel by pixel. In this paper, we propose a data augmentation method, called CutDepth. In CutDepth, part of the depth map is pasted onto the input image during training. The method extends data variations without destroying edge features. Experiments objectively and subjectively show that the proposed method outperforms conventional methods of data augmentation. The estimation accuracy is improved with CutDepth even though there is little training data at long distances.
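The core idea, pasting a random rectangular region of the ground-truth depth map onto the RGB input, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch-size bounds, the application probability `p`, and the use of NumPy arrays are all assumptions made for the example.

```python
import numpy as np

def cut_depth(rgb, depth, p=0.75, max_ratio=0.75, rng=None):
    """CutDepth-style augmentation sketch (hyperparameters assumed).

    rgb:   H x W x 3 array, the input image.
    depth: H x W array, the aligned ground-truth depth map.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:          # apply the augmentation with probability p
        return rgb.copy()
    h, w, _ = rgb.shape
    # Sample a random rectangle; each side is capped at max_ratio of the image.
    cw = int(rng.uniform(0.1, max_ratio) * w)
    ch = int(rng.uniform(0.1, max_ratio) * h)
    x0 = rng.integers(0, w - cw + 1)
    y0 = rng.integers(0, h - ch + 1)
    out = rgb.copy()
    # Replicate the single-channel depth across the RGB channels in the patch,
    # so depth edges replace texture edges inside the pasted region.
    patch = depth[y0:y0 + ch, x0:x0 + cw]
    out[y0:y0 + ch, x0:x0 + cw] = patch[..., None]
    return out
```

Because the pasted depth patch is spatially aligned with the image, object boundaries in the patch coincide with the true edges, which is why the augmentation does not destroy edge features.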

Results from the Paper


Ranked #43 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Task: Monocular Depth Estimation
Dataset: NYU-Depth V2
Model: CutDepth
Uses extra training data: Yes

Metric                    Value   Global Rank
RMSE                      0.375   #43
absolute relative error   0.104   #41
Delta < 1.25              0.899   #42
Delta < 1.25^2            0.985   #37
Delta < 1.25^3            0.997   #26
log 10                    0.044   #40

Methods


No methods listed for this paper.