LightDepth: A Resource Efficient Depth Estimation Approach for Dealing with Ground Truth Sparsity via Curriculum Learning

16 Nov 2022  ·  Fatemeh Karimi, Amir Mehrpanah, Reza Rawassizadeh ·

Advances in neural networks make it possible to tackle complex computer vision tasks, such as depth estimation of outdoor scenes, with unprecedented accuracy. Promising research has been done on depth estimation; however, current efforts are computationally resource-intensive and do not account for the resource constraints of autonomous devices such as robots and drones. In this work, we present a fast, battery-efficient approach to depth estimation. Our approach devises model-agnostic curriculum-based learning for depth estimation. Our experiments show that our model's accuracy is on par with state-of-the-art models, while its response time outperforms other models by 71%. All code is available online.
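The abstract does not spell out how the curriculum is constructed, but a common way to apply curriculum learning to sparse ground truth is to present training samples in order of increasing difficulty, e.g. from densely annotated depth maps to sparse ones. The sketch below is a hypothetical illustration of that idea; the function names (`density`, `curriculum_order`) and the density-based ordering criterion are our own assumptions, not taken from the paper.

```python
def density(depth_map):
    """Fraction of pixels with valid (nonzero) ground-truth depth.

    In sparse depth datasets (e.g. LiDAR projections), missing
    pixels are typically encoded as 0.
    """
    flat = [v for row in depth_map for v in row]
    return sum(1 for v in flat if v > 0) / len(flat)

def curriculum_order(samples):
    """Order samples from dense (easy) to sparse (hard) ground truth."""
    return sorted(samples, key=lambda s: density(s["depth"]), reverse=True)

# Toy 2x2 depth maps with varying ground-truth sparsity.
samples = [
    {"id": "sparse", "depth": [[0.0, 1.2], [0.0, 0.0]]},
    {"id": "dense",  "depth": [[1.0, 2.0], [3.0, 4.0]]},
    {"id": "medium", "depth": [[0.0, 1.5], [2.0, 0.0]]},
]
ordered = [s["id"] for s in curriculum_order(samples)]
print(ordered)  # dense first, sparse last
```

A training loop would then feed batches in this order (or gradually mix in harder samples per epoch), which is the standard easy-to-hard schedule in curriculum learning.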



Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Depth Estimation | KITTI Eigen split | LightDepth | Number of parameters (M) | 42.6 | # 1 |
| Monocular Depth Estimation | KITTI Eigen split | LightDepth | absolute relative error | 0.070 | # 23 |
| Monocular Depth Estimation | KITTI Eigen split | LightDepth | RMSE | 2.923 | # 23 |
| Monocular Depth Estimation | KITTI Eigen split | LightDepth | GFlops | 42.8 | # 1 |
| Monocular Depth Estimation | KITTI Eigen split | LightDepth | Battery | 96.42 | # 1 |
| Monocular Depth Estimation | KITTI Eigen split | LightDepth | Runtime (s) | 16.07 | # 1 |
