MSFNet: Multi-Scale Features Network for Monocular Depth Estimation

14 Jul 2021 · Meiqi Pei

In recent years, monocular depth estimation has been widely applied to understanding the surrounding 3D environment and has made great progress. However, recovering depth directly from a single image is an ill-posed problem. With the rapid development of deep learning, this problem has become tractable. Although more and more approaches have been proposed, most existing methods inevitably lose details due to continuous downsampling when mapping from RGB space to depth space. To this end, we design a Multi-Scale Features Network (MSFNet), which consists of an Enhanced Diverse Attention (EDA) module and an Upsample-Stage Fusion (USF) module. The EDA module employs spatial attention to learn significant spatial information, while the USF module complements low-level detail information with high-level semantic information through multi-scale feature fusion to improve prediction quality. In addition, since easy samples are fitted well early in training, hard samples are slow to converge. Therefore, we design a batch-loss that assigns larger loss factors to the harder samples in a batch. Experiments on the NYU-Depth V2 dataset and the KITTI dataset demonstrate that our proposed approach is competitive with state-of-the-art methods in both qualitative and quantitative evaluation.
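No code accompanies this listing, so the following is a minimal PyTorch sketch of the batch-loss idea described in the abstract: per-sample losses within a batch are re-weighted so that harder samples (larger errors) receive larger loss factors. The softmax weighting, the per-pixel L1 error, and the `temperature` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def batch_weighted_depth_loss(pred, target, temperature=1.0):
    """Re-weight per-sample L1 depth losses by their relative difficulty.

    pred, target: (B, 1, H, W) predicted and ground-truth depth maps.
    """
    # Per-sample mean absolute error over all pixels: shape (B,)
    per_sample = (pred - target).abs().flatten(1).mean(dim=1)

    # Larger errors -> larger weights; detach so the weights act as constants.
    weights = torch.softmax(per_sample.detach() / temperature, dim=0)

    # Scale so the weights average to one; a uniformly difficult batch
    # then reduces to the plain mean loss.
    weights = weights * per_sample.numel()

    return (weights * per_sample).mean()


# Usage: loss = batch_weighted_depth_loss(model(images), depth_gt)
```

A spatial-attention gate of the kind the EDA module is said to employ can be sketched in a similarly hedged way; the channel-pooling-plus-convolution design below is a common generic pattern and an assumption, not the paper's actual EDA architecture.

```python
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Generic spatial attention: channel-wise average and max pooling,
    followed by a convolution, produce a per-pixel gate that re-weights
    the feature map."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values   # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate
```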
