Depth Quality Aware Salient Object Detection

7 Aug 2020  ·  Chenglizhao Chen, Jipeng Wei, Chong Peng, Hong Qin ·

Existing fusion-based RGB-D salient object detection methods usually adopt a bi-stream structure to balance the fusion of RGB and depth (D) information. However, depth quality varies from scene to scene, and the SOTA bi-stream approaches are depth-quality unaware, which makes it difficult to achieve a complementary fusion status between RGB and D and leads to poor fusion results when the depth is of low quality. This paper therefore integrates a novel depth-quality-aware subnet into the classic bi-stream structure, assessing depth quality before performing selective RGB-D fusion. Compared with the SOTA bi-stream methods, the major highlight of our method is its ability to lessen the importance of low-quality, no-contribution, or even negative-contribution D regions during RGB-D fusion, achieving a much improved complementary status between RGB and D.
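
The core idea, predicting a depth-quality weight and using it to suppress unreliable depth regions before the bi-stream fusion, can be sketched roughly as follows. This is an illustrative PyTorch sketch under stated assumptions, not the authors' implementation; the module names (DepthQualityGate, BiStreamFusion) and layer choices are hypothetical.

```python
# Minimal sketch: a small "depth quality" subnet predicts a per-pixel
# weight in [0, 1] from RGB and depth features, and that weight scales
# the depth features before the usual bi-stream fusion.
import torch
import torch.nn as nn


class DepthQualityGate(nn.Module):
    """Predicts a per-pixel depth-quality map from RGB and depth features (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # quality in [0, 1]; low values suppress depth
        )

    def forward(self, rgb_feat, depth_feat):
        return self.score(torch.cat([rgb_feat, depth_feat], dim=1))


class BiStreamFusion(nn.Module):
    """Fuses RGB features with quality-weighted depth features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = DepthQualityGate(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        q = self.gate(rgb_feat, depth_feat)        # B x 1 x H x W quality map
        gated_depth = depth_feat * q               # down-weight low-quality D regions
        return self.fuse(torch.cat([rgb_feat, gated_depth], dim=1))


# Usage with dummy feature maps:
fusion = BiStreamFusion(channels=64)
rgb_feat = torch.randn(1, 64, 28, 28)
depth_feat = torch.randn(1, 64, 28, 28)
out = fusion(rgb_feat, depth_feat)
print(out.shape)  # torch.Size([1, 64, 28, 28])
```

In this sketch the gate acts as a soft, spatially varying attenuation of the depth stream, so regions where depth is judged unreliable contribute little to the fused representation, which matches the selective-fusion behaviour described in the abstract.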

Benchmark result

Task: RGB-D Salient Object Detection
Dataset: NJU2K
Model: DQSD-VGG19
Metrics: S-Measure = 89.7 (Global Rank #16); Average MAE = 0.052 (Global Rank #21)
