Select, Supplement and Focus for RGB-D Saliency Detection

Depth data, which carry a preponderance of discriminative power for object location, have been proven beneficial for accurate saliency prediction. However, RGB-D saliency detection methods are also negatively influenced by randomly distributed erroneous or missing regions on the depth map or along object boundaries. This offers the possibility of achieving more effective inference with well-designed models. In this paper, we propose a new framework for accurate RGB-D saliency detection that takes into account the local and global complementarities of the two modalities. This is achieved by designing a complementary interaction model discriminative enough to simultaneously select useful representations from RGB and depth data while refining the object boundaries. Moreover, we propose a compensation-aware loss to further process the information not considered by the complementary interaction model, improving generalization to challenging scenes. Experiments on six public datasets show that our method outperforms 18 state-of-the-art methods.
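The page carries no code, but the abstract's two ideas can be sketched: a complementary interaction model that cross-selects RGB and depth features, and a compensation-aware loss that re-weights what the fusion misses. Below is a minimal PyTorch illustration, not the authors' released implementation; the module structure, the gating design, and all names (`ComplementaryInteraction`, `compensation_aware_loss`, the residual re-weighting) are assumptions made for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplementaryInteraction(nn.Module):
    """Cross-modal gating: each modality selects useful features from the
    other before fusion. Illustrative only; not the paper's exact design."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs predict per-pixel, per-channel attention in [0, 1]
        self.rgb_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.depth_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # Depth features are gated by attention predicted from RGB, suppressing
        # erroneous or missing depth regions; RGB is gated by depth in turn.
        depth_selected = depth_feat * self.rgb_gate(rgb_feat)
        rgb_selected = rgb_feat * self.depth_gate(depth_feat)
        return self.fuse(torch.cat([rgb_selected, depth_selected], dim=1))

def compensation_aware_loss(pred_logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy re-weighted toward pixels the current prediction
    gets wrong -- one plausible reading of a 'compensation-aware' term."""
    bce = F.binary_cross_entropy_with_logits(pred_logits, target, reduction="none")
    with torch.no_grad():
        # Per-pixel residual: large where the fused prediction missed.
        error = (torch.sigmoid(pred_logits) - target).abs()
    return ((1.0 + error) * bce).mean()
```

A forward pass on dummy features, e.g. `ComplementaryInteraction(64)(torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56))`, yields a fused map of the same shape; the cross-modal gates are what lets a reliable RGB feature suppress an erroneous depth region, matching the abstract's "select" step.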


Results from the Paper


Ranked #15 on RGB-D Salient Object Detection on NJU2K (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| RGB-D Salient Object Detection | NJU2K | SSF | S-Measure | 89.9 | #15 | Yes |
| RGB-D Salient Object Detection | NJU2K | SSF | Average MAE | 0.043 | #11 | Yes |
| Thermal Image Segmentation | RGB-T-Glass-Segmentation | SSF | MAE | 0.097 | #18 | — |
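
For reference, the MAE columns above follow the standard salient-object-detection definition: the mean absolute difference between the continuous saliency map and the binary ground truth, both scaled to [0, 1]. A minimal NumPy sketch (the function name and the 8-bit normalization branch are illustrative):

```python
import numpy as np

def saliency_mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Absolute Error between a predicted saliency map and its
    ground-truth mask, both scaled to [0, 1]; lower is better."""
    pred = pred.astype(np.float64)
    if pred.max() > 1.0:                    # e.g. 8-bit maps stored in [0, 255]
        pred = pred / 255.0
    gt = (np.asarray(gt, dtype=np.float64) > 0.5).astype(np.float64)  # binarize GT
    return float(np.abs(pred - gt).mean())
```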

Methods


No methods listed for this paper.