EF-Net: A novel enhancement and fusion network for RGB-D saliency detection

4 Nov 2020  ·  Qian Chen, Keren Fu, Ze Liu, Geng Chen, Hongwei Du, Bensheng Qiu, Ling Shao ·

Salient object detection (SOD) has gained tremendous attention in the field of computer vision. Multi-modal SOD, which exploits the complementary information in RGB images and depth maps, has shown remarkable success, making RGB-D saliency detection an active research topic. In this paper, we propose a novel multi-modal enhancement and fusion network (EF-Net) for effective RGB-D saliency detection. Specifically, we first utilize a color hint map module on RGB images to predict a hint map, which encodes coarse information about the salient objects. The resulting hint map is then used to enhance the depth map via our depth enhancement module, which suppresses noise and sharpens object boundaries. Finally, we propose an effective layer-wise aggregation module to fuse the features extracted from the enhanced depth maps and RGB images for accurate detection of salient objects. Our EF-Net employs an enhancement-and-fusion framework for saliency detection, making full use of the information in both RGB images and depth maps. In addition, our depth enhancement module effectively addresses the low-quality issue of depth maps, which markedly boosts saliency detection performance. Extensive experiments on five widely used benchmark datasets demonstrate that our method outperforms 12 state-of-the-art RGB-D saliency detection approaches on five key evaluation metrics.
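To make the enhancement-and-fusion idea concrete, here is a minimal NumPy sketch of the two core steps the abstract describes: gating a noisy depth map with a coarse saliency hint, then fusing RGB and enhanced-depth features layer by layer. This is an illustrative simplification under assumed semantics, not the paper's actual network; the function names, the gating formula, and the additive fusion are all hypothetical stand-ins for the learned modules in EF-Net.

```python
import numpy as np

def enhance_depth(depth, hint, alpha=0.5):
    """Illustrative depth enhancement (not the paper's learned module):
    gate the raw depth map with a coarse saliency hint in [0, 1] so that
    salient regions keep full depth response while background regions
    are attenuated, suppressing depth noise."""
    return depth * (alpha + (1.0 - alpha) * hint)

def fuse_layerwise(rgb_feats, depth_feats):
    """Illustrative layer-wise aggregation (stand-in for the paper's
    fusion module): combine per-layer RGB and enhanced-depth feature
    maps by elementwise addition, then average across layers to obtain
    a single saliency score map."""
    fused = [r + d for r, d in zip(rgb_feats, depth_feats)]
    return sum(fused) / len(fused)

# Toy example on a 4x4 "image": the hint marks a central salient region.
rng = np.random.default_rng(0)
depth = rng.random((4, 4))
hint = np.zeros((4, 4))
hint[1:3, 1:3] = 1.0                      # coarse salient-object hint
enhanced = enhance_depth(depth, hint)      # background depth halved
saliency = fuse_layerwise([rng.random((4, 4))], [enhanced])
```

With `alpha=0.5`, depth values inside the hinted region pass through unchanged while background depth is halved, which is the noise-suppression effect the depth enhancement module aims for (here by a fixed gate rather than a learned one).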


