35 papers with code • 8 benchmarks • 5 datasets
RGB-D salient object detection (SOD) aims to distinguish the most visually distinctive objects or regions in a scene given paired RGB and depth data. It has a wide range of applications, including video/image segmentation, object recognition, visual tracking, foreground map evaluation, image retrieval, content-aware image editing, information discovery, image synthesis, and weakly supervised semantic segmentation. Here, depth information plays an important complementary role in finding salient objects. Online benchmark: http://dpfan.net/d3netbenchmark.
Our framework includes two main models: 1) a generator model, which maps the input image and a latent variable to a stochastic saliency prediction, and 2) an inference model, which gradually updates the latent variable by sampling it from the true or approximate posterior distribution.
Ranked #1 on RGB Salient Object Detection on HKU-IS
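The generator-plus-inference idea above can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual model: the generator is a toy linear-plus-sigmoid map, and `refine_latent` stands in for posterior sampling with plain gradient steps on the latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: a 16-D "image" feature vector and a 2-D latent.
D_IN, D_Z = 16, 2
W_x = rng.normal(scale=0.1, size=(D_IN, D_IN))   # generator weights for the image
W_z = rng.normal(scale=0.1, size=(D_Z, D_IN))    # generator weights for the latent

def generator(x, z):
    """Map image features x and latent z to a stochastic saliency map in (0, 1)."""
    logits = x @ W_x + z @ W_z
    return 1.0 / (1.0 + np.exp(-logits))

def refine_latent(x, y, z, steps=50, lr=0.5):
    """Gradually update z toward the ground truth y — a crude stand-in for
    sampling z from the approximate posterior."""
    for _ in range(steps):
        pred = generator(x, z)
        # Gradient of the squared error w.r.t. z, through the sigmoid.
        grad = ((pred - y) * pred * (1.0 - pred)) @ W_z.T
        z = z - lr * grad
    return z

x = rng.normal(size=D_IN)
y = (rng.random(D_IN) > 0.5).astype(float)   # fake ground-truth saliency labels
z0 = rng.normal(size=D_Z)
z1 = refine_latent(x, y, z0)

err0 = np.mean((generator(x, z0) - y) ** 2)
err1 = np.mean((generator(x, z1) - y) ** 2)
```

Because the latent is refined per input, repeated draws of `z0` yield different saliency maps, which is what makes the prediction stochastic rather than a single deterministic output.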
In this paper, we propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Ranked #3 on RGB-D Salient Object Detection on STERE
The use of RGB-D information for salient object detection has been extensively explored in recent years.
Ranked #4 on RGB-D Salient Object Detection on SSD
Inspired by the observation that the RGB and depth modalities share certain commonality in distinguishing salient objects, a novel joint learning and densely cooperative fusion (JL-DCF) architecture is designed to learn from both RGB and depth inputs through a shared network backbone, known as a Siamese architecture.
Ranked #1 on RGB-D Salient Object Detection on SIP (using extra training data)
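The Siamese idea above is simply weight sharing: one backbone processes both modalities. A minimal sketch, assuming a toy one-layer backbone and a placeholder fusion rule (the real JL-DCF fusion is considerably richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy sizes: 8-D modality inputs, 4-D shared features.
D_IN, D_FEAT = 8, 4
W_shared = rng.normal(scale=0.3, size=(D_IN, D_FEAT))  # one backbone, both modalities

def backbone(x):
    """Shared (Siamese) backbone: identical weights for RGB and for depth."""
    return np.maximum(0.0, x @ W_shared)  # ReLU features

def densely_fuse(feat_rgb, feat_depth):
    """Placeholder for cooperative fusion: keep both streams plus their sum."""
    return np.concatenate([feat_rgb, feat_depth, feat_rgb + feat_depth])

rgb = rng.normal(size=D_IN)
depth = rng.normal(size=D_IN)
fused = densely_fuse(backbone(rgb), backbone(depth))
```

The design point is that `backbone` is called twice with the same `W_shared`, so commonalities between RGB and depth are learned once instead of in two separate encoders.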
The large availability of depth sensors provides valuable complementary information for salient object detection (SOD) in RGBD images.
Ranked #5 on RGB-D Salient Object Detection on LFSD
In this work, we propose a novel depth-induced multi-scale recurrent attention network for saliency detection.
Ranked #11 on RGB-D Salient Object Detection on NJU2K (using extra training data)
The central question in RGB-D salient object detection (SOD) is how to better integrate and utilize cross-modal fusion information.
Ranked #3 on RGB-D Salient Object Detection on NJU2K
This paper proposes a novel joint learning and densely-cooperative fusion (JL-DCF) architecture for RGB-D salient object detection.
Ranked #4 on RGB-D Salient Object Detection on SIP
In particular, we 1) propose a bifurcated backbone strategy (BBS) to split the multi-level features into teacher and student features, and 2) utilize a depth-enhanced module (DEM) to excavate informative parts of depth cues from the channel and spatial views.
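The depth-enhanced module (DEM) idea — gating depth features from the channel view and then the spatial view — can be illustrated with a toy attention sketch. The shapes, pooling choices, and sigmoid gates below are assumptions for illustration, not the paper's actual module:

```python
import numpy as np

rng = np.random.default_rng(2)

def depth_enhance(depth_feat):
    """Toy depth-enhanced module: a channel gate followed by a spatial gate,
    loosely following the channel- and spatial-view idea in the text."""
    # Channel attention: global average pool -> sigmoid gate per channel.
    chan = depth_feat.mean(axis=(1, 2))
    chan_gate = 1.0 / (1.0 + np.exp(-chan))
    feat = depth_feat * chan_gate[:, None, None]
    # Spatial attention: mean over channels -> sigmoid gate per location.
    spat = feat.mean(axis=0)
    spat_gate = 1.0 / (1.0 + np.exp(-spat))
    return feat * spat_gate[None, :, :]

x = rng.normal(size=(3, 5, 5))   # (channels, height, width) toy depth features
y = depth_enhance(x)
```

Since both gates lie in (0, 1), the module can only attenuate, never amplify, each feature — i.e., it excavates informative parts of the depth cues by suppressing the rest.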