Humans can naturally and effectively find salient regions in complex scenes.
Our approach can be combined with various existing U-shape-based salient object detection methods by replacing the connections between the bottom-up and top-down pathways.
Recent advances in CNNs are mostly devoted to designing more complex architectures that enhance their representation learning capacity.
To evaluate the performance of our proposed network on these tasks, we conduct extensive experiments on multiple representative datasets.
In the second step, we integrate the local edge information and global location information to obtain the salient edge features.
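A minimal NumPy sketch of this fusion step, assuming the local edge cue comes from a simple gradient operator and the global location cue is a coarse saliency map gating those edges (the function names and operators here are illustrative, not the paper's actual modules):

```python
import numpy as np

def sobel_edges(img):
    """Local edge cue: gradient magnitude via 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def upsample_nearest(coarse, shape):
    """The global location cue is low-resolution; upsample it to full size."""
    rows = np.repeat(np.arange(coarse.shape[0]), shape[0] // coarse.shape[0])
    cols = np.repeat(np.arange(coarse.shape[1]), shape[1] // coarse.shape[1])
    return coarse[np.ix_(rows, cols)]

def salient_edge_features(img, coarse_location):
    """Gate local edges by the upsampled global location map:
    edges survive only where the object is predicted to be."""
    edges = sobel_edges(img)
    loc = upsample_nearest(coarse_location, img.shape)
    return edges * loc
```

For example, with a vertical step edge and a location map that is zero on the bottom half of the image, the edge responses in the suppressed half vanish while those in the active half are kept.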
We further design a feature aggregation module (FAM) that fuses coarse-level semantic information with the fine-level features from the top-down pathway.
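One common realization of such an aggregation module, sketched here in NumPy under the assumption of multi-scale average pooling followed by nearest-neighbour upsampling and branch averaging (the specific pooling scales and the trailing convolution of the actual module are omitted):

```python
import numpy as np

def avg_pool(x, k):
    """Non-overlapping k x k average pooling (assumes k divides both dims)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample_nearest(x, k):
    """Nearest-neighbour upsampling by an integer factor k."""
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def feature_aggregation(fused, scales=(2, 4, 8)):
    """Pool the fused map at several scales, upsample each branch back to
    full resolution, and average with the identity branch, so that coarse
    context is blended with fine spatial detail."""
    out = fused.copy()
    for k in scales:
        out += upsample_nearest(avg_pool(fused, k), k)
    return out / (len(scales) + 1)
```

A constant input passes through unchanged (every pooled branch reproduces it), while spatially varying inputs are smoothed toward their multi-scale context.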
Although these tasks are inherently very different, we show that our unified approach performs well on all of them and substantially outperforms current single-purpose state-of-the-art methods.
In this paper, we improve semantic segmentation by automatically learning from Flickr images associated with a particular keyword, without relying on any explicit user annotations. This substantially alleviates the dependence on accurate annotations compared to previous weakly supervised methods.
Our analysis identifies a serious design bias in existing SOD datasets: the assumption that each image contains at least one clearly outstanding salient object in low clutter.