C$^{4}$Net: Contextual Compression and Complementary Combination Network for Salient Object Detection

Deep learning approaches to salient object detection have achieved strong results in recent years. Most of these models are based on encoder-decoder architectures that differ mainly in how they combine multi-level features. In this paper, we show that feature concatenation works better than other combination methods such as multiplication or addition. We also show that joint feature learning gives better results, owing to the information shared during processing. We design a Complementary Extraction Module (CEM) to extract the necessary features while preserving edges. Our proposed Excessiveness Loss (EL) function reduces false-positive predictions and, together with other weighted loss functions, refines the edges. Our Pyramid-Semantic Module (PSM) with a global guiding flow (G) makes the prediction more accurate by supplying high-level complementary information to shallower layers. Experimental results show that the proposed model outperforms state-of-the-art methods on all benchmark datasets under three evaluation metrics.
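The comparison of combination strategies can be illustrated with a minimal sketch, assuming PyTorch and two same-shaped feature maps; the class and function names below are hypothetical and are not the authors' implementation. Concatenation keeps both feature sets and lets a learned projection decide how to mix them, whereas element-wise addition or multiplication merges them with a fixed rule.

```python
# Minimal sketch (not the authors' code): three common ways to combine two
# feature maps of shape (N, C, H, W).
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Concatenate along the channel axis, then project back to C channels."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.proj(torch.cat([a, b], dim=1))

def add_fusion(a, b):   # element-wise addition
    return a + b

def mul_fusion(a, b):   # element-wise multiplication
    return a * b

if __name__ == "__main__":
    a = torch.randn(1, 64, 56, 56)   # e.g. an encoder feature map
    b = torch.randn(1, 64, 56, 56)   # e.g. a decoder feature map
    print(ConcatFusion(64)(a, b).shape)              # torch.Size([1, 64, 56, 56])
    print(add_fusion(a, b).shape, mul_fusion(a, b).shape)
```

The 1x1 convolution after concatenation restores the original channel count, so the concatenation-based variant can be dropped into the same decoder position as the addition or multiplication variants.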