F3Net: Fusion, Feedback and Focus for Salient Object Detection

26 Nov 2019  ·  Jun Wei, Shuhui Wang, Qingming Huang ·

Most existing salient object detection models have achieved great progress by aggregating multi-level features extracted from convolutional neural networks. However, because different convolutional layers have different receptive fields, the features they generate differ substantially. Common feature fusion strategies (addition or concatenation) ignore these differences and may lead to suboptimal solutions. In this paper, we propose F3Net to address this problem. It consists mainly of a cross feature module (CFM) and a cascaded feedback decoder (CFD), trained by minimizing a new pixel position aware (PPA) loss. Specifically, the CFM selectively aggregates multi-level features. Unlike addition and concatenation, the CFM adaptively selects complementary components from the input features before fusion, which effectively avoids introducing redundant information that could corrupt the original features. In addition, the CFD adopts a multi-stage feedback mechanism in which features close to the supervision are fed back to the outputs of earlier layers to supplement them and reduce the differences between features. These refined features pass through several similar iterations before the final saliency maps are generated. Furthermore, unlike binary cross entropy, the proposed PPA loss does not treat all pixels equally: it synthesizes the local structure information around each pixel to guide the network to focus on local details. Hard pixels from boundaries or error-prone regions receive more attention to emphasize their importance. As a result, F3Net segments salient object regions accurately while preserving clear local details. Comprehensive experiments on five benchmark datasets demonstrate that F3Net outperforms state-of-the-art approaches on six evaluation metrics.
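The idea of weighting pixels by their local structure can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' released code: it assumes the common formulation in which each pixel's weight grows with how much the ground truth differs from its local average (so boundary and error-prone pixels weigh more), and combines a weighted binary cross entropy with a weighted IoU term. The kernel size and the scaling factor 5 are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_position_aware_loss(pred, mask):
    """Sketch of a pixel-position-aware loss.

    pred: raw logits, shape (B, 1, H, W)
    mask: binary ground-truth saliency map, shape (B, 1, H, W)
    """
    # Pixels whose ground-truth value differs from the local average
    # (e.g. object boundaries) get a larger weight.
    weight = 1 + 5 * torch.abs(
        F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask
    )

    # Weighted binary cross entropy, normalized by the total weight.
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
    wbce = (weight * wbce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))

    # Weighted IoU loss on the predicted probabilities.
    prob = torch.sigmoid(pred)
    inter = (prob * mask * weight).sum(dim=(2, 3))
    union = ((prob + mask) * weight).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()
```

Because the weight map is computed from the ground truth alone, it adds no learnable parameters; it only reshapes the gradient so the network attends to hard, structurally informative pixels.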

Results (model: F3Net; "#n" is the global leaderboard rank for that metric)

Task: Dichotomous Image Segmentation

Dataset   max F-Measure  weighted F-measure  MAE          S-Measure    E-measure    HCE
DIS-TE1   0.640 (#13)    0.549 (#13)         0.095 (#13)  0.721 (#13)  0.783 (#14)  244 (#11)
DIS-TE2   0.712 (#13)    0.620 (#13)         0.097 (#13)  0.755 (#13)  0.820 (#14)  542 (#11)
DIS-TE3   0.743 (#14)    0.656 (#13)         0.092 (#11)  0.773 (#12)  0.848 (#13)  1059 (#13)
DIS-TE4   0.721 (#15)    0.633 (#13)         0.107 (#12)  0.752 (#15)  0.825 (#12)  3760 (#13)
DIS-VD    0.685 (#14)    0.595 (#13)         0.107 (#13)  0.733 (#14)  0.800 (#13)  1567 (#13)

Task: Salient Object Detection

Dataset     max_F1      MAE         S-measure   E-measure
DUT-OMRON   0.813 (#3)  0.052 (#4)  0.838 (#2)  0.869 (#3)
DUTS-TE     0.891 (#3)  0.035 (#4)  0.888 (#2)  0.901 (#4)
ECSSD       0.945 (#3)  0.033 (#2)  0.924 (#2)  0.927 (#2)
HKU-IS      0.936 (#4)  0.028 (#4)  0.917 (#4)  0.952 (#4)
PASCAL-S    0.871 (#4)  0.061 (#4)  0.854 (#3)  0.858 (#3)
