REFINE: Prediction Fusion Network for Panoptic Segmentation

Panoptic segmentation aims to generate a class and instance prediction for every pixel of the input image, a challenging task that is far more complicated than naively fusing semantic and instance segmentation results. Prediction fusion is therefore crucial for accurate panoptic segmentation. In this paper, we present REFINE, a pREdiction FusIon NEtwork for panoptic segmentation, which achieves high-quality panoptic segmentation by improving both cross-task and within-task prediction fusion. Our single-model ResNeXt-101 with DCN achieves PQ=51.5 on the COCO dataset, surpassing state-of-the-art performance by a convincing margin and matching ensembled models. Our smaller model with a ResNet-50 backbone achieves PQ=44.9, which is comparable with state-of-the-art methods that use larger backbones.



Results from the Paper

Task: Panoptic Segmentation — Dataset: COCO test-dev

Model                    | Metric | Value | Global Rank
REFINE (ResNet-101-DCN)  | PQ     | 49.6  | #16
                         | PQst   | 37.7  | #17
                         | PQth   | 57.5  | #11
REFINE (ResNeXt-101-DCN) | PQ     | 51.5  | #11
                         | PQst   | 39.2  | #13
                         | PQth   | 59.6  | #7
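The PQ, PQst (stuff), and PQth (things) numbers above follow the standard panoptic quality metric: predicted and ground-truth segments are matched when their IoU exceeds 0.5, and PQ is the sum of matched IoUs divided by |TP| + ½|FP| + ½|FN|, which factors into segmentation quality times recognition quality. A minimal sketch of that computation, assuming the per-segment IoUs and unmatched counts are already available (function name hypothetical):

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Compute PQ from matched segment IoUs (each > 0.5) and unmatched counts.

    PQ = SQ * RQ, where SQ is the mean IoU over matched (TP) segments and
    RQ is TP / (TP + 0.5*FP + 0.5*FN).
    """
    tp = len(matched_ious)
    if tp + num_fp + num_fn == 0:
        return 0.0
    sq = sum(matched_ious) / tp if tp else 0.0          # segmentation quality
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)        # recognition quality
    return sq * rq

# Example: three matched segments, one false positive, one false negative
print(round(panoptic_quality([0.9, 0.8, 0.7], 1, 1), 3))  # 0.8 * 0.75 = 0.6
```

In the benchmark, this quantity is averaged over classes; PQst restricts the average to stuff classes and PQth to thing classes.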

