PseudoSeg: Designing Pseudo Labels for Semantic Segmentation

Recent advances in semi-supervised learning (SSL) demonstrate that a combination of consistency regularization and pseudo-labeling can effectively improve image classification accuracy in the low-data regime. Compared to classification, semantic segmentation incurs far higher labeling costs, so it benefits greatly from data-efficient training methods. However, the structured outputs of segmentation make it difficult to apply existing SSL strategies directly (e.g., in designing pseudo-labeling and augmentation). To address this problem, we present a simple and novel re-design of pseudo-labeling that generates well-calibrated structured pseudo labels for training with unlabeled or weakly-labeled data. Our pseudo-labeling strategy is agnostic to the network structure and applies within a one-stage consistency training framework. We demonstrate its effectiveness in both low-data and high-data regimes. Extensive experiments validate that pseudo labels generated by wisely fusing diverse sources, combined with strong data augmentation, are crucial to consistency training for segmentation. The source code is available at https://github.com/googleinterns/wss.
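The core recipe the abstract describes — per-pixel pseudo labels from a weakly-augmented view supervising a strongly-augmented view — can be sketched as follows. This is a minimal, illustrative version of thresholded pseudo-label consistency training (in the spirit of FixMatch extended to dense prediction), not the paper's actual method: PseudoSeg additionally fuses decoder predictions with attention-derived predictions and calibrates the result, which is omitted here. The function name `pseudo_label_loss` and the `threshold` parameter are hypothetical.

```python
import numpy as np

def pseudo_label_loss(weak_logits, strong_logits, threshold=0.9):
    """Consistency loss with hard per-pixel pseudo labels (illustrative sketch;
    PseudoSeg's real pseudo labels fuse and calibrate multiple prediction sources).

    weak_logits, strong_logits: arrays of shape (H, W, C) with per-pixel class scores
    from the weakly- and strongly-augmented views of the same unlabeled image.
    """
    # Softmax over classes for the weakly-augmented view (numerically stabilized).
    shifted = weak_logits - weak_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)

    pseudo = probs.argmax(axis=-1)   # hard per-pixel pseudo labels, shape (H, W)
    conf = probs.max(axis=-1)        # confidence of each pseudo label
    mask = conf >= threshold         # train only on confident pixels

    # Per-pixel log-softmax of the strong view, then cross-entropy vs. pseudo labels.
    s_shift = strong_logits - strong_logits.max(axis=-1, keepdims=True)
    log_probs = s_shift - np.log(np.exp(s_shift).sum(axis=-1, keepdims=True))
    h, w = pseudo.shape
    nll = -log_probs[np.arange(h)[:, None], np.arange(w)[None, :], pseudo]

    if mask.sum() == 0:
        return 0.0  # no pixel passed the confidence threshold
    return float((nll * mask).sum() / mask.sum())
```

In practice this unlabeled-data loss is added to the ordinary supervised loss on the labeled subset, and the confidence mask keeps noisy early-training pseudo labels from dominating the gradient.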

ICLR 2021

Datasets


Task                                   Dataset             Model      Metric           Value  Global Rank
Semi-Supervised Semantic Segmentation  COCO 1/512 labeled  PseudoSeg  Validation mIoU  29.8   #6
Semi-Supervised Semantic Segmentation  COCO 1/256 labeled  PseudoSeg  Validation mIoU  37.1   #6
Semi-Supervised Semantic Segmentation  COCO 1/128 labeled  PseudoSeg  Validation mIoU  39.1   #6
Semi-Supervised Semantic Segmentation  COCO 1/64 labeled   PseudoSeg  Validation mIoU  41.8   #6
Semi-Supervised Semantic Segmentation  COCO 1/32 labeled   PseudoSeg  Validation mIoU  43.6   #5
