Modeling the Background for Incremental Learning in Semantic Segmentation

Despite their effectiveness in a wide range of tasks, deep architectures suffer from some important limitations. In particular, they are vulnerable to catastrophic forgetting, i.e. they perform poorly when they are required to update their model as new classes become available but the original training set is not retained. This paper addresses this problem in the context of semantic segmentation. Current strategies fail on this task because they do not consider a peculiar aspect of semantic segmentation: since each training step provides annotation only for a subset of all possible classes, pixels of the background class (i.e. pixels that do not belong to any other class) exhibit a semantic distribution shift. In this work we revisit classical incremental learning methods, proposing a new distillation-based framework which explicitly accounts for this shift. Furthermore, we introduce a novel strategy to initialize the classifier's parameters, thus preventing biased predictions toward the background class. We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC 2012 and ADE20K datasets, significantly outperforming state-of-the-art incremental learning methods.
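The background-shift idea can be illustrated with a minimal sketch: a pixel labeled background in the current training step may in fact belong to an old class, so its loss term compares the label against the aggregate probability of background plus all old classes rather than background alone. The snippet below is a pure-Python illustration under an assumed class indexing (0 = background, then old classes, then new classes); the function names are illustrative and not the authors' code.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def background_aware_cross_entropy(logits, label, num_old):
    """Background-shift-aware cross-entropy for a single pixel.

    Assumed indexing: 0 = background, 1..num_old = old classes,
    num_old+1.. = classes introduced in the current step.
    A pixel labeled background may actually be an old class, so its
    probability mass is the sum of background and old-class probabilities.
    """
    probs = softmax(logits)
    if label == 0:
        p = sum(probs[: num_old + 1])  # background plus old classes
    else:
        p = probs[label]               # new classes keep the standard term
    return -math.log(p)
```

With uniform logits over 5 classes and 2 old classes, a background pixel incurs -log(3/5) instead of -log(1/5): the model is not penalized for assigning old-class probability to pixels that the current step's annotation lumps into the background.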

CVPR 2020

Results from the Paper

Task             Dataset           Model  Metric  Value  Global Rank
Domain 11-1      Cityscapes        MiB    mIoU    60.0   #3
Domain 1-1       Cityscapes        MiB    mIoU    42.2   #3
Domain 11-5      Cityscapes        MiB    mIoU    61.5   #3
Disjoint 15-1    PASCAL VOC 2012   MiB    mIoU    39.9   #6
Overlapped 10-1  PASCAL VOC 2012   MiB    mIoU    20.1   #10
Disjoint 15-5    PASCAL VOC 2012   MiB    mIoU    65.9   #5
Disjoint 10-1    PASCAL VOC 2012   MiB    mIoU    6.9    #6
