Generalizable Model-agnostic Semantic Segmentation via Target-specific Normalization

Supervised semantic segmentation has achieved significant progress in recent years. However, when a trained model is directly deployed to segment images from unseen (or newly arriving) domains, its performance usually drops dramatically due to the data-distribution discrepancy between seen and unseen domains. To address this issue, we propose a novel domain generalization framework for generalizable semantic segmentation, which enhances the generalization ability of the model from two views: the training paradigm and the test strategy. Concretely, we exploit model-agnostic learning to simulate the domain shift problem, which tackles domain generalization from the training-scheme perspective. In addition, considering the data-distribution discrepancy between the seen source and unseen target domains, we develop a target-specific normalization scheme to further enhance generalization. Furthermore, since test images arrive one by one, we design an image-based memory bank (Image Bank for short) with a style-based selection policy that retrieves similar images to obtain more accurate normalization statistics. Extensive experiments show that the proposed method achieves state-of-the-art performance for domain generalization of semantic segmentation on multiple benchmark datasets, i.e., Cityscapes and Mapillary.
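
The test-time component can be pictured with a minimal sketch: maintain a small image-based memory bank of style descriptors, select the stored entries most similar in style to the incoming test image, and normalize features with statistics aggregated from those entries instead of the source-domain running statistics. The descriptor used below (channel-wise feature mean/std) and the Euclidean similarity are illustrative assumptions; the paper's actual selection policy and normalization details may differ.

```python
# Hypothetical sketch of a style-based Image Bank for target-specific normalization.
import torch


def style_descriptor(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise mean and std of a feature map (1, C, H, W) -> (2C,)."""
    mu = feat.mean(dim=(2, 3)).squeeze(0)
    sigma = feat.std(dim=(2, 3)).squeeze(0)
    return torch.cat([mu, sigma])


class ImageBank:
    """Stores (style descriptor, channel statistics) for recent test images."""

    def __init__(self, capacity: int = 64, top_k: int = 8):
        self.capacity, self.top_k = capacity, top_k
        self.styles, self.stats = [], []  # parallel lists

    def add(self, feat: torch.Tensor) -> None:
        if len(self.styles) >= self.capacity:  # FIFO eviction when full
            self.styles.pop(0)
            self.stats.pop(0)
        self.styles.append(style_descriptor(feat))
        self.stats.append((feat.mean(dim=(0, 2, 3)), feat.var(dim=(0, 2, 3))))

    def target_statistics(self, feat: torch.Tensor):
        """Aggregate mean/var over the top-k style-nearest stored images."""
        if not self.styles:  # empty bank: fall back to the image's own stats
            return feat.mean(dim=(0, 2, 3)), feat.var(dim=(0, 2, 3))
        query = style_descriptor(feat)
        dists = torch.stack([(query - s).norm() for s in self.styles])
        idx = dists.topk(min(self.top_k, len(dists)), largest=False).indices
        mean = torch.stack([self.stats[i][0] for i in idx.tolist()]).mean(dim=0)
        var = torch.stack([self.stats[i][1] for i in idx.tolist()]).mean(dim=0)
        return mean, var


def target_specific_normalize(feat, bank: ImageBank, gamma, beta, eps=1e-5):
    """Normalize features with statistics estimated from style-similar images."""
    mean, var = bank.target_statistics(feat)
    bank.add(feat)
    feat_hat = (feat - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + eps)
    return gamma[None, :, None, None] * feat_hat + beta[None, :, None, None]


if __name__ == "__main__":
    C = 16
    bank = ImageBank()
    gamma, beta = torch.ones(C), torch.zeros(C)  # affine parameters from a trained norm layer
    for _ in range(5):  # test images arrive one by one
        feat = torch.randn(1, C, 32, 32)  # stand-in for a backbone feature map
        out = target_specific_normalize(feat, bank, gamma, beta)
    print(out.shape)  # torch.Size([1, 16, 32, 32])
```

The design choice being illustrated is that normalization statistics are estimated per target image from its style-nearest neighbors in the bank, rather than frozen from the source domain, which is how the abstract describes mitigating the source/target distribution discrepancy at test time.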
