
Domain Adaptive Medical Image Segmentation via Adversarial Learning of Disease-Specific Spatial Patterns

In medical imaging, the heterogeneity of multi-centre data impedes the applicability of deep learning-based methods and results in significant performance degradation when models are applied to an unseen data domain, e.g. a new centre or a new scanner. In this paper, we propose an unsupervised domain adaptation framework for boosting image segmentation performance across multiple domains without using any manual annotations from the new target domains, but instead by re-calibrating the networks on a few images from the target domain. To achieve this, we enforce architectures to be adaptive to new data by rejecting improbable segmentation patterns and by implicitly learning through semantic and boundary information, thereby capturing disease-specific spatial patterns in an adversarial optimization. The adaptation process needs continuous monitoring; however, since we cannot assume the presence of ground-truth masks for the target domain, we propose two new metrics to monitor the adaptation process, together with strategies for training the segmentation algorithm in a stable fashion. We build upon well-established 2D and 3D architectures and perform extensive experiments on three cross-centre brain lesion segmentation tasks, involving multi-centre public and in-house datasets. We demonstrate that re-calibrating the deep networks on a few unlabeled images from the target domain improves the segmentation accuracy significantly.
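The abstract describes an adversarial recalibration scheme in which a discriminator learns to reject improbable segmentation patterns while the segmenter is tuned on a few unlabeled target-domain images. Below is a minimal PyTorch sketch of such an output-space adversarial loop; the `recalibrate` function, the network handles `segmenter` and `discriminator`, the optimizer settings, and the loss formulation are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def recalibrate(segmenter, discriminator, src_masks, tgt_images,
                steps=100, lr=1e-4):
    """Hypothetical adversarial recalibration on a few target images.

    src_masks:  iterable of source-domain ground-truth masks (plausible patterns)
    tgt_images: iterable of unlabeled target-domain images
    """
    opt_s = torch.optim.Adam(segmenter.parameters(), lr=lr)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(steps):
        # Pairing of source masks and target images is arbitrary here;
        # this is only a sketch of the alternating optimization.
        for masks, images in zip(src_masks, tgt_images):
            pred = torch.sigmoid(segmenter(images))  # target prediction

            # 1) Discriminator step: source masks are treated as "real"
            #    (plausible spatial patterns), target predictions as "fake".
            d_real = discriminator(masks)
            d_fake = discriminator(pred.detach())
            loss_d = (bce(d_real, torch.ones_like(d_real)) +
                      bce(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # 2) Segmenter step: fool the discriminator, i.e. produce
            #    segmentations indistinguishable from plausible patterns.
            d_adv = discriminator(pred)
            loss_s = bce(d_adv, torch.ones_like(d_adv))
            opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

In this reading, no target-domain labels enter the loop: the only supervision on the target images comes from the discriminator's notion of what a plausible segmentation looks like, which matches the paper's stated goal of adaptation without manual annotations.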
