Unsupervised pre-training has proven to be an effective approach for boosting various downstream tasks when labeled data are limited.
This work considers semi-supervised segmentation as a dense prediction problem based on prototype vector correlation and proposes a simple way to represent each segmentation class with multiple prototypes.
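One way to instantiate the multi-prototype idea is to score each pixel against every prototype of every class and let a class's score be its best-matching prototype. The function below is an illustrative sketch under that assumption (the name `multi_prototype_predict` and the max-over-prototypes rule are ours, not necessarily the paper's exact formulation):

```python
import numpy as np

def multi_prototype_predict(features, prototypes):
    """Assign each pixel to the class whose best prototype is most similar.

    features:   (N, D) L2-normalized pixel embeddings (N = H*W pixels)
    prototypes: (C, K, D) K L2-normalized prototypes per class
    """
    # cosine similarity between every pixel and every prototype: (N, C, K)
    sims = np.einsum('nd,ckd->nck', features, prototypes)
    # a pixel's score for a class is its best match among that class's prototypes
    class_scores = sims.max(axis=2)     # (N, C)
    return class_scores.argmax(axis=1)  # (N,) predicted class per pixel
```

Using several prototypes per class lets one class cover multiple feature modes (e.g., different appearances of the same organ), which a single mean vector cannot.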
Pre-training a recognition model with contrastive learning on a large unlabeled dataset has shown great potential to boost the performance of a downstream task, e.g., image classification.
Despite their outstanding accuracy, semi-supervised segmentation methods based on deep neural networks can still yield predictions that are considered anatomically impossible by clinicians, for instance, containing holes or disconnected regions.
In this method, we maximize the MI of intermediate feature embeddings taken from both the encoder and the decoder of a segmentation network.
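MI between paired embeddings is commonly maximized through a tractable lower bound such as InfoNCE; the sketch below shows that standard bound as one plausible instantiation (the function name and temperature value are illustrative, and the source sentence does not specify which MI estimator is used):

```python
import numpy as np

def infonce_mi_lower_bound(z_a, z_b, temperature=0.1):
    """InfoNCE-style lower bound on I(z_a; z_b) for paired embeddings.

    z_a, z_b: (N, D) L2-normalized embeddings, row i of z_a paired with
    row i of z_b (e.g., encoder vs. decoder features of the same image).
    Maximizing the returned scalar maximizes a lower bound on the MI.
    """
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # average log-probability of the true pair, shifted by log N
    return log_probs.diagonal().mean() + np.log(len(z_a))
```

The bound is at most log N, which is why large batches are typically needed for a tight MI estimate.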
Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy.
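A minimal sketch of this combined objective, assuming the generalized JSD is written as the entropy of the mean prediction minus the mean of the per-network entropies, with an entropy penalty on the consensus added for confidence (the weight `alpha` and all names are illustrative):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy along the class axis."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def jsd_with_entropy_reg(probs, alpha=0.5):
    """Generalized Jensen-Shannon divergence across several networks'
    per-pixel class distributions, plus an entropy regularizer.

    probs: (M, N, C) — M networks, N pixels, C classes.
    """
    mean_p = probs.mean(axis=0)                          # (N, C) consensus
    # JSD = H(mean) - mean of H(p_m): zero iff all networks agree
    jsd = entropy(mean_p) - entropy(probs).mean(axis=0)  # (N,)
    # penalizing consensus entropy pushes predictions to be confident
    reg = entropy(mean_p)                                # (N,)
    return (jsd + alpha * reg).mean()
```

The JSD term alone is minimized by agreement, even agreement on a uniform distribution; the entropy term rules out that degenerate solution.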
The scarcity of labeled data often limits the application of deep learning to medical image segmentation.
The second, named Invariant Information Clustering (IIC), maximizes the mutual information between the clustering of a sample and its geometrically transformed version.
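The IIC objective can be computed in closed form from soft cluster assignments: build the empirical joint distribution over (cluster of sample, cluster of transformed sample), symmetrize it, and evaluate mutual information. A compact sketch of that published objective:

```python
import numpy as np

def iic_mutual_info(p, p_t, eps=1e-12):
    """Mutual-information objective of Invariant Information Clustering.

    p, p_t: (N, C) soft cluster assignments for N samples and their
    geometrically transformed versions. Training maximizes the returned value.
    """
    joint = p.T @ p_t / len(p)             # (C, C) empirical joint distribution
    joint = (joint + joint.T) / 2          # symmetrize, as in IIC
    pi = joint.sum(axis=1, keepdims=True)  # marginal over rows
    pj = joint.sum(axis=0, keepdims=True)  # marginal over columns
    return (joint * (np.log(joint + eps)
                     - np.log(pi + eps)
                     - np.log(pj + eps))).sum()
```

Maximizing this MI simultaneously encourages invariance to the transformation (sharp joint diagonal) and balanced cluster usage (high marginal entropy), avoiding the collapse to a single cluster.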
An efficient strategy for weakly-supervised segmentation is to impose constraints or regularization priors on target regions.
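A common example of such a prior is a size constraint: penalize the soft predicted size of the target region when it falls outside known anatomical bounds. The quadratic penalty below is one illustrative form of this idea (bounds and function name are assumptions, not taken from the source):

```python
import numpy as np

def size_constraint_penalty(fg_probs, lower, upper):
    """Quadratic penalty enforcing a prior on predicted region size.

    fg_probs: (H, W) soft foreground probabilities for one image.
    lower, upper: prior bounds on foreground size, in pixels.
    Zero whenever the soft size lies inside [lower, upper].
    """
    size = fg_probs.sum()  # differentiable soft size of the region
    if size < lower:
        return (size - lower) ** 2
    if size > upper:
        return (size - upper) ** 2
    return 0.0
```

Because the penalty acts only on the network output, it needs no pixel-level annotation, which is what makes such priors attractive in the weakly-supervised setting.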
In this paper, we aim to improve the performance of semantic image segmentation in a semi-supervised setting, in which training is performed with a reduced set of annotated images plus additional non-annotated images.