Semi-Supervised Semantic Segmentation
55 papers with code • 26 benchmarks • 6 datasets
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
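The Mean Teacher idea above combines two pieces: teacher weights maintained as an exponential moving average (EMA) of the student's weights, and a consistency loss between the two models' predictions on differently perturbed inputs. A minimal NumPy sketch of those two pieces, with a toy linear "model" standing in for the real network (all names and noise scales here are illustrative, not from the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one linear layer; its weight matrix stands in for a full network.
student_w = rng.normal(size=(4, 3))
teacher_w = student_w.copy()  # the teacher starts as a copy of the student

def predict(w, x):
    """Softmax predictions of the toy linear model."""
    logits = x @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights are an exponential moving average of student weights."""
    return alpha * teacher + (1 - alpha) * student

# Unlabeled batch, perturbed differently for student and teacher.
x = rng.normal(size=(8, 4))
x_student = x + 0.1 * rng.normal(size=x.shape)
x_teacher = x + 0.1 * rng.normal(size=x.shape)

# Consistency loss: mean squared error between the two prediction sets.
consistency = np.mean((predict(student_w, x_student)
                       - predict(teacher_w, x_teacher)) ** 2)

# One EMA step after a (hypothetical) gradient update of the student.
student_w -= 0.01 * rng.normal(size=student_w.shape)  # stand-in for an SGD step
teacher_w = ema_update(teacher_w, student_w)
```

In training, the consistency term is added to the usual supervised loss on the labeled subset, and the EMA update runs after every optimizer step.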
To leverage the unlabeled examples, we enforce consistency between the main decoder's predictions and those of auxiliary decoders that take differently perturbed versions of the encoder's output as input, thereby improving the encoder's representations.
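The cross-consistency scheme described above can be sketched in a few lines of NumPy: one shared encoder, a main decoder applied to the clean features, and several auxiliary decoders applied to perturbed copies of those features. The linear encoder/decoders and the Gaussian feature-space perturbation are stand-ins for illustration, not the paper's actual architecture or perturbation set:

```python
import numpy as np

rng = np.random.default_rng(1)

ENC_W = rng.normal(size=(4, 6))                    # shared encoder weights
main_w = rng.normal(size=(6, 3))                   # main decoder weights
aux_ws = [rng.normal(size=(6, 3)) for _ in range(3)]  # auxiliary decoders

def encoder(x):
    """Stand-in encoder: a fixed random projection with a nonlinearity."""
    return np.tanh(x @ ENC_W)

def decode(z, w):
    """Stand-in decoder: linear map to class scores followed by softmax."""
    logits = z @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x_unlabeled = rng.normal(size=(8, 4))
z = encoder(x_unlabeled)
p_main = decode(z, main_w)  # main decoder sees the clean encoder output

# Each auxiliary decoder sees a differently perturbed version of z;
# the consistency loss pulls its predictions toward the main decoder's.
consistency = 0.0
for w in aux_ws:
    z_pert = z + 0.1 * rng.normal(size=z.shape)  # illustrative feature noise
    consistency += np.mean((p_main - decode(z_pert, w)) ** 2)
consistency /= len(aux_ws)
```

Because the consistency gradient flows back through the shared encoder from every auxiliary branch, the unlabeled data shapes the encoder's representation even though only the main branch is trained with labels.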
We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations.
Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently pushed the state of the art in semantic image segmentation significantly forward.
In this paper we illustrate how to perform both visual object tracking and semi-supervised video object segmentation, in real time, with a single simple approach.
We analyze the problem of semantic segmentation and find that its distribution does not exhibit low-density regions separating classes; we offer this as an explanation for why semi-supervised segmentation is a challenging problem, with only a few reports of success.
Our approach imposes a consistency constraint on two segmentation networks with different random initializations, applied to the same input image.
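The two-network consistency above can be sketched as cross pseudo-supervision: each network's hard pseudo-label supervises the other network's softmax output on the same unlabeled image. The toy linear networks below are illustrative stand-ins for the two identically structured, differently initialized segmentation networks:

```python
import numpy as np

rng = np.random.default_rng(2)

def net(x, w):
    """Stand-in network: linear map to class scores followed by softmax."""
    logits = x @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Same architecture, two different random initializations.
w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(4, 3))

x = rng.normal(size=(8, 4))        # one unlabeled batch, fed to both networks
p1, p2 = net(x, w1), net(x, w2)

# Hard pseudo-labels from each network.
y1 = p1.argmax(axis=1)
y2 = p2.argmax(axis=1)

# Cross-entropy in both directions: network 1 learns from network 2's
# pseudo-labels and vice versa.
idx = np.arange(len(x))
loss_1 = -np.mean(np.log(p1[idx, y2] + 1e-8))
loss_2 = -np.mean(np.log(p2[idx, y1] + 1e-8))
cross_loss = loss_1 + loss_2
```

Because the two initializations produce different errors, each network provides a noisy but complementary training signal for the other, which is what makes the perturbation-by-initialization effective.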