Few-Shot Semantic Segmentation

6 papers with code · Computer Vision

Unsupervised Deep Learning for Bayesian Brain MRI Segmentation

25 Apr 2019 · voxelmorph/voxelmorph

To develop a deep learning-based segmentation model for a new image dataset (e.g., of a different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal ad hoc adaptation or augmentation approaches.

★ 738

Few-shot 3D Multi-modal Medical Image Segmentation using Generative Adversarial Learning

29 Oct 2018 · arnab39/FewShot_GAN-Unet3D

In addition, our work presents a comprehensive analysis of different GAN architectures for semi-supervised segmentation, showing that recent techniques such as feature matching yield higher performance than conventional adversarial training approaches.

★ 210
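The feature-matching technique highlighted in this entry (introduced for GAN training by Salimans et al., 2016) trains the generator to match the mean activation of an intermediate discriminator layer on real versus generated data, rather than to fool the discriminator directly. A minimal NumPy sketch, with toy batches and function names that are illustrative rather than taken from the repository:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    # Match first-order statistics of an intermediate discriminator layer:
    # the generator is penalized by the squared distance between the mean
    # feature activation on real data and on generated data.
    real_mean = real_feats.mean(axis=0)
    fake_mean = fake_feats.mean(axis=0)
    return np.sum((real_mean - fake_mean) ** 2)

# Toy batches of 2-D "features"; rows are examples.
real = np.array([[1.0, 2.0], [3.0, 4.0]])
fake_same_mean = np.array([[2.0, 3.0], [2.0, 3.0]])  # same column means as `real`
```

Because only batch means are compared, two batches with identical means incur zero loss even if individual examples differ.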

FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation

29 Jul 2019 · HKUSTCV/FSS-1000

In this paper, we are interested in few-shot object segmentation where the number of annotated training examples is limited to only 5.

★ 106
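In the k-shot setting this entry describes, each training episode exposes the model to only k annotated support examples of a class plus held-out queries. A minimal sketch of episode construction, where the dataset layout and names are assumed for illustration and are not FSS-1000's actual loader:

```python
import random

def sample_episode(dataset, k_shot=5, q_queries=1):
    # dataset: class name -> list of (image, mask) pairs (assumed layout).
    cls = random.choice(sorted(dataset))
    # Draw support + query pairs without replacement from the chosen class.
    pairs = random.sample(dataset[cls], k_shot + q_queries)
    # First k pairs form the annotated support set; the rest are queries.
    return pairs[:k_shot], pairs[k_shot:]

# Hypothetical tiny dataset with one class and ten annotated pairs.
toy = {"aeroplane": [(f"img_{i}.jpg", f"mask_{i}.png") for i in range(10)]}
support, query = sample_episode(toy, k_shot=5, q_queries=2)
```

Sampling without replacement keeps the support and query sets disjoint, so evaluation within an episode never sees a support example.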

Adaptive Masked Proxies for Few-Shot Segmentation

19 Feb 2019 · MSiam/AdaptiveMaskedProxies

Our method is evaluated on the PASCAL-$5^i$ dataset and outperforms the state of the art in few-shot semantic segmentation.

★ 69

On the Texture Bias for Few-Shot CNN Segmentation

9 Mar 2020 · rezazad68/fewshot-segmentation

Despite the initial belief that Convolutional Neural Networks (CNNs) are driven by shapes to perform visual recognition tasks, recent evidence suggests that texture bias in CNNs provides higher performing and more robust models.

★ 39

Meta-Learning Initializations for Image Segmentation

Our primary contributions include (1) an extension and experimental analysis of first-order model-agnostic meta-learning algorithms (including FOMAML and Reptile) for image segmentation, (2) a novel neural network architecture built for parameter efficiency and fast learning, which we call EfficientLab, (3) a formalization of the generalization error of meta-learning algorithms, which we leverage to decrease error on unseen tasks, and (4) a small benchmark dataset, FP-k, for the empirical study of how meta-learning systems perform in both few- and many-shot settings.

★ 0
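The first-order meta-learners this entry extends, FOMAML and Reptile, avoid second-derivative computation entirely. As a rough illustration of the idea (not the paper's EfficientLab pipeline), a NumPy sketch of one Reptile meta-update on a hypothetical toy quadratic task:

```python
import numpy as np

def reptile_step(weights, grad_fn, inner_lr=0.01, meta_lr=1.0, inner_steps=5):
    # Inner loop: plain SGD on the sampled task's loss.
    adapted = weights.copy()
    for _ in range(inner_steps):
        adapted = adapted - inner_lr * grad_fn(adapted)
    # Reptile outer update: move the meta-initialization toward the
    # task-adapted weights (first-order only; no second derivatives).
    return weights + meta_lr * (adapted - weights)

# Hypothetical toy "task": minimize ||w - target||^2, with gradient 2(w - target).
target = np.array([1.0, -2.0])
grad_fn = lambda w: 2.0 * (w - target)

w = np.zeros(2)
for _ in range(100):
    w = reptile_step(w, grad_fn)
```

With a single task the update simply converges toward that task's optimum; the meta-learning benefit appears when tasks are sampled from a distribution, so the initialization lands close to all of their optima at once.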