Few-Shot Semantic Segmentation
74 papers with code • 12 benchmarks • 4 datasets
Few-shot semantic segmentation (FSS) learns to segment target objects in a query image given only a few pixel-wise annotated support images.
Most implemented papers
SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
In this way, the probabilities embedded in the produced similarity maps can guide the process of segmenting objects.
Few-shot 3D Multi-modal Medical Image Segmentation using Generative Adversarial Learning
In addition, our work presents a comprehensive analysis of different GAN architectures for semi-supervised segmentation, showing recent techniques like feature matching to yield a higher performance than conventional adversarial training approaches.
Adaptive Masked Proxies for Few-Shot Segmentation
Our method is evaluated on the PASCAL-$5^i$ dataset and outperforms the state of the art in few-shot semantic segmentation.
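The "masked proxies" in this entry refer to class vectors imprinted from support features. A minimal NumPy sketch of the underlying masked-average-pooling idea, with illustrative shapes and names (not the paper's actual implementation):

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Form a class proxy by averaging support features over the
    (downsampled) foreground mask.

    features: (C, H, W) support feature map
    mask:     (H, W) binary foreground mask
    returns:  (C,) class proxy vector
    """
    weights = mask / max(mask.sum(), 1e-8)               # normalize mask to sum to 1
    return (features * weights[None]).sum(axis=(1, 2))   # weighted spatial average

# toy example: 4-channel features on a 3x3 grid, one foreground pixel
feats = np.arange(36, dtype=float).reshape(4, 3, 3)
mask = np.zeros((3, 3))
mask[1, 1] = 1.0
proxy = masked_average_pooling(feats, mask)   # picks out the features at (1, 1)
```

In the paper this proxy is imprinted as a classifier weight, so novel classes can be segmented without retraining.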
CANet: Class-Agnostic Segmentation Networks with Iterative Refinement and Attentive Few-Shot Learning
Recent progress in semantic segmentation is driven by deep Convolutional Neural Networks and large-scale labeled image datasets.
Unsupervised Deep Learning for Bayesian Brain MRI Segmentation
To develop a deep learning-based segmentation model for a new image dataset (e.g., of a different contrast), one usually needs to create a new labeled training dataset, which can be prohibitively expensive, or rely on suboptimal ad hoc adaptation or augmentation approaches.
FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation
In this paper, we are interested in few-shot object segmentation, where the number of annotated training examples is limited to only five.
Feature Weighting and Boosting for Few-Shot Segmentation
Finally, the target object is segmented in the query image by using a cosine similarity between the class feature vector and the query's feature map.
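The cosine-similarity step described here is common to several entries on this page: every query location is scored against the support-derived class vector. A hedged NumPy sketch with illustrative shapes (the `0.5` threshold is an assumption, not the paper's):

```python
import numpy as np

def cosine_similarity_map(class_vec, query_feats, eps=1e-8):
    """Score each query location by cosine similarity to the class vector.

    class_vec:   (C,) class feature vector from the support image
    query_feats: (C, H, W) query feature map
    returns:     (H, W) similarity map in [-1, 1]
    """
    c_norm = class_vec / (np.linalg.norm(class_vec) + eps)
    q_norm = query_feats / (np.linalg.norm(query_feats, axis=0, keepdims=True) + eps)
    return np.einsum("c,chw->hw", c_norm, q_norm)

# toy query: only location (0, 0) matches the class vector
vec = np.array([1.0, 0.0])
query = np.zeros((2, 2, 2))
query[:, 0, 0] = [1.0, 0.0]
query[:, 1, 1] = [0.0, 1.0]
sim = cosine_similarity_map(vec, query)
seg = sim > 0.5   # threshold into a binary segmentation mask
```

Because the similarity is computed against a class vector rather than learned class weights, the same scoring works for novel classes at test time.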
On the Texture Bias for Few-Shot CNN Segmentation
Despite the initial belief that Convolutional Neural Networks (CNNs) are driven by shapes to perform visual recognition tasks, recent evidence suggests that texture bias in CNNs provides higher performing models when learning on large labeled training datasets.
Objectness-Aware Few-Shot Semantic Segmentation
We demonstrate how to increase overall model capacity to achieve improved performance, by introducing objectness, which is class-agnostic and so not prone to overfitting, for complementary use with class-specific features.