One-Shot Segmentation in Clutter

We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example. We propose a novel dataset, which we call $\textit{cluttered Omniglot}$. Using a baseline architecture that combines a Siamese embedding for detection with a U-Net for segmentation, we show that increasing levels of clutter make the task progressively harder. Using oracle models with access to varying amounts of ground-truth information, we evaluate different aspects of the problem and show that in this kind of visual search task, detection and segmentation are two intertwined problems, the solution to each of which helps solve the other. We therefore introduce $\textit{MaskNet}$, an improved model that attends to multiple candidate locations, generates segmentation proposals to mask out background clutter, and selects among the segmented objects. Our findings suggest that image recognition models based on such an iterative refinement of object detection and foreground segmentation may provide a way to deal with highly cluttered scenes.
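The mask-and-select idea behind MaskNet can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `embed` here is a stand-in (flatten and L2-normalize) for a learned Siamese embedding, and the proposals are hand-made binary masks. Each proposal masks out background clutter, the masked scene is embedded, and the candidate whose embedding best matches the instruction example is selected.

```python
import numpy as np

def embed(patch):
    # Stand-in for a learned Siamese embedding: flatten and L2-normalize.
    v = patch.ravel().astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def select_candidate(scene, proposals, target):
    """Mask the scene with each segmentation proposal, embed the masked
    result, and return the index of the proposal whose embedding has the
    highest cosine similarity to the embedded instruction example."""
    t = embed(target)
    scores = [float(embed(scene * m) @ t) for m in proposals]
    return int(np.argmax(scores)), scores

# Toy example: a 4x4 "scene" with the target in the top-left corner
# and a distractor (clutter) in the bottom-right corner.
target = np.zeros((4, 4)); target[:2, :2] = 1.0
scene = target.copy(); scene[2:, 2:] = 1.0
proposals = [np.zeros((4, 4)), np.zeros((4, 4))]
proposals[0][:2, :2] = 1.0   # proposal covering the target
proposals[1][2:, 2:] = 1.0   # proposal covering the clutter
best, scores = select_candidate(scene, proposals, target)  # best == 0
```

With the clutter masked out by the first proposal, the remaining content matches the instruction example exactly, so that candidate wins; the second proposal isolates the distractor and scores near zero.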

Published at ICML 2018.

Datasets


Introduced in the Paper:

Cluttered Omniglot

Used in the Paper:

ADE20K, Omniglot
Benchmark: One-Shot Segmentation on Cluttered Omniglot (metric: IoU)

Model          IoU [4 distractors]   IoU [32 distractors]   IoU [256 distractors]
MaskNet        95.8  (rank #2)       65.6  (rank #1)        43.7  (rank #1)
Siamese-U-Net  97.1  (rank #1)       62.4  (rank #2)        38.4  (rank #2)
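The IoU (intersection over union) scores reported above compare a predicted segmentation mask against the ground-truth mask. A minimal sketch, assuming binary masks as NumPy arrays:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: define IoU as 1
    return np.logical_and(pred, gt).sum() / union

# Toy example: intersection covers 1 pixel, union covers 3 pixels.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
score = iou(a, b)  # 1/3
```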
