Cut out the annotator, keep the cutout: better segmentation with weak supervision

Constructing large labeled datasets for training segmentation models is an expensive and labor-intensive process. This is a common challenge in machine learning, addressed by methods that require few or no labeled data points, such as few-shot learning (FSL) and weakly supervised learning (WS). Such techniques, however, have limitations when applied to image segmentation: it is difficult to inject multiple forms of knowledge into FSL and other limited-label learning techniques, while WS models struggle to fully exploit rich image information. We propose a framework that fuses limited-label learning and weak supervision for segmentation tasks, enabling users to train high-performing segmentation CNNs with very few hand-labeled training points. We use CNNs trained with limited labels as weak sources, requiring only a very small set of reference labeled images, and introduce a new WS model that fuses these weak sources by focusing on key areas of the image---areas with contention among the noisy labels. Empirically, we evaluate our proposed approach on seven well-motivated segmentation tasks. We show that our methods can achieve performance within 2 Dice points of fully supervised networks while requiring only five hand-labeled training points. Compared to existing limited-label supervision methods, including solo few-shot and data augmentation approaches, our approach improves performance by a mean of 5 Dice points over the next best method. Finally, we explore the tradeoffs of these various supervision methods, including how well each approach generalizes to new test sets and how well each approach leverages additional unlabeled training data.
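The fusion step described above can be illustrated with a minimal sketch. The paper's WS model learns to weight noisy sources, focusing on contention regions; the code below is a simplified stand-in that fuses per-pixel binary masks from several weak CNN sources by majority vote and flags the contention pixels where sources disagree. The function name and threshold behavior are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_weak_masks(masks):
    """Fuse binary segmentation masks from several weak sources.

    masks: array-like of shape (n_sources, H, W) with values in {0, 1}.
    Pixels where all sources agree are kept directly; "contention"
    pixels (where sources disagree) are resolved by majority vote.
    NOTE: a simplified stand-in for the paper's learned WS label
    model, which instead estimates and weights source accuracies.
    """
    masks = np.asarray(masks)
    votes = masks.mean(axis=0)               # fraction of sources voting 1
    contention = (votes > 0) & (votes < 1)   # sources disagree at these pixels
    fused = (votes >= 0.5).astype(np.uint8)  # majority vote everywhere
    return fused, contention

# Toy example: three 2x2 "weak" masks from hypothetical weak CNN sources.
m = [
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[1, 0], [1, 0]],
]
fused, contention = fuse_weak_masks(m)
# fused keeps unanimous pixels and settles disagreements by majority;
# contention marks the pixels a learned label model would focus on.
```

In the actual framework, the contention mask would guide a learned label model rather than a flat majority vote, so that more accurate sources dominate exactly where sources conflict.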
