Fully Convolutional Networks for Semantic Segmentation

CVPR 2015  ·  Evan Shelhamer, Jonathan Long, Trevor Darrell

Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image.
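The skip architecture described above fuses predictions in a simple way: the deep, coarse class-score maps are upsampled (FCN initializes its learnable deconvolution filters to bilinear interpolation) and summed element-wise with score maps predicted from a shallower layer. A minimal NumPy sketch of this fuse step, using hypothetical toy score maps and spatial sizes (the function names and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def upsample_bilinear(x, factor):
    """2-D bilinear upsampling of a single score map; FCN initializes
    its learnable in-network upsampling filters to this interpolation."""
    h, w = x.shape
    ys = np.clip((np.arange(h * factor) + 0.5) / factor - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * factor) + 0.5) / factor - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def fuse_skip(coarse_scores, fine_scores, factor=2):
    """FCN-16s-style skip fusion: upsample the deeper (coarser) per-class
    score maps, then sum with scores predicted from a shallower layer."""
    up = np.stack([upsample_bilinear(c, factor) for c in coarse_scores])
    return up + fine_scores

# Toy example: 3 classes, a 4x4 coarse map (e.g. from the final layer)
# fused with an 8x8 map (e.g. scores predicted from pool4).
rng = np.random.default_rng(0)
coarse = rng.normal(size=(3, 4, 4))
fine = rng.normal(size=(3, 8, 8))
fused = fuse_skip(coarse, fine)
labels = fused.argmax(axis=0)  # per-pixel class prediction, shape (8, 8)
```

In the paper this fusion is learned end-to-end (the upsampling filters and the scoring convolutions are trainable); the sketch only shows the dataflow that lets coarse semantics and fine appearance combine.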


Results from the Paper

Ranked #2 on Semantic Segmentation on NYU Depth v2 (Mean Accuracy metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | Cityscapes test | FCN | Mean IoU (class) | 65.3% | #93 |
| Video Semantic Segmentation | Cityscapes val | FCN-50 [14] | mIoU | 70.1 | #8 |
| Semantic Segmentation | NYU Depth v2 | FCN-32s RGB-HHA | Mean Accuracy | 44 | #2 |
| Semantic Segmentation | PASCAL VOC 2011 test | FCN-pool4 | Mean IoU | 22.4 | #3 |
| Semantic Segmentation | PASCAL VOC 2011 test | FCN-VGG16 | Mean IoU | 32 | #2 |
| Scene Segmentation | SUN-RGBD | FCN | Mean IoU | 27.39 | #5 |