SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

2 Nov 2015  ·  Vijay Badrinarayanan, Alex Kendall, Roberto Cipolla ·

We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network, and a final pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network. The role of the decoder network is to map the low-resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN and also with the well-known DeepLab-LargeFOV and DeconvNet architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient in both memory and computational time during inference. It also has significantly fewer trainable parameters than other competing architectures. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. We show that SegNet provides good performance with competitive inference time and lower inference memory compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.
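The decoder mechanism described above can be illustrated with a small sketch. This is not the authors' Caffe implementation; it is a minimal NumPy illustration (function names `max_pool_with_indices` and `max_unpool` are invented for this example) of the idea: the encoder's max-pooling step records the argmax location within each 2x2 window, and the decoder scatters each pooled value back to its recorded full-resolution position, producing the sparse map that trainable convolutions then densify.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k x k max pool; returns pooled values and flat argmax indices
    expressed in the coordinates of the full-resolution input map."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k), dtype=x.dtype)
    indices = np.zeros((h // k, w // k), dtype=np.int64)
    for i in range(h // k):
        for j in range(w // k):
            window = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            flat = int(np.argmax(window))
            pooled[i, j] = window.flat[flat]
            # record where the max sat in the full-resolution map
            indices[i, j] = (i * k + flat // k) * w + (j * k + flat % k)
    return pooled, indices

def max_unpool(pooled, indices, out_shape):
    """SegNet-style decoder upsampling: place each pooled value back at
    its recorded position; every other entry stays zero (sparse map)."""
    out = np.zeros(out_shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

# Toy 4x4 feature map
x = np.array([[1., 2., 5., 3.],
              [4., 0., 1., 2.],
              [7., 8., 3., 1.],
              [2., 6., 0., 4.]])
p, idx = max_pool_with_indices(x)        # p == [[4., 5.], [8., 4.]]
y = max_unpool(p, idx, x.shape)          # sparse 4x4 map, maxima restored in place
```

Because only the argmax positions are stored (a few bits per pooled value) rather than full feature maps or learned deconvolution weights, this is the source of the memory efficiency the abstract highlights.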


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | ADE20K | SegNet | Validation mIoU | 21.64 | # 222 |
| Lesion Segmentation | Anatomical Tracings of Lesions After Stroke (ATLAS) | SegNet | Dice | 0.2767 | # 3 |
| Lesion Segmentation | Anatomical Tracings of Lesions After Stroke (ATLAS) | SegNet | IoU | 0.1911 | # 3 |
| Lesion Segmentation | Anatomical Tracings of Lesions After Stroke (ATLAS) | SegNet | Precision | 0.3938 | # 3 |
| Lesion Segmentation | Anatomical Tracings of Lesions After Stroke (ATLAS) | SegNet | Recall | 0.2532 | # 3 |
| Semantic Segmentation | CamVid | SegNet | Mean IoU | 46.4% | # 19 |
| Real-Time Semantic Segmentation | CamVid | SegNet | mIoU | 46.4% | # 26 |
| Real-Time Semantic Segmentation | CamVid | SegNet | Time (ms) | 217 | # 18 |
| Real-Time Semantic Segmentation | CamVid | SegNet | Frame (fps) | 4.6 | # 16 |
| Semantic Segmentation | Cityscapes test | SegNet | Mean IoU (class) | 57.0% | # 100 |
| Thermal Image Segmentation | MFN Dataset | SegNet | mIoU | 42.3 | # 45 |
| Medical Image Segmentation | RITE | SegNet | Dice | 52.23 | # 3 |
| Medical Image Segmentation | RITE | SegNet | Jaccard Index | 39.14 | # 2 |
| Semantic Segmentation | SkyScapes-Dense | SegNet | Mean IoU | 23.14 | # 6 |
| Scene Segmentation | SUN-RGBD | SegNet | Mean IoU | 31.84 | # 4 |
| Lesion Segmentation | University of Waterloo skin cancer database | SegNet | Dice score | 0.854 ±0.088 | # 4 |

Results from Other Papers


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|---|---|---|---|---|---|
| Crowd Counting | UCF-QNRF | Encoder-Decoder | MAE | 270 | # 17 |
