Multi-Scale Context Aggregation by Dilated Convolutions

23 Nov 2015 · Fisher Yu, Vladlen Koltun

State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.
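The resolution-preserving, exponentially growing receptive field is easy to verify in code. Below is a minimal PyTorch sketch of the idea behind the context module (an illustration for this note, not the paper's original implementation; the channel width and depth are illustrative assumptions): stacking 3x3 convolutions with dilations 1, 2, 4, 8, 16 yields a 63x63 receptive field while the feature map keeps its spatial size.

```python
import torch
import torch.nn as nn

def dilated_context_stack(channels: int = 16, depth: int = 5) -> nn.Sequential:
    """Stack of 3x3 convolutions with exponentially increasing dilation.

    Illustrative sketch only: channel width and depth are assumptions,
    not the paper's exact context-module configuration.
    """
    layers = []
    for i in range(depth):
        d = 2 ** i  # dilation rates: 1, 2, 4, 8, 16
        # padding = d keeps output spatial size equal to input size
        # ("no loss of resolution"); each layer adds 2*d to the
        # receptive field, so RF = 1 + 2*(1+2+4+8+16) = 63.
        layers += [
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)

x = torch.randn(1, 16, 64, 64)
y = dilated_context_stack()(x)
print(y.shape)  # torch.Size([1, 16, 64, 64]) -- resolution preserved
```

Because the padding matches the dilation rate at every layer, spatial dimensions are untouched throughout the stack; the receptive field grows exponentially with depth while coverage of the input stays dense, which is the property the context module exploits.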


Results from the Paper


Task                             Dataset               Model                 Metric            Value   Global Rank
Semantic Segmentation            ADE20K                DilatedNet            Validation mIoU   32.31   #220
Real-Time Semantic Segmentation  CamVid                Dilation10            mIoU              65.3%   #23
Real-Time Semantic Segmentation  CamVid                Dilation10            Time (ms)         227     #19
Real-Time Semantic Segmentation  CamVid                Dilation10            Frame rate (fps)  4.4     #17
Semantic Segmentation            CamVid                Dilated Convolutions  Mean IoU          65.3%   #11
Semantic Segmentation            Cityscapes test       Dilation10            Mean IoU (class)  67.1%   #88
Semantic Segmentation            PASCAL VOC 2012 test  Dilated Convolutions  Mean IoU          67.6%   #42
