Wide Residual Networks

23 May 2016 · Sergey Zagoruyko, Nikos Komodakis

Deep residual networks were shown to scale up to thousands of layers while still improving in performance. However, each fraction of a percent of additional accuracy costs nearly a doubling of the number of layers, and training very deep residual networks suffers from diminishing feature reuse, which makes these networks very slow to train...
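The paper's remedy is to widen residual blocks (more channels per layer) rather than stack more of them. A minimal sketch of the parameter arithmetic behind that trade-off (function names here are illustrative, not from the paper): widening a block by a factor k grows its parameter count roughly quadratically in k, whereas adding blocks grows parameters only linearly in depth.

```python
def conv3x3_params(c_in, c_out):
    """Weight count of a 3x3 convolution (bias omitted, as is usual with BatchNorm)."""
    return 3 * 3 * c_in * c_out

def basic_block_params(channels, k):
    """Parameters of a basic residual block (two 3x3 convs) widened by factor k.

    `channels` is the block's base width; `k` is the widening factor,
    so the block operates on channels * k feature maps.
    """
    width = channels * k
    return 2 * conv3x3_params(width, width)

# Widening multiplies a block's parameters by ~k**2, while extra depth
# only adds parameters linearly in the number of blocks.
base = basic_block_params(16, 1)   # 2 * 9 * 16 * 16   = 4608
wide = basic_block_params(16, 10)  # 2 * 9 * 160 * 160 = 460800
print(wide // base)  # 100, i.e. k**2 for k = 10
```

This is why a shallow-but-wide network such as WRN-50-2 can match or exceed much deeper thin ResNets: the capacity lost by removing depth is recovered (and then some) by the quadratic growth in per-block parameters, and wide convolutions are also friendlier to GPU parallelism than long sequential stacks.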


Evaluation results from the paper


| Task | Dataset | Model | Metric name | Metric value | Global rank |
|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | Wide ResNet | Percentage correct | 96.11 | #17 |
| Image Classification | CIFAR-10 | Wide ResNet | Percentage error | 3.89 | #12 |
| Image Classification | CIFAR-100 | Wide ResNet | Percentage correct | 81.15 | #11 |
| Image Classification | CIFAR-100 | Wide ResNet | Percentage error | 18.85 | #5 |
| Image Classification | ImageNet | WRN-50-2-bottleneck | Top 1 Accuracy | 78.1% | #44 |
| Image Classification | ImageNet | WRN-50-2-bottleneck | Top 5 Accuracy | 93.97% | #36 |
| Image Classification | ImageNet | WRN-50-2-bottleneck | Number of params | 68.9M | #1 |
| Image Classification | SVHN | Wide Residual Networks | Percentage error | 1.54 | #5 |
| Image Classification | SVHN | Wide ResNet | Percentage error | 1.7 | #9 |