Wide Residual Networks

23 May 2016 · Sergey Zagoruyko, Nikos Komodakis

Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, the authors conduct a detailed experimental study of the architecture of ResNet blocks and propose a novel architecture in which they decrease the depth and increase the width of residual networks. They call the resulting structures wide residual networks (WRNs) and show that these are far superior to their commonly used thin and very deep counterparts.
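The central idea is to widen the 3x3 convolutions inside each residual block by a factor k while using far fewer blocks, with dropout applied between the two convolutions as regularization. Below is a minimal PyTorch sketch of such a pre-activation wide block, not the authors' original Torch implementation; the class name `WideBasicBlock`, the `dropout_rate` argument, and the WRN-28-10 numbers in the comments are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WideBasicBlock(nn.Module):
    """Pre-activation residual block (BN-ReLU-conv) whose 3x3 convolutions
    are widened by a factor k relative to the original thin ResNet.
    Dropout sits between the two convolutions, as in the WRN paper."""

    def __init__(self, in_planes: int, planes: int, stride: int = 1,
                 dropout_rate: float = 0.0):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.dropout = nn.Dropout(p=dropout_rate)
        # 1x1 projection shortcut when spatial size or channel count changes
        self.shortcut = None
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, kernel_size=1,
                                      stride=stride, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.dropout(out)
        out = self.conv2(F.relu(self.bn2(out)))
        residual = x if self.shortcut is None else self.shortcut(x)
        return out + residual


# Hypothetical WRN-28-10 on CIFAR: depth 28 gives (28 - 4) / 6 = 4 blocks per
# group, widening factor k = 10, so the three groups use 160, 320 and 640
# channels instead of 16, 32 and 64.
block = WideBasicBlock(in_planes=160, planes=320, stride=2, dropout_rate=0.3)
y = block(torch.randn(1, 160, 16, 16))  # -> shape (1, 320, 8, 8)
```

For ImageNet, the paper instead widens bottleneck blocks; the WRN-50-2-bottleneck entry in the results below refers to a 50-layer network whose inner 3x3 bottleneck convolutions are widened by a factor of 2.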


Results from the Paper


Task                  Dataset    Model                   Metric              Value    Global Rank
Image Classification  CIFAR-10   Wide ResNet             Percentage correct  96.11    #33
Image Classification  CIFAR-10   Wide ResNet             Percentage error    3.89     #22
Image Classification  CIFAR-100  Wide ResNet             Percentage correct  81.15    #27
Image Classification  CIFAR-100  Wide ResNet             Percentage error    18.85    #17
Image Classification  ImageNet   WRN-50-2-bottleneck     Top 1 Accuracy      78.1%    #92
Image Classification  ImageNet   WRN-50-2-bottleneck     Top 5 Accuracy      93.97%   #64
Image Classification  ImageNet   WRN-50-2-bottleneck     Number of params    68.9M    #2
Image Classification  SVHN       Wide Residual Networks  Percentage error    1.54     #8
Image Classification  SVHN       Wide ResNet             Percentage error    1.7      #12