Averaging Weights Leads to Wider Optima and Better Generalization

Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much flatter solutions than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.
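Below is a minimal sketch of how the SWA procedure described above can be applied in practice. It uses PyTorch's `torch.optim.swa_utils` helpers rather than the paper's original implementation, and the toy model, synthetic data, epoch counts, and learning rates are placeholders chosen only for illustration.

```python
# Sketch of Stochastic Weight Averaging (SWA): train with SGD, then keep a
# running average of the weights visited under a constant learning rate.
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy model and synthetic data (placeholders, not the paper's setup).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 32),
                                   torch.randint(0, 10, (256,))),
    batch_size=32, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
criterion = nn.CrossEntropyLoss()

swa_model = AveragedModel(model)               # holds the running average of weights
swa_scheduler = SWALR(optimizer, swa_lr=0.01)  # constant SWA learning rate
swa_start = 75                                 # epoch at which averaging begins (illustrative)

for epoch in range(100):
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        # Running average: w_swa <- (w_swa * n + w) / (n + 1)
        swa_model.update_parameters(model)
        swa_scheduler.step()

# Recompute BatchNorm statistics for the averaged weights before evaluation.
update_bn(train_loader, swa_model)
```

At test time, `swa_model` is evaluated in place of the final SGD iterate; the only overhead over conventional training is storing one extra copy of the weights and the final BatchNorm update pass.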


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | CIFAR-10 | ShakeShake-2x64d + SWA | Percentage correct | 97.12 | #83 |
| Image Classification | CIFAR-10 | WRN-28-10 + SWA | Percentage correct | 96.79 | #92 |
| Image Classification | CIFAR-100 | PyramidNet-272 + SWA | Percentage correct | 84.16 | #78 |
| Image Classification | CIFAR-100 | WRN + SWA | Percentage correct | 82.15 | #108 |
| Image Classification | ImageNet | ResNet-152 + SWA | Top 1 Accuracy | 78.94% | #734 |
| Image Classification | ImageNet | DenseNet-161 + SWA | Top 1 Accuracy | 78.44% | #766 |

Methods