Mixing ADAM and SGD: a Combined Optimization Method

16 Nov 2020  ·  Nicola Landro, Ignazio Gallo, Riccardo La Grassa ·

Optimization methods (optimizers) receive special attention for the efficient training of neural networks in the field of deep learning. Many papers in the literature compare neural models trained with different optimizers. Each demonstrates that, for a particular problem, one optimizer outperforms the others, but as soon as the problem changes this result no longer holds and the search must start from scratch. In our paper we propose combining two very different optimizers that, when used simultaneously, can outperform either single optimizer across very different problems. We propose a new optimizer, called MAS (Mixing ADAM and SGD), that integrates SGD and ADAM simultaneously by weighing the contributions of both through the assignment of constant weights. Rather than trying to improve SGD or ADAM, we exploit both at the same time, taking the best of each. We conducted several experiments on image and text-document classification using various CNNs, and we demonstrate experimentally that the proposed MAS optimizer produces better performance than either SGD or ADAM alone. The source code and all experimental results are available online at https://gitlab.com/nicolalandro/multi_optimizer
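The core idea described above can be sketched in a few lines: compute an SGD-with-momentum direction and an Adam direction from the same gradient, then mix them with constant weights. This is a minimal illustrative sketch, not the authors' implementation (see their repository for that); the weights `w_sgd` and `w_adam` and all hyperparameter values below are illustrative assumptions.

```python
import math

def mas_step(theta, grad, state, lr=0.05, w_sgd=0.5, w_adam=0.5,
             momentum=0.9, beta1=0.9, beta2=0.999, eps=1e-8):
    """One MAS-style update on a scalar parameter: a constant-weight
    mixture of an SGD-with-momentum step and an Adam step (sketch)."""
    state["t"] += 1
    # SGD-with-momentum direction
    state["v"] = momentum * state["v"] + grad
    d_sgd = state["v"]
    # Adam direction (bias-corrected first and second moments)
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["s"] = beta2 * state["s"] + (1 - beta2) * grad * grad
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    s_hat = state["s"] / (1 - beta2 ** state["t"])
    d_adam = m_hat / (math.sqrt(s_hat) + eps)
    # Mix the two directions with constant weights
    return theta - lr * (w_sgd * d_sgd + w_adam * d_adam)

# Usage: minimize f(x) = x^2 starting from x = 5.0
state = {"t": 0, "v": 0.0, "m": 0.0, "s": 0.0}
x = 5.0
for _ in range(500):
    x = mas_step(x, 2 * x, state)
```

In a real training loop the same mixture would be applied per parameter tensor; the point of the sketch is only that both optimizers share one gradient and their update directions are summed with fixed weights.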


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Stochastic Optimization | AG News | Bert | Accuracy (mean) | 93.86 | 1 |
| Stochastic Optimization | AG News | Bert | Accuracy (max) | 93.99 | 1 |
| Stochastic Optimization | CIFAR-10 | Resnet18 | Accuracy (mean) | 85.89 | 1 |
| Stochastic Optimization | CIFAR-10 | Resnet18 | Accuracy (max) | 86.85 | 1 |
| Stochastic Optimization | CIFAR-10 | Resnet34 | Accuracy (mean) | 85.75 | 2 |
| Stochastic Optimization | CIFAR-10 | Resnet34 | Accuracy (max) | 86.14 | 2 |
| Stochastic Optimization | CIFAR-100 | Resnet34 | Accuracy (mean) | 53.06 | 2 |
| Stochastic Optimization | CIFAR-100 | Resnet34 | Accuracy (max) | 54.5 | 2 |
| Stochastic Optimization | CIFAR-100 | Resnet18 | Accuracy (mean) | 58.01 | 1 |
| Stochastic Optimization | CIFAR-100 | Resnet18 | Accuracy (max) | 58.48 | 1 |