MADA: Meta-Adaptive Optimizers through hyper-gradient Descent

Since Adam was introduced, several novel adaptive optimizers for deep learning have been proposed. These optimizers typically excel in some tasks but may not outperform Adam uniformly across all tasks. In this work, we introduce Meta-Adaptive Optimizers (MADA), a unified optimizer framework that can generalize several known optimizers and dynamically learn the most suitable one during training. The key idea in MADA is to parameterize the space of optimizers and search through it using hyper-gradient descent. We compare MADA to other popular optimizers empirically on vision and language tasks, training CNN, ResNet, and GPT-2 models. Results suggest that MADA is robust against sub-optimally tuned hyper-parameters and consistently outperforms Adam and other popular optimizers. On GPT-2 training, we find that MADA achieves roughly $3\times$ the validation performance gain over Adam that other popular optimizers achieve. We also propose AVGrad, a modification of AMSGrad that replaces the maximum operator with averaging, which is better suited to the hyper-gradient optimization framework. Finally, we provide a convergence analysis showing that interpolation of optimizers can improve their error bounds (up to constants), hinting at an advantage for meta-optimizers.
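To make the hyper-gradient idea concrete, here is a minimal illustrative sketch, not the paper's actual MADA parameterization: a single coefficient `c` interpolates between a plain gradient step and an Adam-style step, and `c` itself is updated by hyper-gradient descent, i.e. by differentiating the current loss through the previous parameter update. The function and state names (`mada_like_step`, `init_state`), the learning rates, and the two-way SGD/Adam interpolation are assumptions made for illustration only.

```python
import numpy as np

def init_state(w):
    # Optimizer state: Adam moments, interpolation coefficient c, step count,
    # and the previous update directions needed for the hyper-gradient.
    return dict(m=np.zeros_like(w), v=np.zeros_like(w), c=0.5, t=0,
                u_sgd_prev=None, u_adam_prev=None)

def mada_like_step(w, grad, state, lr=1e-3, hyper_lr=1e-2,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One update of the parameters `w` and the optimizer coefficient `c`."""
    m, v, c, t = state["m"], state["v"], state["c"], state["t"] + 1

    # Hyper-gradient step on c. Since
    #   w_t = w_{t-1} - lr * ((1 - c) * u_sgd_{t-1} + c * u_adam_{t-1}),
    # the chain rule gives dL/dc ~ grad_t . (-lr) * (u_adam_{t-1} - u_sgd_{t-1}).
    if state["u_adam_prev"] is not None:
        hg = np.dot(grad, -lr * (state["u_adam_prev"] - state["u_sgd_prev"]))
        c = float(np.clip(c - hyper_lr * hg, 0.0, 1.0))

    # Bias-corrected Adam moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    u_sgd = grad                              # plain gradient direction
    u_adam = m_hat / (np.sqrt(v_hat) + eps)   # Adam direction

    # Interpolated update: c = 0 recovers SGD, c = 1 recovers Adam.
    w = w - lr * ((1 - c) * u_sgd + c * u_adam)

    state.update(m=m, v=v, c=c, t=t, u_sgd_prev=u_sgd, u_adam_prev=u_adam)
    return w, state

# Toy usage on a quadratic loss L(w) = 0.5 * ||w||^2, so grad(w) = w.
w = np.ones(4)
state = init_state(w)
for _ in range(1000):
    w, state = mada_like_step(w, w, state, lr=1e-2)
print(w, state["c"])
```

The same mechanism extends to a richer parameterization (e.g. coefficients over several known optimizers, or AVGrad-style averaging of the second-moment estimates in place of AMSGrad's maximum), with one hyper-gradient per learned coefficient.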
