no code implementations • 17 Feb 2022 • Xuyang Wu, Sindri Magnusson, Hamid Reza Feyzmahdavian, Mikael Johansson
In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system.
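The idea can be illustrated with a toy sketch (not the paper's actual algorithm): gradient descent on the quadratic f(x) = 0.5·x², where each gradient is computed at a stale iterate and the step size shrinks with the observed delay.

```python
# Illustrative only: a delay-adaptive step size lr / (1 + delay),
# applied to stale gradients of f(x) = 0.5 * x**2 (so grad f(x) = x).

def delay_adaptive_gd(x0, delays, base_lr=0.5):
    """Apply delayed gradients with a delay-dependent learning rate."""
    history = [x0]                         # iterates produced so far
    x = x0
    for t, d in enumerate(delays):
        stale_x = history[max(0, t - d)]   # gradient computed at a stale iterate
        grad = stale_x                     # gradient of 0.5*x^2 is x
        lr = base_lr / (1 + d)             # smaller steps for older gradients
        x = x - lr * grad
        history.append(x)
    return x

x_final = delay_adaptive_gd(10.0, delays=[0, 2, 1, 3, 0, 1, 2, 0])
```

Despite the time-varying delays, the iterate contracts toward the minimizer at 0 because stale gradients are taken with proportionally smaller steps.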
no code implementations • 9 Sep 2021 • Hamid Reza Feyzmahdavian, Mikael Johansson
We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms.
no code implementations • 18 Jun 2018 • Sarit Khirirat, Hamid Reza Feyzmahdavian, Mikael Johansson
Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning.
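One widely used compression operator of the kind studied in this line of work is top-k sparsification; the sketch below is a generic example, not the specific scheme analyzed in the paper.

```python
# Top-k gradient sparsification: transmit only the k largest-magnitude
# coordinates of a gradient vector and zero out the rest.

def top_k(grad, k):
    """Keep the k largest-magnitude entries of a gradient, zero the rest."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(idx)
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

compressed = top_k([0.1, -4.0, 2.5, 0.0, -0.3], k=2)
# → [0.0, -4.0, 2.5, 0.0, 0.0]
```

Only the surviving entries (and their indices) need to be communicated, which is where the bandwidth savings in distributed training come from.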
no code implementations • 18 Oct 2016 • Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems.
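A minimal serial sketch of the incremental aggregated gradient idea, on f(x) = Σᵢ 0.5·(x − aᵢ)²: a table stores the most recent gradient of each component, and each iteration refreshes one entry while stepping with the full aggregate. The parameter-server and asynchrony details of the paper are omitted here.

```python
# Incremental aggregated gradient (IAG) sketch: refresh one component
# gradient per iteration, keep a running aggregate updated in O(1).

def iag(a, x0, lr, iters):
    n = len(a)
    table = [x0 - ai for ai in a]   # stored per-component gradients
    agg = sum(table)                # running aggregate of the table
    x = x0
    for t in range(iters):
        i = t % n                   # cyclically refresh one component
        new_g = x - a[i]            # fresh gradient of component i
        agg += new_g - table[i]     # update the aggregate in O(1)
        table[i] = new_g
        x = x - lr * agg / n        # step with the aggregated gradient
    return x

x_star = iag(a=[1.0, 2.0, 3.0], x0=0.0, lr=0.5, iters=100)
# converges toward the minimizer mean(a) = 2.0
```

Even though most entries of the table are stale at any given step, the aggregate tracks the true gradient closely enough for convergence, which is the key property exploited by such methods.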
no code implementations • 18 May 2015 • Hamid Reza Feyzmahdavian, Arda Aytekin, Mikael Johansson
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning.
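As a generic illustration of the paradigm (not tied to any specific algorithm from the paper), mini-batch SGD estimates the gradient from a random subset of samples at each step:

```python
import random

# Mini-batch SGD on scalar least squares
# f(w) = (1/n) * sum_i 0.5 * (w*x_i - y_i)**2.

def minibatch_sgd(xs, ys, w0=0.0, lr=0.1, batch_size=4, steps=500, seed=0):
    rng = random.Random(seed)
    n, w = len(xs), w0
    for _ in range(steps):
        batch = rng.sample(range(n), batch_size)   # sample a mini-batch
        # average gradient over the mini-batch
        grad = sum((w * xs[i] - ys[i]) * xs[i] for i in batch) / batch_size
        w -= lr * grad
    return w

# data generated from y = 3*x, so the fitted slope should be near 3
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [3.0 * x for x in xs]
w_hat = minibatch_sgd(xs, ys)
```

The batch size trades off gradient noise against per-step cost, which is exactly the knob that makes mini-batching attractive for large-scale learning.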