Search Results for author: Hamid Reza Feyzmahdavian

Found 5 papers, 0 papers with code

Delay-adaptive step-sizes for asynchronous learning

no code implementations · 17 Feb 2022 · Xuyang Wu, Sindri Magnusson, Hamid Reza Feyzmahdavian, Mikael Johansson

In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system.
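The idea of a delay-adaptive step size can be illustrated with a minimal sketch. The specific rule below, shrinking the step size as `eta0 / (1 + tau)` with the observed delay `tau`, is an illustrative assumption, not the paper's exact scheme; the simulated delays and the quadratic objective are likewise hypothetical.

```python
import random

def delay_adaptive_sgd(grad, x0, eta0=0.05, steps=500, max_delay=5, seed=0):
    """Gradient descent where each update applies a stale gradient and scales
    the step size down by the actual delay of that gradient (sketch only)."""
    rng = random.Random(seed)
    history = [x0]                         # past iterates, so we can look up stale points
    x = x0
    for k in range(steps):
        tau = rng.randint(0, min(k, max_delay))  # actual delay of this update
        stale_x = history[-1 - tau]              # point the gradient was computed at
        eta = eta0 / (1 + tau)                   # delay-adaptive step size (assumed rule)
        x = x - eta * grad(stale_x)
        history.append(x)
    return x

# Minimize f(x) = x^2 (gradient 2x) under simulated delays.
x_star = delay_adaptive_sgd(lambda x: 2 * x, x0=10.0)
```

The point of the sketch is that the step size reacts to the delay each gradient actually suffered, rather than being tuned for a pessimistic worst-case delay bound.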

Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees

no code implementations · 9 Sep 2021 · Hamid Reza Feyzmahdavian, Mikael Johansson

We introduce novel convergence results for asynchronous iterations that appear in the analysis of parallel and distributed optimization algorithms.

Distributed Optimization

Distributed learning with compressed gradients

no code implementations · 18 Jun 2018 · Sarit Khirirat, Hamid Reza Feyzmahdavian, Mikael Johansson

Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning.
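Gradient compression can be made concrete with a small sketch of top-k sparsification, a common compression operator in this literature; the paper studies compressed gradients in general, and this particular operator is shown only as one representative example.

```python
def top_k_compress(grad, k):
    """Keep only the k largest-magnitude coordinates of a gradient vector
    and zero the rest, so only k values need to be communicated."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    keep = set(idx)
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

g = [0.1, -3.0, 0.5, 2.0, -0.2]
compressed = top_k_compress(g, 2)  # [0.0, -3.0, 0.0, 2.0, 0.0]
```

A worker would transmit only the two surviving (index, value) pairs instead of the full vector, which is where the communication savings come from.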

BIG-bench Machine Learning · Distributed Optimization

Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server

no code implementations · 18 Oct 2016 · Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson

This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems.
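The incremental aggregated gradient idea can be sketched in a few lines: keep the most recently computed gradient of each component function in a table and step using their sum, refreshing one entry per iteration. This serial, unregularized sketch only illustrates the aggregation mechanism; the paper's algorithm runs asynchronously in a parameter server and handles regularized problems.

```python
def iag(grads, x0, eta, steps):
    """Incremental aggregated gradient: store one (possibly stale) gradient
    per component, refresh one component per iteration, and descend along
    the running aggregate (serial sketch of the idea)."""
    n = len(grads)
    x = x0
    table = [g(x) for g in grads]   # stored per-component gradients
    agg = sum(table)                # running sum of the table
    for k in range(steps):
        i = k % n                   # component refreshed this round
        new_g = grads[i](x)
        agg += new_g - table[i]     # update aggregate in O(1)
        table[i] = new_g
        x = x - eta * agg
    return x

# Minimize f(x) = (x - 1)^2 + (x + 3)^2, whose minimizer is x* = -1.
fs = [lambda x: 2 * (x - 1), lambda x: 2 * (x + 3)]
x_star = iag(fs, x0=0.0, eta=0.05, steps=400)
```

Because only one component gradient is recomputed per step while the full aggregate is used in the update, the per-iteration cost stays low even when the number of components is large.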
