no code implementations • 24 Jun 2020 • Mahmoud Assran, Arda Aytekin, Hamid Feyzmahdavian, Mikael Johansson, Michael Rabbat
Motivated by large-scale optimization problems arising in machine learning, the past decade has seen several advances in the study of asynchronous parallel and distributed optimization methods.
no code implementations • 14 Jun 2020 • Burak Demirel, Arda Aytekin
We analyze the closed-loop control performance of a networked control system that consists of $N$ independent linear feedback control loops, sharing a communication network with $M$ channels ($M<N$).
no code implementations • 13 Mar 2020 • Sarit Khirirat, Sindri Magnússon, Arda Aytekin, Mikael Johansson
With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes.
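One common way to cut communication between nodes is gradient sparsification. As an illustration only (the specific compressor below is a generic top-$k$ operator, not necessarily the one analyzed in this paper), a minimal sketch:

```python
import numpy as np

def top_k(g, k):
    """Top-k sparsifier: keep the k largest-magnitude entries of a
    gradient vector and zero the rest, so only k (index, value) pairs
    need to be communicated. Illustrative example, not the paper's
    exact compression scheme."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]  # indices of k largest |g_i|
    out[idx] = g[idx]
    return out
```

A worker would send only the surviving entries, reducing per-iteration traffic from the full dimension to $k$ coordinates.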
no code implementations • 10 Jan 2019 • Arda Aytekin, Mikael Johansson
The event-driven and elastic nature of serverless runtimes makes them an efficient and cost-effective alternative for scaling up computations.
no code implementations • 8 Oct 2018 • Arda Aytekin, Martin Biel, Mikael Johansson
We present POLO, a C++ library for large-scale parallel optimization research that emphasizes ease of use, flexibility, and efficiency in algorithm design.
no code implementations • 18 Oct 2016 • Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson
This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems.
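The core idea of an incremental aggregated gradient method is to keep a table of the most recent per-sample gradients and refresh only one entry per step, so the aggregate stays cheap to update. A minimal serial sketch for an $\ell_2$-regularized least-squares problem (the regularizer, step size, and names are illustrative assumptions; the paper's method additionally runs asynchronously in a parameter server):

```python
import numpy as np

def iag_ridge(A, b, lam=0.1, step=0.05, epochs=300):
    """Serial incremental aggregated gradient sketch for
    (1/n) sum_i (1/2)(a_i @ x - b_i)^2 + (lam/2)||x||^2.
    Illustrative only; the paper's algorithm is asynchronous."""
    n, d = A.shape
    x = np.zeros(d)
    # memory of the latest gradient computed for each sample
    g = np.array([A[i] * (A[i] @ x - b[i]) for i in range(n)])
    agg = g.sum(axis=0)
    for _ in range(epochs):
        for i in range(n):
            new_gi = A[i] * (A[i] @ x - b[i])
            agg += new_gi - g[i]   # refresh only sample i's contribution
            g[i] = new_gi
            x -= step * (agg / n + lam * x)
    return x
```

Each step costs one sample-gradient evaluation plus an O(d) aggregate update, rather than a full pass over the data.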
no code implementations • 18 May 2015 • Hamid Reza Feyzmahdavian, Arda Aytekin, Mikael Johansson
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning.
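For reference, the basic mini-batch stochastic gradient step the paradigm builds on, sketched for least squares (batch size, step size, and problem are illustrative assumptions, not the paper's setting):

```python
import numpy as np

def minibatch_sgd(A, b, batch=8, step=0.15, iters=3000, seed=0):
    """Mini-batch SGD for (1/2)||Ax - b||^2: each iteration samples a
    small batch of rows and takes a gradient step on that batch only.
    Illustrative sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # batch gradient
        x -= step * grad
    return x
```

Averaging the gradient over a batch reduces its variance relative to single-sample SGD, which is what makes larger, parallelizable steps possible.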