no code implementations • 26 Mar 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon
To address this challenge, federated reinforcement learning (FedRL) has emerged, wherein agents collaboratively learn a single policy by aggregating local estimates.
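As a rough illustration of this aggregation pattern (the toy environment, variable names, and update rule below are hypothetical, not the paper's algorithm), a minimal sketch in Python:

```python
# Minimal sketch of the FedRL aggregation idea: K agents refine local value
# estimates and a server averages them into a single shared estimate.
import numpy as np

rng = np.random.default_rng(0)
K, n_states, rounds, local_steps, alpha, gamma = 4, 5, 20, 10, 0.1, 0.9

v_global = np.zeros(n_states)          # shared value estimate
for _ in range(rounds):
    local = []
    for _ in range(K):                 # each agent starts from the global estimate
        v = v_global.copy()
        for _ in range(local_steps):   # local TD(0)-style updates on random transitions
            s = rng.integers(n_states)
            s_next = rng.integers(n_states)
            r = float(s == n_states - 1)   # toy reward
            v[s] += alpha * (r + gamma * v[s_next] - v[s])
        local.append(v)
    v_global = np.mean(local, axis=0)  # server aggregates local estimates
print(v_global)
```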
no code implementations • 29 Feb 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon
In this work, we establish non-asymptotic convergence bounds for distributed momentum methods under biased gradient estimation, on both general non-convex and $\mu$-PL non-convex problems.
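For context, the $\mu$-PL (Polyak-Łojasiewicz) condition and a generic distributed momentum update can be written as follows (the notation here is generic, not necessarily the paper's):

```latex
% mu-PL condition: the squared gradient norm dominates the suboptimality gap
\|\nabla f(x)\|^2 \;\ge\; 2\mu \big(f(x) - f^\star\big) \quad \forall x,
% generic momentum update, averaging (possibly biased) worker gradients g_i
m_{t+1} = \beta m_t + \frac{1}{n}\sum_{i=1}^{n} g_i(x_t), \qquad
x_{t+1} = x_t - \eta\, m_{t+1}.
```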
1 code implementation • 29 Feb 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon
In this paper, we present a novel and unified framework for analyzing the convergence of federated learning algorithms without the need for data similarity conditions.
no code implementations • 26 Jan 2024 • Zahra Kharazian, Tony Lindgren, Sindri Magnússon, Olof Steinert, Oskar Andersson Reyna
This dataset is released to give a broad range of researchers access to real-world data from an internationally well-known company and to introduce a standard benchmark for the predictive maintenance field, fostering reproducible research.
no code implementations • 17 Mar 2023 • Ali Beikmohammadi, Sindri Magnússon
In this paper, we delve into the potential of the Non-Axiomatic Reasoning System (NARS) as a substitute for RL in solving sequence-based tasks.
1 code implementation • 28 Feb 2023 • Ali Beikmohammadi, Sindri Magnússon
This paper introduces a novel human-inspired framework to enhance the sample efficiency of RL algorithms.
no code implementations • NeurIPS 2021 • Xiaoyu Wang, Sindri Magnússon, Mikael Johansson
The convergence of stochastic gradient descent is highly dependent on the step-size, especially on non-convex problems such as neural network training.
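A toy illustration of this sensitivity (not from the paper; the objective and step sizes are made up): the same noisy gradient oracle converges for a small step size and diverges for a large one on a simple non-convex function.

```python
# Step-size sensitivity of SGD on a simple non-convex objective.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x**4 - 3 * x**2            # simple non-convex objective
grad = lambda x: 4 * x**3 - 6 * x        # its gradient

for eta in (0.01, 0.2):                  # small vs large step size
    x = 2.0
    for _ in range(100):
        g = grad(x) + rng.normal(0, 0.1) # stochastic gradient
        x -= eta * g
        if not np.isfinite(x) or abs(x) > 1e6:   # diverged
            break
    print(f"step size {eta}: final x = {x:.3g}, f(x) = {f(x):.3g}")
```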
no code implementations • 23 Mar 2020 • Rong Du, Sindri Magnússon, Carlo Fischione
To ensure communication efficiency, this article proposes to model the measurement compression at IoT nodes and the inference at the base station or cloud as a deep neural network (DNN).
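A hedged sketch of what such a split DNN can look like (the architecture and layer sizes below are assumptions for illustration, not the article's exact design): an encoder compresses measurements at the IoT node, and an inference head runs at the base station or cloud.

```python
# Split-DNN sketch: compression at the node, inference at the base station.
import torch
import torch.nn as nn

encoder = nn.Sequential(               # runs on the IoT node
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 4),                  # the 4-dim code is what gets transmitted
)
inference_head = nn.Sequential(        # runs at the base station / cloud
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 10),                 # e.g. a 10-class inference task
)

measurement = torch.randn(1, 64)       # one sensor reading
code = encoder(measurement)            # compressed message sent uplink
prediction = inference_head(code)
print(code.shape, prediction.shape)    # torch.Size([1, 4]) torch.Size([1, 10])
```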
no code implementations • 13 Mar 2020 • Sarit Khirirat, Sindri Magnússon, Arda Aytekin, Mikael Johansson
With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes.
no code implementations • 23 Sep 2019 • Sarit Khirirat, Sindri Magnússon, Mikael Johansson
Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy.
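One common scheme of this kind is top-k sparsification, sketched below as a generic example (the paper treats a broader class of compressors); the relative error printed at the end is the "accuracy price" of compression.

```python
# Top-k gradient sparsification: transmit only the k largest-magnitude entries.
import numpy as np

def top_k(g: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

rng = np.random.default_rng(2)
g = rng.normal(size=100)
g_hat = top_k(g, k=10)                 # 10x fewer nonzeros to transmit
err = np.linalg.norm(g - g_hat) / np.linalg.norm(g)
print(f"relative compression error: {err:.2f}")
```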
no code implementations • 26 Feb 2019 • Sindri Magnússon, Hossein Shokri-Ghadikolaei, Na Li
The communication time of these algorithms is governed by a complex interplay between a) the algorithm's convergence properties, b) the compression scheme, and c) the transmission rate offered by the digital channel.
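Schematically, the trade-off can be written as follows (the notation is mine, not the paper's): compressing more shrinks the bits per message but typically increases the number of iterations needed to reach a target accuracy.

```latex
% N(\varepsilon): iterations to reach accuracy \varepsilon (convergence properties),
% b: bits per transmitted message (compression scheme),
% R: transmission rate of the digital channel (bits/second).
T_{\mathrm{comm}}(\varepsilon) \;\approx\; N(\varepsilon)\cdot \frac{b}{R},
\qquad \text{with } N(\varepsilon) \text{ typically increasing as } b \text{ decreases.}
```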