Search Results for author: Samuel Horváth

Found 25 papers, 5 papers with code

Enhancing Policy Gradient with the Polyak Step-Size Adaption

no code implementations · 11 Apr 2024 · Yunxiang Li, Rui Yuan, Chen Fan, Mark Schmidt, Samuel Horváth, Robert M. Gower, Martin Takáč

Policy gradient is a foundational and widely used algorithm in reinforcement learning (RL).

Reinforcement Learning (RL)
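
The Polyak rule in the title sets the step size from the current suboptimality gap, $\gamma_t = (f(x_t) - f^*) / \|\nabla f(x_t)\|^2$. Below is a minimal sketch of that rule on a generic gradient step, not the paper's RL algorithm; the known lower bound `f_star` is an assumed input:

```python
import numpy as np

def polyak_step(x, grad, f_x, f_star, gamma_max=1.0):
    """One gradient step with a capped Polyak step size:
    gamma = (f(x) - f*) / ||grad||^2, clipped at gamma_max for stability."""
    gamma = (f_x - f_star) / (np.dot(grad, grad) + 1e-12)
    return x - min(gamma, gamma_max) * grad

# Toy usage on f(x) = ||x||^2, for which f* = 0:
x = np.array([3.0, -4.0])
for _ in range(20):
    x = polyak_step(x, 2 * x, float(np.dot(x, x)), f_star=0.0)
print(x)  # approaches the minimizer at the origin
```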

Generalized Policy Learning for Smart Grids: FL TRPO Approach

no code implementations · 27 Mar 2024 · Yunxiang Li, Nicolas Mauricio Cuadrado, Samuel Horváth, Martin Takáč

The smart grid domain requires bolstering the capabilities of existing energy management systems. Federated Learning (FL) aligns with this goal: it can train models on heterogeneous datasets while maintaining data privacy, which makes it suitable for smart grid applications, where disparate data distributions and interdependencies among features often render linear models inadequate.

energy management · Federated Learning +1

Flashback: Understanding and Mitigating Forgetting in Federated Learning

no code implementations · 8 Feb 2024 · Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth

In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.

Federated Learning

Federated Learning Can Find Friends That Are Beneficial

no code implementations · 7 Feb 2024 · Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov

In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.

Federated Learning

Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks

no code implementations · 18 Dec 2023 · Nikita Kotelevskii, Samuel Horváth, Karthik Nandakumar, Martin Takáč, Maxim Panov

This paper presents a new approach to federated learning that, for each input point, selects whichever of the global and personalized models is expected to perform better.

Personalized Federated Learning · Uncertainty Quantification
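
The per-input choice between the global and a personalized model can be pictured as routing each example to whichever model is less uncertain about it. The sketch below uses plain predictive entropy as the uncertainty proxy; the paper's actual criterion is Dirichlet-based:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Predictive entropy of a probability vector; a simple uncertainty proxy."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def select_prediction(p_global, p_personal):
    """Route the input to whichever model is less uncertain about it."""
    return p_global if entropy(p_global) <= entropy(p_personal) else p_personal

# Toy usage: the personalized model is more confident on this input.
print(select_prediction(np.array([0.5, 0.5]), np.array([0.9, 0.1])))
```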

Clip21: Error Feedback for Gradient Clipping

no code implementations · 30 May 2023 · Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik

Motivated by the increasing popularity and importance of large-scale training under differential privacy (DP) constraints, we study distributed gradient methods with gradient clipping, i.e., clipping applied to the gradients computed from local information at the nodes.
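
A minimal sketch of the two ingredients named here, under their textbook definitions rather than the Clip21 mechanism itself: norm clipping of a local gradient, and an error-feedback buffer that carries the clipped-off residual into the next round:

```python
import numpy as np

def clip(v, tau):
    """Norm clipping: rescale v so that ||v|| <= tau."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else (tau / norm) * v

def clipped_step_with_ef(grad, error, tau):
    """Classic error feedback around clipping (not Clip21): clip the
    error-corrected gradient and buffer whatever was cut off."""
    corrected = grad + error
    message = clip(corrected, tau)
    return message, corrected - message

# Toy usage: the clipped remainder is kept for the next round.
g, e = np.array([3.0, 4.0]), np.zeros(2)
msg, e = clipped_step_with_ef(g, e, tau=1.0)
print(msg, e)  # ||msg|| = 1; e holds the residual [2.4, 3.2]
```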

Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees

no code implementations · 29 May 2023 · Jihao Xin, Marco Canini, Peter Richtárik, Samuel Horváth

To obtain theoretical guarantees, we generalize the notion of standard unbiased compression operators to incorporate Global-QSGD.

Quantization
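
A standard unbiased compression operator of the kind generalized here is QSGD-style stochastic rounding: coordinates are randomly rounded to a small grid of levels so that the compressed vector matches the original in expectation. A minimal sketch, with the level count $s$ chosen for illustration:

```python
import numpy as np

def qsgd_quantize(v, s=4, rng=None):
    """Unbiased stochastic quantization: E[Q(v)] = v.
    Each |v_i| / ||v|| is randomly rounded to one of s + 1 levels in [0, 1]."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * s                    # values in [0, s]
    lower = np.floor(scaled)
    up = rng.random(v.shape) < (scaled - lower)      # round up w.p. fractional part
    return norm * np.sign(v) * (lower + up) / s

v = np.array([0.3, -1.2, 0.8])
samples = [qsgd_quantize(v, rng=np.random.default_rng(i)) for i in range(5000)]
print(np.mean(samples, axis=0))  # close to v, illustrating unbiasedness
```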

Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity

no code implementations · 29 May 2023 · Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth

We present a partially personalized formulation of Federated Learning (FL) that strikes a balance between the flexibility of personalization and the cooperativeness of global training.

Personalized Federated Learning
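
One way to picture a partially personalized formulation is a parameter partition: a shared block averaged across clients and a personal block that never leaves its client. The split below is purely illustrative, not the paper's formulation:

```python
import numpy as np

def aggregate_shared(client_params):
    """Average only the shared block; each personal block stays local."""
    shared = np.mean([p["shared"] for p in client_params], axis=0)
    return [{"shared": shared, "personal": p["personal"]} for p in client_params]

# Toy usage with three clients:
clients = [{"shared": np.ones(3) * i, "personal": np.full(2, i)} for i in range(3)]
print(aggregate_shared(clients)[0])  # shared averaged to [1, 1, 1]; personal kept
```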

Federated Learning with Regularized Client Participation

no code implementations · 7 Feb 2023 · Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik

Under this scheme, each client joins the learning process every $R$ communication rounds, which we refer to as a meta epoch.

Federated Learning
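
The meta-epoch structure can be sketched as a deterministic schedule: reshuffle the clients once every $R$ rounds and let each cohort participate exactly once before any client repeats. The cohort size and per-epoch shuffling below are assumptions made for illustration:

```python
import random

def meta_epoch_schedule(num_clients, R, num_rounds, seed=0):
    """Yield the cohort for each round; every client joins once per meta epoch."""
    rng = random.Random(seed)
    clients = list(range(num_clients))
    cohort_size = num_clients // R        # assumes R divides num_clients
    for t in range(num_rounds):
        if t % R == 0:
            rng.shuffle(clients)          # fresh permutation each meta epoch
        start = (t % R) * cohort_size
        yield clients[start:start + cohort_size]

for t, cohort in enumerate(meta_epoch_schedule(num_clients=6, R=3, num_rounds=6)):
    print(t, cohort)  # each client appears exactly once per block of 3 rounds
```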

Partial Disentanglement with Partially-Federated GANs (PaDPaF)

1 code implementation · 7 Dec 2022 · Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč

Federated learning has become a popular machine learning paradigm with many potential real-life applications, including recommendation systems, the Internet of Things (IoT), healthcare, and self-driving cars.

Disentanglement · Federated Learning +2

Adaptive Learning Rates for Faster Stochastic Gradient Methods

no code implementations · 10 Aug 2022 · Samuel Horváth, Konstantin Mishchenko, Peter Richtárik

In this work, we propose new adaptive step size strategies that improve several stochastic gradient methods.

Stochastic Optimization
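
For context on what an adaptive step size is, the classic AdaGrad-style rule below shrinks per-coordinate steps as squared gradients accumulate. This is background only, not one of the strategies proposed in the paper:

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """Per-coordinate adaptive update: steps shrink with accumulated squared gradients."""
    accum = accum + grad**2
    return x - lr * grad / (np.sqrt(accum) + eps), accum

# Toy usage on f(x) = ||x||^2:
x, accum = np.array([3.0, -4.0]), np.zeros(2)
for _ in range(100):
    x, accum = adagrad_step(x, 2 * x, accum)
print(x)  # drifts toward the minimizer at the origin
```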

Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity

no code implementations · 1 Jul 2022 · Samuel Horváth

Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile phone devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server.

BIG-bench Machine Learning · Federated Learning +1

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top

1 code implementation · 1 Jun 2022 · Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel

However, many fruitful directions, such as using variance reduction to achieve robustness and communication compression to reduce communication costs, remain underexplored in the field.

Federated Learning

FedShuffle: Recipes for Better Use of Local Work in Federated Learning

no code implementations · 27 Apr 2022 · Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat

The practice of applying several local updates before aggregation across clients has been empirically shown to be a successful approach to overcoming the communication bottleneck in Federated Learning (FL).

Federated Learning
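
The "several local updates before aggregation" recipe is the FedAvg pattern. A minimal sketch of that baseline follows, with toy quadratic clients chosen for illustration; this is the starting point FedShuffle refines, not FedShuffle itself:

```python
import numpy as np

def local_sgd(x, grad_fn, steps, lr):
    """Run several local gradient steps starting from the server model."""
    for _ in range(steps):
        x = x - lr * grad_fn(x)
    return x

def fedavg_round(x_server, client_grad_fns, local_steps=5, lr=0.1):
    """Each client trains locally; the server averages the resulting models."""
    updated = [local_sgd(x_server.copy(), g, local_steps, lr) for g in client_grad_fns]
    return np.mean(updated, axis=0)

# Toy usage: gradients of 0.5 * (x - 1)^2 and 0.5 * (x + 1)^2.
clients = [lambda x: x - 1.0, lambda x: x + 1.0]
x = np.array([5.0])
for _ in range(30):
    x = fedavg_round(x, clients)
print(x)  # near 0, the average of the two client optima
```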

FL_PyTorch: optimization research simulator for federated learning

2 code implementations · 7 Feb 2022 · Konstantin Burlachenko, Samuel Horváth, Peter Richtárik

Our system supports abstractions that provide researchers with a sufficient level of flexibility to experiment with existing and novel approaches to advance the state-of-the-art.

Federated Learning

FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

no code implementations · 22 Nov 2021 · Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik

A persistent problem in federated learning is that it is not clear what the optimization objective should be: the standard average risk minimization of supervised learning is inadequate in handling several major constraints specific to federated learning, such as communication adaptivity and personalization control.

Distributed Optimization · Federated Learning

Hyperparameter Transfer Learning with Adaptive Complexity

no code implementations · 25 Feb 2021 · Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau

Bayesian optimization (BO) is a sample-efficient approach to automatically tuning the hyperparameters of machine learning models.

Bayesian Optimization · Decision Making +1

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

no code implementations · NeurIPS 2021 · Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik

Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.

Personalized Federated Learning

A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning

1 code implementation · ICLR 2021 · Samuel Horváth, Peter Richtárik

EF remains the only known technique that can deal with the error induced by contractive compressors which are not unbiased, such as Top-$K$.

Federated Learning · Stochastic Optimization
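
For context, Top-$K$ keeps only the $K$ largest-magnitude coordinates, which makes it contractive but biased, and classic error feedback (the baseline this paper improves on) buffers whatever the compressor discarded. A minimal sketch:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude coordinates; zero the rest (biased)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_compress(grad, error, k):
    """Classic error feedback: compress the error-corrected gradient and
    carry the uncompressed remainder into the next round."""
    corrected = grad + error
    message = top_k(corrected, k)
    return message, corrected - message

g, e = np.array([0.1, -2.0, 0.5, 3.0]), np.zeros(4)
msg, e = ef_compress(g, e, k=2)
print(msg, e)  # msg keeps -2.0 and 3.0; e stores the rest for later rounds
```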

On Biased Compression for Distributed Learning

no code implementations · 27 Feb 2020 · Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan

In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning.

Learning to Optimize via Dual space Preconditioning

no code implementations · 25 Sep 2019 · Sélim Chraibi, Adil Salim, Samuel Horváth, Filip Hanzely, Peter Richtárik

Preconditioning a minimization algorithm improves its convergence and, in some extreme cases, can reach the minimizer in a single iteration.
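
The one-iteration extreme is easy to see on a quadratic: preconditioning the gradient step with the Hessian turns it into Newton's method. The sketch below shows that standard primal preconditioning, not the paper's dual-space scheme:

```python
import numpy as np

def preconditioned_step(x, grad, P):
    """Gradient step rescaled by a preconditioner: x - P^{-1} grad."""
    return x - np.linalg.solve(P, grad)

# f(x) = 0.5 * x^T A x with an ill-conditioned A; choosing P = A reaches the
# minimizer (the origin) in a single step, the extreme case mentioned above.
A = np.diag([1.0, 100.0])
x = np.array([3.0, -4.0])
print(preconditioned_step(x, A @ x, P=A))  # [0. 0.]
```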
