Search Results for author: Filip Hanzely

Found 19 papers, 3 papers with code

Personalized Federated Learning with Multiple Known Clusters

1 code implementation • 28 Apr 2022 • Boxiang Lyu, Filip Hanzely, Mladen Kolar

We consider the problem of personalized federated learning when there are known cluster structures within users.

Personalized Federated Learning

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization

no code implementations • NeurIPS 2021 • Mher Safaryan, Filip Hanzely, Peter Richtárik

In order to further alleviate the communication burden inherent in distributed optimization, we propose a novel communication sparsification strategy that can take full advantage of the smoothness matrices associated with local losses.

Distributed Optimization
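
For context, the matrix smoothness assumption underlying this line of work replaces the scalar smoothness constant of each local loss with a positive semidefinite matrix; a standard way to state it (generic notation, not necessarily the paper's) is

    f_i(x + h) \le f_i(x) + \langle \nabla f_i(x), h \rangle + \tfrac{1}{2} h^\top \mathbf{L}_i h \quad \text{for all } x, h,

which reduces to ordinary $L_i$-smoothness when $\mathbf{L}_i = L_i \mathbf{I}$; a sparsification strategy can then be tailored to the directions singled out by $\mathbf{L}_i$ rather than to a single worst-case constant.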

Local SGD: Unified Theory and New Efficient Methods

no code implementations • 3 Nov 2020 • Eduard Gorbunov, Filip Hanzely, Peter Richtárik

We present a unified framework for analyzing local SGD methods in the convex and strongly convex regimes for distributed/federated training of supervised machine learning models.

Federated Learning
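
As a rough illustration of the setting only (a generic local SGD loop, not any of the specific methods analyzed in the paper): each device takes a few local gradient steps and the server periodically averages the iterates.

    import numpy as np

    def local_sgd(grads, x0, num_rounds=100, local_steps=10, lr=0.1):
        """Generic local SGD: each worker takes `local_steps` gradient steps
        on its own loss, then the server averages the local iterates."""
        x = np.copy(x0)
        for _ in range(num_rounds):
            local_iterates = []
            for g in grads:                      # one pass per worker
                xi = np.copy(x)
                for _ in range(local_steps):
                    xi -= lr * g(xi)             # local (stochastic) gradient step
                local_iterates.append(xi)
            x = np.mean(local_iterates, axis=0)  # communication round: averaging
        return x

    # Example: worker i holds the quadratic f_i(x) = 0.5 * ||x - b_i||^2.
    bs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    grads = [lambda x, b=b: x - b for b in bs]
    print(local_sgd(grads, np.zeros(2)))         # approaches the average of the b_i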

Lower Bounds and Optimal Algorithms for Personalized Federated Learning

no code implementations • NeurIPS 2020 • Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik

Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.

Personalized Federated Learning

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

no code implementations • 26 Aug 2020 • Filip Hanzely

With the increase in the volume of data and in the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new, efficient algorithms able to cope with these challenges.

BIG-bench Machine Learning

Stochastic Subspace Cubic Newton Method

no code implementations • ICML 2020 • Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii Nesterov

In this paper, we propose a new randomized second-order optimization algorithm, Stochastic Subspace Cubic Newton (SSCN), for minimizing a high-dimensional convex function $f$.

Second-order methods
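
Schematically, a stochastic subspace cubic Newton step minimizes the cubically regularized second-order model of $f$ restricted to a random subspace (illustrative notation, which may differ from the paper's):

    h^k \in \arg\min_{h \in \mathrm{range}(S_k)} \; \langle \nabla f(x^k), h \rangle + \tfrac{1}{2} \langle \nabla^2 f(x^k) h, h \rangle + \tfrac{M}{6} \|h\|^3, \qquad x^{k+1} = x^k + h^k,

where $S_k$ is a random sketch matrix with few columns, so each iteration only needs a low-dimensional slice of the gradient and Hessian.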

Federated Learning of a Mixture of Global and Local Models

no code implementations • 10 Feb 2020 • Filip Hanzely, Peter Richtárik

We propose a new optimization formulation for training federated learning models.

Federated Learning
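
The flavor of such a mixed formulation, sketched here in generic notation (the paper's own parametrization may differ), is to give every device its own model while penalizing deviation from the average:

    \min_{x_1, \dots, x_n} \; \frac{1}{n} \sum_{i=1}^n f_i(x_i) + \frac{\lambda}{2n} \sum_{i=1}^n \|x_i - \bar{x}\|^2, \qquad \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i,

so that $\lambda = 0$ yields purely local models, while letting $\lambda \to \infty$ forces all $x_i$ to coincide and recovers a single global model.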

Learning to Optimize via Dual space Preconditioning

no code implementations • 25 Sep 2019 • Sélim Chraibi, Adil Salim, Samuel Horváth, Filip Hanzely, Peter Richtárik

Preconditioning a minimization algorithm improves its convergence and can lead to a minimizer in one iteration in some extreme cases.
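
A minimal sketch of the classical idea being built on (not the dual-space procedure itself): a preconditioned gradient step $x^{+} = x - P \nabla f(x)$ finds the minimizer of a quadratic in a single iteration when $P$ is the inverse Hessian.

    import numpy as np

    # Quadratic f(x) = 0.5 * x^T A x - b^T x, with minimizer x* = A^{-1} b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    grad = lambda x: A @ x - b

    P = np.linalg.inv(A)                 # "perfect" preconditioner: inverse Hessian
    x = np.zeros(2)
    x = x - P @ grad(x)                  # one preconditioned gradient step
    print(np.allclose(x, np.linalg.solve(A, b)))   # True: minimizer in one step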

A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent

no code implementations • 27 May 2019 • Eduard Gorbunov, Filip Hanzely, Peter Richtárik

In this paper we introduce a unified analysis of a large family of variants of proximal stochastic gradient descent (SGD) which so far have required different intuitions and convergence analyses, have different applications, and have been developed separately in various communities.

Quantization
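
The object being unified is the proximal SGD iteration, which in generic form (standard notation, not the paper's specific parametrization) reads

    x^{k+1} = \operatorname{prox}_{\gamma R}\bigl(x^k - \gamma g^k\bigr), \qquad \mathbb{E}\bigl[g^k \mid x^k\bigr] = \nabla f(x^k),

where different constructions of the stochastic gradient estimator $g^k$ (subsampling, variance reduction, quantization, coordinate sketches) give the different variants covered by the single analysis.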

One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods

no code implementations • 27 May 2019 • Filip Hanzely, Peter Richtárik

We propose a remarkably general variance-reduced method suitable for solving regularized empirical risk minimization problems with either a large number of training examples, or a large model dimension, or both.
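
For reference, one classical member of the family such a general method is meant to cover is the SVRG-style variance-reduced estimator (shown here only as an illustration, not as the paper's construction):

    g^k = \nabla f_{i_k}(x^k) - \nabla f_{i_k}(w) + \nabla f(w),

where $f = \frac{1}{n}\sum_i f_i$, $i_k$ is sampled at random, and $w$ is an occasionally refreshed reference point; $g^k$ is unbiased and its variance vanishes as $x^k$ and $w$ approach the solution.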

Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit

no code implementations • 25 May 2019 • Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik

The best pair problem aims to find a pair of points, one from each of two disjoint sets, that minimizes the distance between the sets.
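
In symbols, the best pair problem for two disjoint sets $A$ and $B$ is

    \min_{x \in A,\; y \in B} \; \|x - y\|,

generalizing the best approximation problem (where one of the sets is a single point); alternating projections onto $A$ and $B$ is the textbook baseline for problems of this form.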

99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it

no code implementations • 27 Jan 2019 • Konstantin Mishchenko, Filip Hanzely, Peter Richtárik

We propose a fix based on a new update-sparsification method we develop in this work, which we suggest be used on top of existing methods.

Distributed Optimization

A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion

no code implementations • 27 Jan 2019 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko

In this work we present a randomized gossip algorithm for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.

Privacy Preserving
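
For reference, the baseline randomized (pairwise) gossip update that such privacy-preserving variants build on: at each step a random edge is drawn and its two endpoints replace their values by the average. The sketch below is the plain algorithm, without the paper's noise-insertion mechanism.

    import random

    def randomized_gossip(values, edges, num_steps=10_000, seed=0):
        """Plain randomized gossip for average consensus (no privacy protection):
        repeatedly average the values held by the endpoints of a random edge."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(num_steps):
            i, j = rng.choice(edges)
            x[i] = x[j] = 0.5 * (x[i] + x[j])
        return x

    # Example: a 4-node cycle; every node converges to the average value 2.5.
    print(randomized_gossip([1.0, 2.0, 3.0, 4.0], [(0, 1), (1, 2), (2, 3), (3, 0)]))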

SEGA: Variance Reduction via Gradient Sketching

no code implementations • NeurIPS 2018 • Filip Hanzely, Konstantin Mishchenko, Peter Richtárik

In each iteration, SEGA updates the current estimate of the gradient through a sketch-and-project operation using the information provided by the latest sketch, and this is subsequently used to compute an unbiased estimate of the true gradient through a random relaxation procedure.
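
In the simplest (coordinate-sketch) case, the update behind this description reads roughly as follows (notation simplified; see the paper for the general sketch operators):

    h^{k+1} = h^k + e_{i_k}\bigl(\nabla_{i_k} f(x^k) - h^k_{i_k}\bigr), \qquad g^k = h^k + n\, e_{i_k}\bigl(\nabla_{i_k} f(x^k) - h^k_{i_k}\bigr),

where $i_k$ is a uniformly random coordinate: $h^k$ is the running sketch-and-project estimate of the gradient and $g^k$ is the unbiased correction ($\mathbb{E}[g^k \mid x^k] = \nabla f(x^k)$) used to take the step.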

A Nonconvex Projection Method for Robust PCA

no code implementations • 21 May 2018 • Aritra Dutta, Filip Hanzely, Peter Richtárik

Robust principal component analysis (RPCA) is a well-studied problem with the goal of decomposing a matrix into the sum of low-rank and sparse components.

Face Detection • Shadow Removal
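
The decomposition being targeted is $M \approx L + S$ with $L$ low-rank and $S$ sparse. Below is a minimal alternating heuristic of the kind this literature studies, shown purely for illustration (it is not the paper's algorithm): a truncated SVD for the low-rank part alternated with hard thresholding for the sparse part.

    import numpy as np

    def rpca_alternate(M, rank, sparsity_thresh, num_iters=50):
        """Toy robust PCA heuristic: alternate a rank-`rank` SVD approximation
        with hard thresholding of the residual. Illustrative only."""
        S = np.zeros_like(M)
        for _ in range(num_iters):
            # Low-rank update: best rank-`rank` approximation of M - S.
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
            # Sparse update: keep only large-magnitude residual entries.
            R = M - L
            S = np.where(np.abs(R) > sparsity_thresh, R, 0.0)
        return L, S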

Privacy Preserving Randomized Gossip Algorithms

no code implementations • 23 Jun 2017 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko

In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.

Optimization and Control
