Search Results for author: Laurent Condat

Found 18 papers, 5 papers with code

From Local SGD to Local Fixed-Point Methods for Federated Learning

no code implementations ICML 2020 Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik

Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms.

Federated Learning
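To illustrate the fixed-point viewpoint in this abstract, here is a toy sketch in which plain gradient descent is read as a fixed-point iteration and applied locally by each client before averaging; all names, step sizes, and the toy objectives are chosen here for illustration and are not the paper's method.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact method): gradient descent
# is a fixed-point iteration x <- T(x) with T(x) = x - step * grad f(x),
# whose fixed points are the minimizers of f. A "local" variant lets each
# client apply its own operator several times before the server averages.

def local_fixed_point_round(x, grads, step=0.1, local_steps=5):
    """One communication round: each client iterates its own map, then average."""
    client_iterates = []
    for grad in grads:                       # grads: list of per-client gradient functions
        z = x.copy()
        for _ in range(local_steps):         # local fixed-point iterations T_i(z) = z - step * grad_i(z)
            z = z - step * grad(z)
        client_iterates.append(z)
    return np.mean(client_iterates, axis=0)  # server averages the local iterates

# Toy usage: two clients with quadratic objectives f_i(x) = 0.5 * ||x - a_i||^2.
grads = [lambda z: z - 1.0, lambda z: z + 1.0]
x = np.array([5.0])
for _ in range(50):
    x = local_fixed_point_round(x, grads)
print(x)  # approaches the consensus minimizer, here around 0
```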

FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

no code implementations 14 Mar 2024 Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik

Federated Learning (FL) has garnered increasing attention because it allows heterogeneous clients to process their private data locally while interacting with a central server, thereby preserving privacy.

Federated Learning, Quantization

LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

no code implementations 7 Mar 2024 Laurent Condat, Artavazd Maranjyan, Peter Richtárik

In distributed optimization and learning, and even more so in the modern framework of federated learning, communication, which is slow and costly, is critical.

Distributed Optimization, Federated Learning +1

Revisiting Decentralized ProxSkip: Achieving Linear Speedup

no code implementations 12 Oct 2023 Luyao Guo, Sulaiman A. Alghunaim, Kun Yuan, Laurent Condat, Jinde Cao

We demonstrate that the leading communication complexity of ProxSkip is $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon^2}\right)$ for non-convex and convex settings, and $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon}\right)$ for the strongly convex setting, where $n$ represents the number of nodes, $p$ denotes the probability of communication, $\sigma^2$ signifies the level of stochastic noise, and $\epsilon$ denotes the desired accuracy level.

Distributed Optimization, Federated Learning
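As a purely illustrative reading of the quoted bounds (constants and lower-order terms ignored, parameter values made up), the small helper below evaluates the leading communication terms and shows the linear speedup in the number of nodes $n$.

```python
# Rough illustration of the leading communication terms quoted above,
# with all constants ignored and example values chosen arbitrarily:
# more nodes n gives a linear speedup, and a smaller communication
# probability p proportionally lowers the communicated cost.

def leading_comm_term(p, sigma2, n, eps, strongly_convex=False):
    return p * sigma2 / (n * eps) if strongly_convex else p * sigma2 / (n * eps**2)

print(leading_comm_term(p=0.1, sigma2=1.0, n=10,  eps=1e-2))   # 100.0
print(leading_comm_term(p=0.1, sigma2=1.0, n=100, eps=1e-2))   # 10.0 -> linear speedup in n
print(leading_comm_term(p=0.1, sigma2=1.0, n=100, eps=1e-2, strongly_convex=True))  # 0.1
```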

Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning

1 code implementation 22 May 2023 Kai Yi, Laurent Condat, Peter Richtárik

Federated Learning is an evolving machine learning paradigm, in which multiple clients perform computations based on their individual private data, interspersed by communication with a remote server.

Federated Learning

Joint demosaicing and fusion of multiresolution coded acquisitions: A unified image formation and reconstruction method

1 code implementation 3 Sep 2022 Daniele Picone, Mauro Dalla Mura, Laurent Condat

Novel optical imaging devices allow for hybrid acquisition modalities such as compressed acquisitions with locally different spatial and spectral resolutions captured by a single focal plane array.

Demosaicking, Image Reconstruction

EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization

1 code implementation 9 May 2022 Laurent Condat, Kai Yi, Peter Richtárik

Our general approach works with a new, larger class of compressors, which has two parameters, the bias and the variance, and includes unbiased and biased compressors as particular cases.

Distributed Optimization
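Two standard compressors illustrate the two families this abstract refers to; the sketch below uses conventional textbook definitions (rand-k and top-k), not the paper's own operators or parameter values.

```python
import numpy as np

# Hedged illustration, not the paper's exact definitions: rand-k with
# rescaling is unbiased (E[C(x)] = x, with variance growing as fewer
# coordinates are kept), while top-k is biased but has bounded relative
# error. EF-BV's compressor class is parameterized by such bias/variance
# quantities; the operators below are only indicative examples.

def rand_k(x, k, rng=np.random.default_rng(0)):
    """Unbiased sparsifier: keep k random coordinates, rescale by d/k."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def top_k(x, k):
    """Biased sparsifier: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

x = np.array([3.0, -0.5, 2.0, 0.1])
print(rand_k(x, 2), top_k(x, 2))
```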

Tikhonov Regularization of Circle-Valued Signals

no code implementations 5 Aug 2021 Laurent Condat

It is common to have to process signals or images whose values are cyclic and can be represented as points on the complex circle, like wrapped phases, angles, orientations, or color hues.
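A minimal sketch of the representation described above, assuming nothing beyond the abstract: a cyclic value is mapped to a point on the complex unit circle, and closeness is measured along the circle rather than on the real line. The function names are chosen here for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): a cyclic value theta is
# encoded as the point exp(i*theta) on the complex unit circle, and
# distances are measured along the circle, so that e.g. the wrapped phases
# 0.1 and 2*pi - 0.1 are close even though they differ numerically.

def to_circle(theta):
    return np.exp(1j * theta)                  # point on the complex unit circle

def cyclic_distance(a, b):
    d = np.angle(to_circle(a) / to_circle(b))  # wrapped difference in (-pi, pi]
    return np.abs(d)

print(cyclic_distance(0.1, 2 * np.pi - 0.1))   # ~0.2, not ~6.08
```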

MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization

no code implementations 6 Jun 2021 Laurent Condat, Peter Richtárik

We propose a generic variance-reduced algorithm, which we call MUltiple RANdomized Algorithm (MURANA), for minimizing a sum of several smooth functions plus a regularizer, in a sequential or distributed manner.
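The problem template mentioned above can be written out explicitly; the following formalization and notation ($f_i$, $R$, $n$) are assumed here for illustration and are not taken from the paper.

```latex
% Assumed notation, for illustration only: n smooth functions f_i plus a
% regularizer R, minimized over x.
\min_{x \in \mathbb{R}^d} \;\; \frac{1}{n} \sum_{i=1}^{n} f_i(x) \;+\; R(x),
\qquad f_i \ \text{smooth}, \quad R \ \text{a possibly nonsmooth regularizer}.
```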

An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints

no code implementations 22 Feb 2021 Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtárik

Optimization problems under affine constraints appear in various areas of machine learning.

Optimization and Control
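As a reminder of what such problems look like, here is the generic form of a strongly convex problem under affine constraints; the symbols ($f$, $K$, $b$) are chosen here for illustration and are not the paper's notation.

```latex
% Illustrative notation only: minimize a strongly convex f subject to an
% affine constraint given by a linear operator K and a vector b.
\min_{x \in \mathbb{R}^d} \; f(x) \quad \text{subject to} \quad K x = b,
\qquad f \ \text{strongly convex}.
```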

Optimal Gradient Compression for Distributed and Federated Learning

no code implementations 7 Oct 2020 Alyazeed Albasyoni, Mher Safaryan, Laurent Condat, Peter Richtárik

In the average-case analysis, we design a simple compression operator, Spherical Compression, which naturally achieves the lower bound.

Federated Learning Quantization

Distributed Proximal Splitting Algorithms with Rates and Acceleration

no code implementations 2 Oct 2020 Laurent Condat, Grigory Malinovsky, Peter Richtárik

We analyze several generic proximal splitting algorithms well suited for large-scale convex nonsmooth optimization.
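As a generic illustration of what a proximal splitting iteration looks like, the sketch below implements the classical forward-backward (proximal gradient) scheme on a toy lasso-type problem; it is not one of the specific algorithms analyzed in the paper, and the problem data are made up.

```python
import numpy as np

# Generic proximal splitting illustration: smooth terms are handled with
# gradient steps, nonsmooth ones through their proximity operator, here
# soft-thresholding for the l1 norm.

def prox_l1(v, tau):
    """prox_{tau*||.||_1}(v): soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(grad_f, prox_g, x0, step, n_iters=200):
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)   # gradient step on f, prox step on g
    return x

# Toy lasso-type problem: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.5
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of grad_f
x_hat = forward_backward(grad_f, lambda v, t: prox_l1(v, lam * t), np.zeros(5), step)
print(x_hat)                                      # sparse estimate
```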

From Local SGD to Local Fixed-Point Methods for Federated Learning

no code implementations 3 Apr 2020 Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik

Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms.

Federated Learning

Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms

no code implementations 3 Apr 2020 Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik

We consider minimizing the sum of three convex functions, where the first one F is smooth, the second one is nonsmooth and proximable, and the third one is the composition of a nonsmooth proximable function with a linear operator L. This template problem has many applications, for instance, in image processing and machine learning.
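The template described in this abstract can be stated compactly; F and L are the abstract's own names, while G and H below are names assumed here for the two proximable terms.

```latex
% F and L follow the abstract; G and H are assumed names for the nonsmooth
% proximable terms.
\min_{x} \; F(x) + G(x) + H(Lx),
\qquad F \ \text{smooth}, \quad G, H \ \text{nonsmooth but proximable}.
```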

On-the-fly Approximation of Multivariate Total Variation Minimization

no code implementations 22 Apr 2015 Jordan Frecon, Nelly Pustelnik, Patrice Abry, Laurent Condat

In the context of change-point detection, addressed by Total Variation minimization strategies, an efficient on-the-fly algorithm has been designed, leading to exact solutions for univariate data.

Change Point Detection
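For context, the standard univariate total-variation objective behind such change-point strategies reads as follows (a common textbook form, not necessarily the paper's exact formulation): penalizing the jumps $|x_{i+1} - x_i|$ yields piecewise-constant estimates whose breakpoints serve as change points.

```latex
% Common univariate TV-denoising objective (illustrative, not necessarily
% the paper's exact formulation): y is the observed signal, lambda > 0 the
% regularization parameter.
\min_{x \in \mathbb{R}^N} \;\; \frac{1}{2} \sum_{i=1}^{N} (x_i - y_i)^2
  \;+\; \lambda \sum_{i=1}^{N-1} |x_{i+1} - x_i|
```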
