Search Results for author: Jérôme Malick

Found 15 papers, 4 papers with code

Universal Generalization Guarantees for Wasserstein Distributionally Robust Models

no code implementations 19 Feb 2024 Tam Le, Jérôme Malick

Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts.
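For orientation, the generic Wasserstein distributionally robust objective replaces the empirical risk by a worst case over a Wasserstein ball around the empirical distribution. A standard formulation, given here only for context (the paper's exact setting and assumptions may differ), is

\[
\min_{\theta}\; \sup_{Q \,:\, W(Q,\widehat{P}_n)\le \rho}\; \mathbb{E}_{\xi\sim Q}\big[\ell(\theta;\xi)\big],
\]

where \(\widehat{P}_n\) is the empirical distribution of the training sample, \(W\) is a Wasserstein distance, and \(\rho>0\) is the radius of the ambiguity ball.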

The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness

no code implementations 15 Nov 2022 Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution.
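For context, a Bregman proximal step for a variational inequality with operator v replaces the Euclidean projection by the Bregman divergence of a distance-generating function h. A generic form of the update (a sketch of the template, not necessarily the exact family of methods covered by the paper) is

\[
D_h(x,x') = h(x) - h(x') - \langle \nabla h(x'),\, x - x'\rangle,
\qquad
x_{t+1} = \operatorname*{arg\,min}_{x\in\mathcal{X}} \big\{ \gamma\,\langle v(x_t),\, x\rangle + D_h(x, x_t) \big\}.
\]

Taking \(h(x)=\tfrac12\|x\|^2\) gives the Euclidean (projected) step, while the negative entropy gives the entropic update; the Legendre exponent mentioned above measures how fast \(D_h\) grows near the solution.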

Push-Pull with Device Sampling

no code implementations 8 Jun 2022 Yu-Guan Hsieh, Yassine Laguel, Franck Iutzeler, Jérôme Malick

We consider decentralized optimization problems in which a number of agents collaborate to minimize the average of their local functions by exchanging over an underlying communication graph.
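For illustration, here is a minimal NumPy sketch of the classical push-pull gradient-tracking recursion in its synchronous form, without the device-sampling mechanism studied in the paper; the ring graph, the quadratic local objectives, and all variable names are assumptions made for the example.

# Classical push-pull gradient tracking (synchronous, no device sampling),
# sketched for quadratic local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, alpha = 5, 3, 0.01

A = [rng.standard_normal((4, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(4) for _ in range(n_agents)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Mixing matrices on a ring graph: row-stochastic R "pulls" decision variables,
# column-stochastic C "pushes" the gradient trackers.
W = np.eye(n_agents) + np.eye(n_agents, k=1) + np.eye(n_agents, k=-1)
W[0, -1] = W[-1, 0] = 1.0
R = W / W.sum(axis=1, keepdims=True)
C = W / W.sum(axis=0, keepdims=True)

x = np.zeros((n_agents, dim))
y = np.array([grad(i, x[i]) for i in range(n_agents)])   # gradient trackers

for _ in range(3000):
    x_new = R @ x - alpha * y
    y = C @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n_agents)])
    x = x_new

print(x.mean(axis=0))   # agents' iterates approach the minimizer of the average objective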

Federated Learning with Superquantile Aggregation for Heterogeneous Data

1 code implementation 17 Dec 2021 Krishna Pillutla, Yassine Laguel, Jérôme Malick, Zaid Harchaoui

We present a federated learning framework that is designed to robustly deliver good predictive performance across individual clients with heterogeneous data.

Federated Learning
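As a rough illustration of the idea behind the framework above (a simplified sketch, not the paper's exact algorithm), a superquantile-style server step averages updates only over the tail fraction of clients with the largest local losses, rather than over all clients. The function name, the tail rule, and the parameter theta are assumptions made for the example.

# Superquantile-style aggregation: average only the updates of the
# (1 - theta) fraction of clients whose local losses are largest.
import numpy as np

def superquantile_aggregate(updates, losses, theta=0.5):
    updates, losses = np.asarray(updates), np.asarray(losses)
    k = max(1, int(np.ceil((1.0 - theta) * len(losses))))
    tail = np.argsort(losses)[-k:]        # indices of the k worst-off clients
    return updates[tail].mean(axis=0)

# Example: 4 clients with scalar model updates; theta=0.5 keeps the 2 worst.
print(superquantile_aggregate([0.1, 0.4, -0.2, 0.9], [1.0, 3.0, 0.5, 2.5], theta=0.5))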

The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities

no code implementations 5 Jul 2021 Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning.

Learning Theory, Relation
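For reference, a common template for optimistic mirror descent applied to a variational inequality with operator v and Bregman divergence D_h is the following (a generic deterministic form; the stochastic variant analyzed in the paper involves noise models and step-size schedules not shown here):

\[
x_{t+1/2} = \operatorname*{arg\,min}_{x\in\mathcal{X}} \big\{ \gamma_t \langle v(x_{t-1/2}),\, x\rangle + D_h(x, x_t)\big\},
\qquad
x_{t+1} = \operatorname*{arg\,min}_{x\in\mathcal{X}} \big\{ \gamma_t \langle v(x_{t+1/2}),\, x\rangle + D_h(x, x_t)\big\},
\]

so that each iteration requires only one new evaluation of v, the extrapolation step reusing the gradient from the previous iteration.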

Optimization in Open Networks via Dual Averaging

no code implementations 27 May 2021 Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In networks of autonomous agents (e.g., fleets of vehicles, scattered sensors), the problem of minimizing the sum of the agents' local functions has received considerable interest.

Distributed Optimization
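To show the mechanism behind the paper above, here is a minimal sketch of classical distributed dual averaging on a fixed (closed) network; the paper addresses the harder open-network setting where agents join and leave, which this sketch does not model. All names, the mixing matrix, and the quadratic local objectives are assumptions for illustration.

# Classical distributed dual averaging on a fixed network:
# agents mix their dual (gradient-sum) variables, then take a prox step.
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 4, 2
targets = rng.standard_normal((n_agents, dim))   # f_i(x) = 0.5 * ||x - t_i||^2
grad = lambda i, x: x - targets[i]

W = np.full((n_agents, n_agents), 1.0 / n_agents) # doubly stochastic mixing (complete graph)

z = np.zeros((n_agents, dim))                     # dual variables (accumulated gradients)
x = np.zeros((n_agents, dim))

for t in range(1, 501):
    g = np.array([grad(i, x[i]) for i in range(n_agents)])
    z = W @ z + g                                 # mix dual variables, add fresh gradients
    eta = 1.0 / np.sqrt(t)
    x = -eta * z                                  # argmin <z, x> + ||x||^2 / (2 * eta)

print(x.mean(axis=0), targets.mean(axis=0))       # iterates approach the average target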

Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism

no code implementations 21 Dec 2020 Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In this paper, we provide a general framework for studying multi-agent online learning problems in the presence of delays and asynchronicities.

Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications

no code implementations 2 Oct 2020 Franck Iutzeler, Jérôme Malick

Nonsmoothness is often a curse for optimization, but it is sometimes a blessing, in particular for applications in machine learning.

BIG-bench Machine Learning, Dimensionality Reduction
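A concrete instance of this "blessing" is proximal identification with the l1 norm: its proximal operator, soft-thresholding, sets small coordinates exactly to zero, which is how proximal methods identify a sparse structure after finitely many iterations. A minimal illustration (not code from the paper):

# Soft-thresholding, the proximal operator of lam * ||.||_1: coordinates with
# magnitude below lam are mapped exactly to zero, revealing the sparsity pattern.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([0.05, -1.30, 0.20, 0.80, -0.02])
print(soft_threshold(v, lam=0.25))   # -> [ 0.   -1.05  0.    0.55 -0.  ]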

First-order Optimization for Superquantile-based Supervised Learning

1 code implementation 30 Sep 2020 Yassine Laguel, Jérôme Malick, Zaid Harchaoui

Classical supervised learning via empirical risk (or negative log-likelihood) minimization hinges upon the assumption that the testing distribution coincides with the training distribution.

BIG-bench Machine Learning, Regression
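For intuition about the paper above: the superquantile (conditional value-at-risk) of the losses replaces the plain average used in empirical risk minimization by the average of the worst (1 - theta) fraction of losses. A minimal sketch on a finite sample (a sorting-based approximation for illustration, not the paper's implementation):

# Superquantile (CVaR) at level theta: mean of the (1 - theta) fraction of largest losses.
import numpy as np

def superquantile(losses, theta):
    losses = np.sort(np.asarray(losses))
    k = max(1, int(np.ceil((1.0 - theta) * losses.size)))
    return losses[-k:].mean()

losses = np.array([0.2, 0.5, 1.5, 3.0])
print(losses.mean())               # 1.30 -> ordinary empirical risk
print(superquantile(losses, 0.5))  # 2.25 -> mean of the two largest losses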

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling

no code implementations NeurIPS 2020 Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning.
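For reference, the basic deterministic, fixed-step extragradient iteration takes an exploration half-step and then updates from the original point; the variable step-size scaling studied in the paper goes beyond this simple template. A minimal sketch on a bilinear saddle point (all details below are illustrative):

# Extragradient on the bilinear saddle point min_x max_y x*y, with operator
# v(x, y) = (y, -x).  Plain gradient descent-ascent cycles around the saddle
# point here, while extragradient converges to it.
import numpy as np

def v(z):
    x, y = z
    return np.array([y, -x])

z, gamma = np.array([1.0, 1.0]), 0.2
for _ in range(200):
    z_half = z - gamma * v(z)      # exploration (leading) half-step
    z = z - gamma * v(z_half)      # update from the original point
print(z)                           # close to the saddle point (0, 0)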

Device Heterogeneity in Federated Learning: A Superquantile Approach

1 code implementation arXiv preprint 2020 Yassine Laguel, Krishna Pillutla, Jérôme Malick, Zaid Harchaoui

We propose a federated learning framework to handle heterogeneous client devices which do not conform to the population data distribution.

Federated Learning

On the convergence of single-call stochastic extra-gradient methods

no code implementations NeurIPS 2019 Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems).

A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm

no code implementations 25 Jun 2018 Konstantin Mishchenko, Franck Iutzeler, Jérôme Malick

We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective is a sum of smooth functions, each local to a worker, plus a non-smooth function.
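The building block behind this kind of splitting, in its basic centralized and synchronous form, is the proximal gradient step for an objective f(x) + g(x) with f smooth and g nonsmooth. A minimal sketch for the lasso-type case g = lam * ||.||_1 (the paper's asynchronous, distributed machinery is not reproduced here; all names are illustrative):

# Proximal gradient iteration for F(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1.
import numpy as np

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.5

def prox_l1(v, t):                            # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of grad f
x = np.zeros(5)
for _ in range(300):
    x = prox_l1(x - step * A.T @ (A @ x - b), step * lam)
print(x)                                      # sparse solution of the lasso problem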

Sensitivity Analysis for Mirror-Stratifiable Convex Functions

1 code implementation 11 Jul 2017 Jalal Fadili, Jérôme Malick, Gabriel Peyré

This primal-dual pairing between the strata of a mirror-stratifiable function and those of its conjugate is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms.
