Search Results for author: Franck Iutzeler

Found 15 papers, 2 papers with code

The rate of convergence of Bregman proximal methods: Local geometry vs. regularity vs. sharpness

no code implementations • 15 Nov 2022 • Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution.
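
For reference, the Bregman function h mentioned in the abstract induces the standard Bregman divergence below; the Legendre exponent quantifies how fast this divergence grows near a solution. This is a reminder of the textbook definition and of the two instances the abstract cites, not of the paper's exponent formula:

```latex
% Bregman divergence induced by a differentiable convex (Legendre) function h:
D_h(x, y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle
% Euclidean case: h(x) = \tfrac{1}{2}\lVert x\rVert^2 gives D_h(x, y) = \tfrac{1}{2}\lVert x - y\rVert^2
% Entropic case:  h(x) = \sum_i x_i \log x_i gives the Kullback--Leibler divergence
```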

Push--Pull with Device Sampling

no code implementations • 8 Jun 2022 • Yu-Guan Hsieh, Yassine Laguel, Franck Iutzeler, Jérôme Malick

We consider decentralized optimization problems in which a number of agents collaborate to minimize the average of their local functions by exchanging information over an underlying communication graph.
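
To make the setup concrete, here is a minimal sketch of the classic decentralized gradient step this problem class builds on; it is not the paper's Push--Pull scheme, and the quadratic losses and ring-graph mixing matrix are illustrative assumptions:

```python
import numpy as np

# Illustrative setup: 3 agents, each with a local quadratic f_i(x) = 0.5*(x - b_i)^2.
# The minimizer of the average of the f_i is the mean of the b_i.
b = np.array([1.0, 2.0, 6.0])
grads = lambda x: x - b                      # local gradients, one per agent

# Doubly stochastic mixing matrix for a small fully connected graph (an assumption).
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

x = np.zeros(3)                              # one scalar iterate per agent
step = 0.1
for _ in range(500):
    x = W @ x - step * grads(x)              # mix with neighbors, then local gradient step

print(x, "target:", b.mean())                # agents hover near the average minimizer
                                             # (a constant stepsize leaves a small bias)
```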

Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation (Extended Abstract)

1 code implementation • 26 Feb 2022 • Aleksandra Burashnikova, Yury Maximov, Marianne Clausel, Charlotte Laclau, Franck Iutzeler, Massih-Reza Amini

This paper is an extended version of [Burashnikova et al., 2021, arXiv:2012.06910], where we proposed a theoretically supported sequential strategy for training a large-scale Recommender System (RS) over implicit feedback, mainly in the form of clicks.

Recommendation Systems
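
As a rough illustration of training over implicit feedback (a generic pairwise objective, not the specific sequential strategy proposed in the paper), clicked items are pushed to score above unclicked ones:

```python
import math

def pairwise_loss(score_pos: float, score_neg: float) -> float:
    """Logistic pairwise loss: encourages a clicked (preferred) item's score
    to exceed an unclicked item's score. Illustrative only."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_pos - score_neg))))

# A well-ordered pair incurs a small loss, a mis-ordered pair a large one.
print(pairwise_loss(2.0, -1.0))   # ~0.049
print(pairwise_loss(-1.0, 2.0))   # ~3.049
```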

The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities

no code implementations • 5 Jul 2021 • Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning.

Learning Theory, Relation
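
In the Euclidean special case of the mirror setting analyzed in this paper, the optimistic update extrapolates with the previous gradient and then corrects with a fresh one; a simplified form of the standard recursion, with g the (stochastic) operator of the variational inequality:

```latex
% Optimistic gradient (Euclidean special case of optimistic mirror descent):
X_{t+1/2} = X_t - \gamma_t\, g(X_{t-1/2}), \qquad
X_{t+1}   = X_t - \gamma_t\, g(X_{t+1/2}).
```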

Optimization in Open Networks via Dual Averaging

no code implementations • 27 May 2021 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In networks of autonomous agents (e.g., fleets of vehicles, scattered sensors), the problem of minimizing the sum of the agents' local functions has attracted considerable interest.

Distributed Optimization
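
For reference, the base dual-averaging recursion in its single-agent Euclidean form (the open-network, multi-agent treatment is the paper's contribution; the objective and the sqrt(t) scaling below are illustrative assumptions):

```python
import numpy as np

def grad(x):
    # Illustrative objective f(x) = 0.5 * ||x - target||^2
    return x - np.array([1.0, -2.0])

z = np.zeros(2)                # running sum of gradients (the "dual" variable)
for t in range(1, 2001):
    x = -z / np.sqrt(t)        # primal from dual: argmin <z, x> + (sqrt(t)/2)||x||^2
    z += grad(x)               # accumulate the new gradient

print(x)                       # approaches the minimizer [1, -2]
```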

Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism

no code implementations • 21 Dec 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

In this paper, we provide a general framework for studying multi-agent online learning problems in the presence of delays and asynchronicities.
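
A single-agent sketch of the delay mechanism studied here: a gradient requested at round t is only applied d rounds later (the fixed delay d is an assumption; the paper's framework handles general asynchronous multi-agent patterns):

```python
from collections import deque

def grad(x):
    return x - 3.0             # illustrative loss f(x) = 0.5 * (x - 3)^2

d = 5                          # each gradient arrives d rounds after it was requested
pending = deque()
x, step = 0.0, 0.05
for t in range(400):
    pending.append(grad(x))    # gradient computed now ...
    if len(pending) > d:
        x -= step * pending.popleft()   # ... but only applied d rounds later

print(x)                       # still converges near 3 despite the delay
```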

Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications

no code implementations • 2 Oct 2020 • Franck Iutzeler, Jérôme Malick

Nonsmoothness is often a curse for optimization, but it is sometimes a blessing, in particular for applications in machine learning.

BIG-bench Machine Learning, Dimensionality Reduction
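
One concrete instance of the "blessing": with an l1 regularizer, the proximal operator zeroes out coordinates exactly, so proximal-gradient iterates identify the sparsity pattern of the solution after finitely many steps. A minimal illustration with made-up data:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 : zeroes small coordinates exactly.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Illustrative lasso-type problem: min_x 0.5*||x - b||^2 + lam*||x||_1
b = np.array([3.0, 0.2, -1.5, 0.05])
lam, step = 0.5, 1.0
x = np.zeros_like(b)
for _ in range(50):
    x = soft_threshold(x - step * (x - b), step * lam)

print(x)                       # nonzeros only where |b_i| > lam
print(np.nonzero(x)[0])        # the support "identified" by the prox steps
```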

Rank-one partitioning: formalization, illustrative examples, and a new cluster enhancing strategy

no code implementations • 1 Sep 2020 • Charlotte Laclau, Franck Iutzeler, Ievgen Redko

In this paper, we introduce and formalize a rank-one partitioning learning paradigm that unifies partitioning methods which summarize a data set using a single vector, from which the final clustering partition is then derived.

Clustering, Denoising
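
A toy instance of the paradigm: summarize the data set with a single vector (here, the leading left singular vector, an illustrative choice rather than the paper's estimator) and threshold it to derive the partition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two illustrative clusters of points in R^5
A = np.vstack([rng.normal(+2.0, 0.3, size=(10, 5)),
               rng.normal(-2.0, 0.3, size=(10, 5))])

# Summarize the data set with a single vector: the leading left singular vector.
u = np.linalg.svd(A, full_matrices=False)[0][:, 0]

# Derive the final partition from that one vector (sign thresholding).
labels = (u > 0).astype(int)
print(labels)                  # rows 0-9 and rows 10-19 land in different clusters
```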

Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling

no code implementations • NeurIPS 2020 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning.
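
The title's recipe in one loop: a large exploration step to probe the operator, then a small update step. A sketch on a bilinear saddle point, with illustrative stepsizes; the separation of scales between gamma and eta is the point:

```python
import numpy as np

def V(z):
    # Operator of the saddle-point problem min_x max_y x*y: V(x, y) = (y, -x)
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])
gamma, eta = 0.5, 0.05           # explore aggressively (gamma) ...
for _ in range(2000):
    z_lead = z - gamma * V(z)    # exploration step
    z = z - eta * V(z_lead)      # ... update conservatively (eta)

print(z)                         # approaches the saddle point (0, 0)
```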

On the convergence of single-call stochastic extra-gradient methods

no code implementations • NeurIPS 2019 • Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos

Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems).
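
The "single-call" idea, sketched on the same kind of bilinear toy problem: reuse the previous gradient for the extrapolation, so each iteration queries the oracle once instead of twice (a simplified, deterministic rendition; the paper treats the stochastic case):

```python
import numpy as np

def V(z):
    x, y = z
    return np.array([y, -x])     # operator for min_x max_y x*y

z = np.array([1.0, 1.0])
g_prev = V(z)                    # remember the last oracle answer
step = 0.05
for _ in range(3000):
    z_lead = z - step * g_prev   # extrapolate with the *previous* gradient (no new call)
    g_prev = V(z_lead)           # the single oracle call of this iteration
    z = z - step * g_prev        # update with the fresh gradient

print(z)                         # approaches the saddle point (0, 0)
```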

A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm

no code implementations • 25 Jun 2018 • Konstantin Mishchenko, Franck Iutzeler, Jérôme Malick

We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective is a sum of smooth functions, local to each worker, plus a non-smooth function.
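
For this objective structure, a synchronous proximal-gradient baseline looks as follows; the paper's contribution is the asynchronous, delay-tolerant variant, and the worker functions and l1 term below are illustrative:

```python
import numpy as np

# min_x  sum_i f_i(x) + g(x), with smooth f_i(x) = 0.5*||x - b_i||^2 (one per worker)
# and non-smooth g(x) = lam*||x||_1.
B = np.array([[2.0, 0.0], [4.0, 0.2], [0.0, -0.2]])   # one target b_i per worker
lam, step = 0.5, 0.3

def prox_l1(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(2)
for _ in range(200):
    full_grad = np.sum(x - B, axis=0)      # sum of the workers' smooth gradients
    x = prox_l1(x - step * full_grad, step * lam)

print(x)                                   # sparse minimizer of the composite objective
```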

On the Proximal Gradient Algorithm with Alternated Inertia

no code implementations • 17 Jan 2018 • Franck Iutzeler, Jérôme Malick

In this paper, we investigate the attractive properties of the proximal gradient algorithm with inertia.
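
The alternation in question, sketched: the inertial (momentum) term is added only on every other iteration (the momentum coefficient and the l1 non-smooth term are illustrative assumptions):

```python
import numpy as np

def grad(x):
    return x - np.array([2.0, -1.0])    # smooth part f(x) = 0.5*||x - b||^2

def prox_g(v, tau):
    # Proximal operator of tau*||.||_1 (the non-smooth part, chosen for illustration)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

step, beta = 0.5, 0.5
x_prev = x = np.zeros(2)
for t in range(100):
    y = x + beta * (x - x_prev) if t % 2 == 1 else x   # inertia on odd iterations only
    x_prev, x = x, prox_g(y - step * grad(y), step * 0.3)

print(x)                      # minimizer of 0.5*||x - b||^2 + 0.3*||x||_1
```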

An Asynchronous Distributed Framework for Large-scale Learning Based on Parameter Exchanges

no code implementations • 22 May 2017 • Bikash Joshi, Franck Iutzeler, Massih-Reza Amini

In many distributed learning problems, the heterogeneous loading of computing machines may harm the overall performance of synchronous strategies.

Binary Classification, General Classification, +1
