Search Results for author: Quentin Bertrand

Found 14 papers, 8 papers with code

Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences

no code implementations • 12 Jun 2024 • Damien Ferbach, Quentin Bertrand, Avishek Joey Bose, Gauthier Gidel

We prove that, if the data is curated according to a reward model, then the expected reward of the iterative retraining procedure is maximized.
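As a toy illustration of this mechanism (the reward function and every constant below are invented for illustration, not taken from the paper), consider a one-dimensional Gaussian "model" retrained each generation on the top-reward fraction of its own samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(x):
    # hypothetical reward model: prefers samples close to 3.0
    return -np.abs(x - 3.0)

mu = 0.0  # one-dimensional Gaussian "generative model" N(mu, 1)
history = []
for _ in range(20):
    samples = rng.normal(mu, 1.0, 1000)                    # generate
    curated = samples[np.argsort(reward(samples))[-200:]]  # curate: keep top 20% by reward
    mu = curated.mean()                                    # retrain on the curated data
    history.append(reward(rng.normal(mu, 1.0, 5000)).mean())

assert history[-1] > history[0]  # expected reward increases across generations
assert abs(mu - 3.0) < 0.5       # the model concentrates near the reward peak
```

The curation step acts as an implicit preference signal: retraining on reward-filtered samples pushes the model toward high-reward regions, which is the behaviour the paper analyzes rigorously.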

On the Stability of Iterative Retraining of Generative Models on their own Data

1 code implementation • 30 Sep 2023 • Quentin Bertrand, Avishek Joey Bose, Alexandre Duplessis, Marco Jiralerspong, Gauthier Gidel

In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets -- from classical training on real data to self-consuming generative models trained on purely synthetic data.
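A minimal sketch of such a mixed-data retraining loop, on a one-dimensional Gaussian model with invented constants (the paper's framework covers far more general generative models):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 500)  # fixed pool of real data

def retrain(lam, n_iter=100, n_fit=500):
    """Refit a 1-D Gaussian for n_iter generations on a (lam real, 1-lam synthetic) mix."""
    mu, sigma = real.mean(), real.std()
    for _ in range(n_iter):
        k = int(lam * n_fit)
        synth = rng.normal(mu, sigma, n_fit - k)        # the model's own samples
        data = np.concatenate([rng.choice(real, k), synth])
        mu, sigma = data.mean(), data.std()             # "retrain" = refit moments
    return mu, sigma

mu_mix, sig_mix = retrain(lam=0.5)    # anchored by real data: stays near (0, 1)
mu_solo, sig_solo = retrain(lam=0.0)  # fully self-consuming: sampling error compounds

assert abs(mu_mix) < 0.3 and abs(sig_mix - 1.0) < 0.3
```

With a real-data fraction in the mix, each refit is pulled back toward the true distribution; the purely self-consuming run (`lam=0.0`) has no such anchor, which is the instability regime the paper characterizes.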

Omega: Optimistic EMA Gradients

1 code implementation • 13 Jun 2023 • Juan Ramirez, Rohan Sukumaran, Quentin Bertrand, Gauthier Gidel

Stochastic min-max optimization has gained interest in the machine learning community with the advancements in GANs and adversarial training.
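Omega combines optimistic updates with an exponential moving average of past gradients; the sketch below shows only the optimistic principle it builds on, on the classic bilinear game f(x, y) = xy, with invented step sizes (this is not the paper's algorithm):

```python
# min_x max_y f(x, y) = x * y has its unique equilibrium at (0, 0).
# Plain simultaneous gradient descent-ascent spirals outward on this game,
# while the optimistic variant (correcting with the previous gradient) converges.

def play(optimistic, lr=0.3, steps=500):
    x, y = 1.0, 1.0
    gx_prev, gy_prev = 1.0, 1.0      # initialized to the first gradients (y0, x0)
    for _ in range(steps):
        gx, gy = y, x                # df/dx = y, df/dy = x
        if optimistic:
            x, y = x - lr * (2 * gx - gx_prev), y + lr * (2 * gy - gy_prev)
        else:
            x, y = x - lr * gx, y + lr * gy
        gx_prev, gy_prev = gx, gy
    return (x * x + y * y) ** 0.5    # distance to the equilibrium

assert play(optimistic=True) < 1e-6   # optimistic updates converge
assert play(optimistic=False) > 10.0  # plain GDA diverges
```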

The Curse of Unrolling: Rate of Differentiating Through Optimization

no code implementations • 27 Sep 2022 • Damien Scieur, Quentin Bertrand, Gauthier Gidel, Fabian Pedregosa

Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few.

Dataset Distillation • Hyperparameter Optimization • +2
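The Jacobian in question can be made concrete on ridge regression, where unrolled (forward-mode) differentiation of gradient descent can be checked against the implicit-function-theorem Jacobian; this is an illustrative setup with invented data, not the paper's rate analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
lam = 1.0

H = X.T @ X + lam * np.eye(5)            # Hessian of the ridge objective
eta = 1.0 / np.linalg.eigvalsh(H).max()  # step size below 1/L

w = np.zeros(5)   # gradient-descent iterate
dw = np.zeros(5)  # unrolled derivative dw/dlam, propagated alongside w
for _ in range(2000):
    grad = X.T @ (X @ w - y) + lam * w
    dw = dw - eta * (H @ dw + w)  # forward-mode differentiation of the update
    w = w - eta * grad

w_star = np.linalg.solve(H, X.T @ y)   # exact ridge solution
dw_star = -np.linalg.solve(H, w_star)  # implicit-function-theorem Jacobian
assert np.allclose(w, w_star, atol=1e-8)
assert np.allclose(dw, dw_star, atol=1e-6)
```

The paper studies how fast `dw` approaches `dw_star` as a function of the number of unrolled iterations; the loop above is the differentiation-through-optimization procedure whose rate it analyzes.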

On the Limitations of Elo: Real-World Games, are Transitive, not Additive

1 code implementation • 21 Jun 2022 • Quentin Bertrand, Wojciech Marian Czarnecki, Gauthier Gidel

In this study, we investigate the challenge of identifying the strength of the transitive component in games.

Starcraft • Starcraft II
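Why a single scalar rating cannot capture such games can be seen on rock-paper-scissors, the extreme non-transitive case; a sketch with standard (batch, deterministic) Elo updates and invented constants:

```python
import numpy as np

# rock-paper-scissors win probabilities: a purely cyclic, non-transitive game
P = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [1.0, 0.0, 0.5]])

ratings = np.array([200.0, 0.0, -200.0])  # arbitrary starting ratings
K = 16
for _ in range(500):
    diff = ratings[None, :] - ratings[:, None]
    expected = 1.0 / (1.0 + 10.0 ** (diff / 400.0))     # Elo expected scores
    ratings = ratings + K * (P - expected).sum(axis=1)  # batch Elo update

# Elo collapses the cycle to near-equal ratings, predicting ~50% for every
# matchup, even though each game's outcome is deterministic
assert np.ptp(ratings) < 1.0
```

The transitive component of this game is zero, so ratings converge to equality and carry no predictive information; the paper's point is to quantify how strong the transitive component is in real-world games.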

Beyond L1: Faster and Better Sparse Models with skglm

2 code implementations • 16 Apr 2022 • Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias

We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties.
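skglm itself composes datafit and penalty objects behind a scikit-learn-style API (and adds working sets and acceleration); the pure-NumPy sketch below only illustrates the underlying principle of proximal coordinate descent with a swappable separable penalty, here the non-convex MCP, on invented data:

```python
import numpy as np

def mcp_prox(z, t, lam=0.1, gamma=3.0):
    """Prox of the (non-convex) MCP penalty with step t (requires gamma > t)."""
    if abs(z) <= t * lam:
        return 0.0
    if abs(z) <= gamma * lam:
        return np.sign(z) * (abs(z) - t * lam) / (1.0 - t / gamma)
    return z  # flat region of MCP: no shrinkage

def cd_sparse_glm(X, y, prox, n_iter=200):
    """Cyclic proximal CD for (1/2n)||y - Xw||^2 + separable penalty.

    Swapping `prox` (soft-thresholding for L1, mcp_prox for MCP, ...)
    changes the model without touching the solver loop.
    """
    n, p = X.shape
    lips = (X ** 2).sum(axis=0) / n  # per-coordinate Lipschitz constants
    w, r = np.zeros(p), y.copy()
    for _ in range(n_iter):
        for j in range(p):
            old = w[j]
            z = old + X[:, j] @ r / (n * lips[j])
            w[j] = prox(z, 1.0 / lips[j])
            if w[j] != old:
                r += X[:, j] * (old - w[j])  # keep residuals up to date
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
w_true = np.zeros(30)
w_true[:3] = [2.0, -3.0, 1.5]
y = X @ w_true + 0.01 * rng.normal(size=100)

w_hat = cd_sparse_glm(X, y, mcp_prox)
assert np.count_nonzero(w_hat) == 3        # support recovered...
assert np.abs(w_hat - w_true).max() < 0.1  # ...with (near) no shrinkage bias
```

Unlike the L1 penalty, MCP leaves large coefficients unshrunk, which is why the recovered coefficients match the ground truth almost exactly here.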

Anderson acceleration of coordinate descent

no code implementations • 19 Nov 2020 • Quentin Bertrand, Mathurin Massias

Acceleration of first-order methods is mainly obtained via inertial techniques à la Nesterov or via nonlinear extrapolation.

regression
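A sketch of the nonlinear-extrapolation route on a quadratic with invented data: Anderson extrapolation is applied every K passes of exact coordinate descent (the paper's algorithm and guarantees are more refined than this restarted version):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 20
A = rng.normal(size=(40, p))
H = A.T @ A / 40 + 0.05 * np.eye(p)  # quadratic objective 0.5 x'Hx - b'x
b = rng.normal(size=p)
x_star = np.linalg.solve(H, b)

def cd_pass(x):
    """One full pass of exact coordinate descent on the quadratic."""
    x = x.copy()
    for j in range(p):
        x[j] -= (H[j] @ x - b[j]) / H[j, j]
    return x

def anderson_cd(x, K=5, n_cycles=10):
    """K coordinate-descent passes per cycle, then one Anderson extrapolation."""
    for _ in range(n_cycles):
        xs = [x]
        for _ in range(K):
            xs.append(cd_pass(xs[-1]))
        X = np.stack(xs, axis=1)
        U = np.diff(X, axis=1)  # successive differences of the iterates
        # coefficients c minimizing ||U c|| subject to sum(c) = 1
        z = np.linalg.solve(U.T @ U + 1e-12 * np.eye(K), np.ones(K))
        x = X[:, 1:] @ (z / z.sum())  # extrapolated iterate
    return x

x_plain = np.zeros(p)
for _ in range(10 * 5):  # same total number of CD passes as the accelerated run
    x_plain = cd_pass(x_plain)

x_acc = anderson_cd(np.zeros(p))
assert np.linalg.norm(x_acc - x_star) < np.linalg.norm(x_plain - x_star)
```

The extrapolation step costs only a small K-by-K linear solve, which is the appeal of Anderson acceleration over inertial schemes here.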

Model identification and local linear convergence of coordinate descent

no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.
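This finite-iteration identification is easy to observe numerically: in the toy Lasso example below (invented data), the support of the coordinate-descent iterates freezes long before the coefficients themselves converge:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [3.0, -2.0, 1.5]
y = X @ w_true + 0.1 * rng.normal(size=n)
lam = 0.2

lips = (X ** 2).sum(axis=0) / n
w, r = np.zeros(p), y.copy()
supports = []
for _ in range(300):
    for j in range(p):  # one full pass of cyclic coordinate descent
        old = w[j]
        z = old + X[:, j] @ r / (n * lips[j])
        w[j] = np.sign(z) * max(abs(z) - lam / lips[j], 0.0)  # soft-threshold
        if w[j] != old:
            r += X[:, j] * (old - w[j])
    supports.append(tuple(np.flatnonzero(w)))  # the identified "model"

# first pass after which the support never changes again
first_stable = next(t for t, s in enumerate(supports)
                    if all(u == s for u in supports[t:]))
assert first_stable < 100       # identification after finitely many passes
assert len(supports[-1]) < p    # the identified model is sparse
```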

Support recovery and sup-norm convergence rates for sparse pivotal estimation

no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.

regression
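The square-root (concomitant) Lasso is a canonical pivotal estimator; below is a sketch of the classic alternating scheme (estimate the noise level, then solve a Lasso scaled by it), with invented data and constants. Note that the same `lam` is used at both noise levels, which is the pivotal property:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain cyclic coordinate descent for (1/2n)||y - Xw||^2 + lam ||w||_1."""
    n, p = X.shape
    lips = (X ** 2).sum(axis=0) / n
    w, r = np.zeros(p), y.copy()
    for _ in range(n_iter):
        for j in range(p):
            old = w[j]
            z = old + X[:, j] @ r / (n * lips[j])
            w[j] = np.sign(z) * max(abs(z) - lam / lips[j], 0.0)
            if w[j] != old:
                r += X[:, j] * (old - w[j])
    return w, r

def sqrt_lasso(X, y, lam, n_alt=20):
    """Concomitant scheme: alternate the noise estimate and a rescaled Lasso."""
    n = len(y)
    sigma = np.linalg.norm(y) / np.sqrt(n)
    for _ in range(n_alt):
        w, r = lasso_cd(X, y, lam * sigma)       # penalty scaled by sigma
        sigma = np.linalg.norm(r) / np.sqrt(n)   # refreshed noise estimate
    return w, sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
w_true = np.zeros(30)
w_true[:3] = [4.0, -4.0, 4.0]
lam = 0.3  # one regularization level, reused unchanged at both noise scales
for noise in (0.1, 1.0):
    y = X @ w_true + noise * rng.normal(size=200)
    w_hat, sigma_hat = sqrt_lasso(X, y, lam)
    assert set(np.flatnonzero(w_hat)) >= {0, 1, 2}  # true support found
    assert 0.5 * noise < sigma_hat < 2.0 * noise    # noise level recovered
```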

Anytime Exact Belief Propagation

no code implementations • 27 Jul 2017 • Gabriel Azevedo Ferreira, Quentin Bertrand, Charles Maussion, Rodrigo de Salvo Braz

In this paper we present work in progress on an Anytime Exact Belief Propagation algorithm that is very similar to Belief Propagation but is exact even for graphical models with cycles, while exhibiting soft short-circuiting, amortized constant time complexity in the model size, and which can provide probabilistic proof trees.

Probabilistic Programming
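On a tree, standard Belief Propagation is already exact; the baseline the paper extends to graphs with cycles can be sketched on a three-variable chain with invented pairwise potentials:

```python
import numpy as np

# chain MRF x0 - x1 - x2 over binary variables, with pairwise potentials
psi01 = np.array([[2.0, 1.0],
                  [1.0, 3.0]])  # psi01[x0, x1]
psi12 = np.array([[1.0, 2.0],
                  [4.0, 1.0]])  # psi12[x1, x2]

# sum-product messages toward x1 (BP is exact on a tree)
m0 = psi01.T @ np.ones(2)  # message x0 -> x1: sum over x0
m2 = psi12 @ np.ones(2)    # message x2 -> x1: sum over x2
belief1 = m0 * m2
belief1 /= belief1.sum()   # normalized marginal belief of x1

# brute-force marginal of x1 for comparison
joint = psi01[:, :, None] * psi12[None, :, :]  # shape (x0, x1, x2)
brute = joint.sum(axis=(0, 2))
brute /= brute.sum()

assert np.allclose(belief1, brute)  # BP matches the exact marginal
```

On loopy graphs, plain BP loses this exactness; the paper's contribution is an anytime algorithm that recovers exact answers there while keeping BP-like locality.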
