no code implementations • 12 Jun 2024 • Damien Ferbach, Quentin Bertrand, Avishek Joey Bose, Gauthier Gidel
We prove that, if the data is curated according to a reward model, then the expected reward of the iterative retraining procedure is maximized.
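A minimal sketch of the kind of curation step this result concerns, assuming a best-of-k selection rule under a toy reward; the reward function, the batch size k, and the Gaussian "generator" are all illustrative choices, not the paper's setup:

```python
import numpy as np

# Toy illustration only: the reward, the best-of-k rule, and the Gaussian
# "generator" are assumptions for this sketch, not the paper's setup.
rng = np.random.default_rng(0)

def curate_best_of_k(samples, reward, k=4):
    """Keep the highest-reward sample out of each batch of k candidates."""
    kept = []
    for i in range(0, len(samples) - k + 1, k):
        batch = samples[i:i + k]
        kept.append(max(batch, key=reward))
    return kept

samples = rng.normal(0.0, 1.0, size=1000).tolist()   # draws from the current model
curated = curate_best_of_k(samples, reward=lambda x: -abs(x - 1.0))
print(np.mean(samples), np.mean(curated))            # curated mean shifts toward 1
```

Retraining on `curated` rather than `samples` biases the next generation toward high-reward regions; the theorem quantifies the effect of iterating this loop.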
1 code implementation • 30 Sep 2023 • Quentin Bertrand, Avishek Joey Bose, Alexandre Duplessis, Marco Jiralerspong, Gauthier Gidel
In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets -- from classical training on real data to self-consuming generative models trained on purely synthetic data.
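A toy instance of such a mixed-data retraining loop, with a one-dimensional Gaussian standing in for the generative model and `lam` the fraction of real data per round; all concrete numbers are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

# Toy iterative retraining on mixed data: a 1-D Gaussian stands in for the
# generative model; `lam` (the real-data fraction) is the knob being studied.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=5000)            # fixed pool of real data

mu, sigma, lam, n = 0.0, 1.0, 0.5, 5000
for t in range(20):
    synthetic = rng.normal(mu, sigma, size=n)     # sample the current generator
    n_real = int(lam * n)
    mixed = np.concatenate([real[:n_real], synthetic[:n - n_real]])
    mu, sigma = mixed.mean(), mixed.std()         # "retrain" = refit by max likelihood
print(round(mu, 3), round(sigma, 3))              # stays near (0, 1) when lam > 0
```

Setting `lam = 0` gives the purely self-consuming regime, where the fitted parameters tend to drift over rounds; mixing in real data is what the stability analysis is about.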
1 code implementation • 13 Jun 2023 • Juan Ramirez, Rohan Sukumaran, Quentin Bertrand, Gauthier Gidel
Stochastic min-max optimization has gained interest in the machine learning community with advances in GANs and adversarial training.
1 code implementation • 26 Nov 2022 • Sébastien Lachapelle, Tristan Deleu, Divyat Mahajan, Ioannis Mitliagkas, Yoshua Bengio, Simon Lacoste-Julien, Quentin Bertrand
Although disentangled representations are often said to be beneficial for downstream tasks, current empirical and theoretical understanding is limited.
no code implementations • 27 Sep 2022 • Damien Scieur, Quentin Bertrand, Gauthier Gidel, Fabian Pedregosa
Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few.
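For a smooth, strongly convex inner problem, the Jacobian follows from the implicit function theorem; here is a self-contained check on a quadratic (this is the textbook construction, not the paper's method):

```python
import numpy as np

# Generic implicit differentiation on a strongly convex quadratic:
# x*(theta) = argmin_x 0.5 x'Ax - theta'x  =>  x* = A^{-1} theta,
# and dx*/dtheta = -(d2f/dx2)^{-1} (d2f/dx dtheta) = -(A)^{-1}(-I) = A^{-1}.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
theta = np.array([1.0, -1.0])

x_star = np.linalg.solve(A, theta)
jac_implicit = np.linalg.inv(A)

eps = 1e-6                                  # finite-difference sanity check
jac_fd = np.column_stack([
    (np.linalg.solve(A, theta + eps * e) - x_star) / eps for e in np.eye(2)
])
print(np.allclose(jac_implicit, jac_fd, atol=1e-4))   # True
```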
1 code implementation • 21 Jun 2022 • Quentin Bertrand, Wojciech Marian Czarnecki, Gauthier Gidel
In this study, we investigate the challenge of identifying the strength of the transitive component in games.
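As a toy illustration of what "transitive component" means, one can fit an antisymmetric payoff matrix by rating differences in the least-squares sense and call the residual the cyclic part; this decomposition is standard, and the paper's notion of strength and its estimators are not reproduced here:

```python
import numpy as np

# Illustrative decomposition only, not the paper's estimator.
def transitive_part(P):
    """Least-squares fit of an antisymmetric payoff matrix P by rating
    differences T[i, j] = r[i] - r[j]; the residual P - T is the cyclic part."""
    r = P.mean(axis=1)
    T = r[:, None] - r[None, :]
    return T, P - T

# Rock-paper-scissors is purely cyclic: its transitive component vanishes.
rps = np.array([[0.0, 1.0, -1.0],
                [-1.0, 0.0, 1.0],
                [1.0, -1.0, 0.0]])
T, C = transitive_part(rps)
print(np.abs(T).max(), np.abs(C).max())   # 0.0 and 1.0
```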
2 code implementations • 16 Apr 2022 • Quentin Bertrand, Quentin Klopfenstein, Pierre-Antoine Bannier, Gauthier Gidel, Mathurin Massias
We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties.
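For context, here is a vanilla coordinate-descent Lasso solver with soft-thresholding, showing the basic update for this problem class; it is only a plain baseline, not the faster algorithm the paper proposes:

```python
import numpy as np

# Plain cyclic coordinate descent with soft-thresholding for the Lasso,
#   min_w 0.5/n * ||y - Xw||^2 + alpha * ||w||_1.
# A vanilla baseline for the problem class, not the paper's solver.
def lasso_cd(X, y, alpha, n_iter=200):
    n, d = X.shape
    w = np.zeros(d)
    residual = y.copy()                      # maintains y - X @ w incrementally
    lipschitz = (X ** 2).sum(axis=0) / n     # per-coordinate curvatures
    for _ in range(n_iter):
        for j in range(d):
            old = w[j]
            grad_j = -X[:, j] @ residual / n
            z = old - grad_j / lipschitz[j]
            w[j] = np.sign(z) * max(abs(z) - alpha / lipschitz[j], 0.0)
            if w[j] != old:
                residual -= X[:, j] * (w[j] - old)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, :3] @ np.array([1.0, -2.0, 3.0]) + 0.1 * rng.normal(size=100)
print(np.round(lasso_cd(X, y, alpha=0.1), 2))   # near-zero outside the first 3 coords
```

Swapping the soft-threshold for another proximal operator is how non-convex separable penalties (e.g. MCP) fit into the same template.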
1 code implementation • 4 May 2021 • Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques.
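A minimal first-order alternative to zero-order search, assuming a ridge inner problem whose closed form makes the hypergradient easy to derive; the paper targets non-smooth inner problems such as the Lasso, and ridge is used here only to keep the sketch self-contained:

```python
import numpy as np

# Bilevel hyperparameter selection with a ridge inner problem (an assumption
# of this sketch): gradient descent on the validation loss w.r.t. lam.
rng = np.random.default_rng(0)
X, Xv = rng.normal(size=(50, 5)), rng.normal(size=(50, 5))
w_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = X @ w_true + 0.3 * rng.normal(size=50)
yv = Xv @ w_true + 0.3 * rng.normal(size=50)
n, d = X.shape

def inner(lam):
    """Inner problem: ridge regression, (X'X + n lam I) w = X'y."""
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)

def hypergradient(lam):
    """d(validation loss)/d(lam) via implicit differentiation:
    dw/dlam = -(X'X + n lam I)^{-1} (n w)."""
    w = inner(lam)
    dw = np.linalg.solve(X.T @ X + n * lam * np.eye(d), -n * w)
    return (Xv @ w - yv) @ (Xv @ dw)

lam = 1.0
for _ in range(100):                     # first-order outer loop, no grid needed
    lam = max(lam - 0.01 * hypergradient(lam), 1e-6)
print(round(lam, 4))
```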
no code implementations • 19 Nov 2020 • Quentin Bertrand, Mathurin Massias
Acceleration of first-order methods is mainly obtained via inertial techniques à la Nesterov or via nonlinear extrapolation.
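A small demonstration of the nonlinear-extrapolation route on a linear fixed-point iteration; the toy problem, the tiny regularizer, and the offline "extrapolate once at the end" protocol are simplifying assumptions of this sketch:

```python
import numpy as np

# Offline Anderson-type extrapolation of fixed-point iterates x <- Tx + b.
def extrapolate(iterates):
    """Combine iterates with weights c summing to 1 that minimize the norm of
    the combined residuals r_i = x_{i+1} - x_i."""
    X = np.array(iterates).T                       # columns are iterates
    R = X[:, 1:] - X[:, :-1]
    z = np.linalg.solve(R.T @ R + 1e-10 * np.eye(R.shape[1]), np.ones(R.shape[1]))
    c = z / z.sum()
    return X[:, 1:] @ c

rng = np.random.default_rng(0)
T = 0.99 * np.diag([1.0, 0.5, 0.1])                # slow linear contraction
b = rng.normal(size=3)
x_star = np.linalg.solve(np.eye(3) - T, b)

x, history = np.zeros(3), []
for _ in range(6):
    history.append(x)
    x = T @ x + b
print(np.linalg.norm(x - x_star))                  # plain iterates: barely moved
print(np.linalg.norm(extrapolate(history) - x_star))  # extrapolated: near exact
```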
no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter
For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.
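A quick way to see finite-time identification is to run the Forward-Backward (proximal gradient, i.e. ISTA) iteration on a small Lasso and watch the support stop changing; problem sizes and regularization strength below are arbitrary:

```python
import numpy as np

# Forward-Backward (ISTA) on a toy Lasso; the printed support freezes after
# finitely many iterations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, :2] @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=100)
n, alpha = 100, 0.2
step = n / np.linalg.norm(X, ord=2) ** 2          # 1 / Lipschitz constant

w = np.zeros(10)
for k in range(200):
    z = w + step * X.T @ (y - X @ w) / n          # forward (gradient) step
    w = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)  # backward (prox) step
    if k % 40 == 0:
        print(k, np.flatnonzero(w))               # support identification
```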
1 code implementation • ICML 2020 • Quentin Bertrand, Quentin Klopfenstein, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
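A standard example of the distinction, stated at the level of scalings under the usual sub-Gaussian analyses (constants omitted): the Lasso's theoretically tuned parameter grows with the noise level σ, while the square-root Lasso's does not, which is what makes the latter pivotal.

```latex
\text{Lasso: } \min_w \tfrac{1}{2n}\|y - Xw\|_2^2 + \lambda\|w\|_1,
\qquad \lambda^\ast \asymp \sigma\sqrt{\tfrac{2\log p}{n}} ,
\\[4pt]
\text{square-root Lasso: } \min_w \tfrac{1}{\sqrt{n}}\|y - Xw\|_2 + \lambda\|w\|_1,
\qquad \lambda^\ast \asymp \sqrt{\tfrac{2\log p}{n}} .
```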
1 code implementation • NeurIPS 2019 • Quentin Bertrand, Mathurin Massias, Alexandre Gramfort, Joseph Salmon
Sparsity promoting norms are frequently used in high dimensional regression.
no code implementations • 27 Jul 2017 • Gabriel Azevedo Ferreira, Quentin Bertrand, Charles Maussion, Rodrigo de Salvo Braz
In this paper, we present work in progress on an Anytime Exact Belief Propagation algorithm that closely resembles Belief Propagation but is exact even for graphical models with cycles, exhibits soft short-circuiting, has amortized constant time complexity in the model size, and can provide probabilistic proof trees.
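For readers unfamiliar with the base algorithm, here is plain sum-product Belief Propagation on a three-node chain, where it is exact; the anytime exact variant for cyclic models is the paper's contribution and is not reproduced here, and all potentials below are arbitrary:

```python
import numpy as np

# Vanilla sum-product Belief Propagation on a 3-node chain A - B - C,
# where BP is exact. Potentials are arbitrary toy values.
psi_A = np.array([0.6, 0.4])                   # unary potentials
psi_B = np.array([0.5, 0.5])
psi_C = np.array([0.2, 0.8])
phi_AB = np.array([[0.9, 0.1], [0.1, 0.9]])    # pairwise potentials
phi_BC = np.array([[0.8, 0.2], [0.2, 0.8]])

m_A_to_B = phi_AB.T @ psi_A                    # message A -> B (sum over A)
m_C_to_B = phi_BC @ psi_C                      # message C -> B (sum over C)

belief_B = psi_B * m_A_to_B * m_C_to_B         # combine incoming messages
belief_B /= belief_B.sum()
print(belief_B)                                # exact marginal of B on a tree
```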