1 code implementation • ICLR 2022 • Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq, Volkan Cevher
This paper introduces a new extragradient-type algorithm for a class of nonconvex-nonconcave minimax problems.
1 code implementation • 17 Feb 2023 • Thomas Pethick, Olivier Fercoq, Puya Latafat, Panagiotis Patrinos, Volkan Cevher
This paper introduces a family of stochastic extragradient-type algorithms for a class of nonconvex-nonconcave problems characterized by the weak Minty variational inequality (MVI).
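The two entries above build on the extragradient template: evaluate the operator at a look-ahead point, then update the original point with the look-ahead value. Below is a minimal NumPy sketch of the classic (deterministic) step only; the papers' algorithms are extragradient-type variants tailored to weak-MVI problems, so their exact updates and step-size rules differ.

```python
import numpy as np

def extragradient(F, z0, step=0.1, iters=1000):
    """Classic extragradient iteration for an operator F, e.g. the field
    (grad_x f, -grad_y f) of a minimax problem min_x max_y f(x, y).
    A generic sketch, not the weak-MVI variants from the papers above."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)       # extrapolation (look-ahead) step
        z = z - step * F(z_half)       # update using the look-ahead operator value
    return z

# Bilinear saddle point f(x, y) = x * y, on which plain gradient descent-ascent cycles:
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, z0=[1.0, 1.0]))   # converges toward the saddle point (0, 0)
```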
no code implementations • 6 Sep 2020 • Eugene Ndiaye, Olivier Fercoq, Joseph Salmon
Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.
no code implementations • ICML 2020 • Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structures of the objective function.
no code implementations • ICML 2020 • Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq
For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant.
no code implementations • 31 Jan 2019 • Louis Faury, Clément Calauzènes, Olivier Fercoq, Syrine Krichen
Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions.
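For readers unfamiliar with the family, a bare-bones isotropic-Gaussian ES works as follows: perturb the current search point, query the black-box objective on the perturbations, and move along a score-weighted average of the perturbations. This is a generic sketch, not the specific ES variant analysed in the paper above; all hyperparameter values are illustrative.

```python
import numpy as np

def simple_es(f, x0, sigma=0.1, lr=0.05, pop=50, iters=300, seed=0):
    """Bare-bones isotropic-Gaussian evolution strategy (minimization).
    Uses only zeroth-order (function-value) information."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        eps = rng.standard_normal((pop, x.size))                    # search directions
        scores = np.array([f(x + sigma * e) for e in eps])          # black-box evaluations
        grad_est = (scores - scores.mean()) @ eps / (pop * sigma)   # score-function gradient estimate
        x -= lr * grad_est                                          # step on the smoothed objective
    return x

# Shifted quadratic: the search point moves close to the minimizer (1, -2).
print(simple_es(lambda z: np.sum((z - np.array([1.0, -2.0])) ** 2), x0=[0.0, 0.0]))
```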
1 code implementation • NeurIPS 2019 • Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher
A broad class of convex optimization problems can be formulated as a semidefinite program (SDP), i.e., the minimization of a convex function over the positive-semidefinite cone subject to affine constraints.
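To make that template concrete, here is a toy instance written with cvxpy, a generic conic-modelling tool that is not the storage-efficient method proposed in the paper: a linear objective is minimized over the positive-semidefinite cone with the diagonal fixed by affine constraints. The cost matrix is illustration data only.

```python
import numpy as np
import cvxpy as cp

# Toy instance of the SDP template: minimize a linear objective over the PSD
# cone subject to affine constraints (here, fixing the diagonal of X).
# C is arbitrary illustration data, not taken from the paper.
C = np.array([[ 0., 1., -1.],
              [ 1., 0.,  1.],
              [-1., 1.,  0.]])

X = cp.Variable((3, 3), PSD=True)              # decision variable in the PSD cone
constraints = [cp.diag(X) == 1]                # affine constraints
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()                                # off-the-shelf conic solver, not the paper's algorithm
print(problem.value)
print(np.round(X.value, 3))
```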
1 code implementation • 12 Oct 2018 • Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi
Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.
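The grid-search baseline that the snippet refers to can be written in a few lines with scikit-learn; the paper itself studies principled alternatives to this brute-force sweep. The data and the grid below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Standard strategy: tune the Lasso regularization parameter by grid search
# on held-out error. Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
grid = np.logspace(-3, 0, 20)                       # candidate regularization values
errs = []
for lam in grid:
    model = Lasso(alpha=lam, max_iter=10_000).fit(X_tr, y_tr)
    errs.append(np.mean((model.predict(X_val) - y_val) ** 2))
print("best lambda on the grid:", grid[int(np.argmin(errs))])
```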
no code implementations • 22 May 2018 • Louis Faury, Flavian Vasile, Clément Calauzènes, Olivier Fercoq
The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones.
no code implementations • NeurIPS 2017 • Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher
We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
1 code implementation • 27 May 2017 • Mathurin Massias, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
Results on multimodal neuroimaging problems with M/EEG data are also reported.
no code implementations • NeurIPS 2016 • Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc
To give a more complete picture than the average relationship provided by standard regression, we introduce a novel framework for simultaneously estimating and predicting several conditional quantiles.
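The building block of conditional quantile estimation is the pinball loss, whose expected minimizer is the $\tau$-th quantile; the paper's contribution is a joint, vector-valued framework on top of it, which the minimal sketch below does not attempt to reproduce.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Pinball (quantile) loss: minimizing its expectation over predictions
    yields the tau-th conditional quantile."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

# Sanity check without covariates: the minimizer of the average pinball loss
# over a grid of candidate values matches the empirical tau-quantile.
rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
grid = np.linspace(-3, 3, 601)
for tau in (0.1, 0.5, 0.9):
    avg_loss = [pinball_loss(y - q, tau).mean() for q in grid]
    print(tau, grid[int(np.argmin(avg_loss))], np.quantile(y, tau))
```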
1 code implementation • NeurIPS 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.
1 code implementation • 17 Nov 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
In high-dimensional regression settings, sparsity-enforcing penalties have proved useful for regularizing the data-fitting term.
2 code implementations • 8 Jun 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon
In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.
1 code implementation • 19 Feb 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
We adapt recent safe screening rules, which discard irrelevant features/groups early in the solver, to the case of the Sparse-Group Lasso.
no code implementations • NeurIPS 2015 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules.
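A sketch of a GAP-Safe-style test for the plain Lasso is given below; it follows the standard formulation as I understand it from these papers (a dual-feasible point built from the rescaled residual, a sphere radius driven by the duality gap), but the papers' exact scalings and their multi-task/logistic extensions may differ.

```python
import numpy as np

def gap_safe_screen(X, y, w, lam):
    """GAP-Safe-style screening test for the Lasso
        min_w 0.5 * ||y - X w||_2^2 + lam * ||w||_1.
    Returns a boolean mask of features that can be safely set to zero at the
    optimum. A sketch of my reading of the GAP Safe rule, not the papers' exact code."""
    residual = y - X @ w
    # Dual-feasible point obtained by rescaling the residual.
    theta = residual / max(lam, np.max(np.abs(X.T @ residual)))
    primal = 0.5 * residual @ residual + lam * np.sum(np.abs(w))
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)                       # duality gap at (w, theta)
    radius = np.sqrt(2.0 * gap) / lam                   # gap-based safe-sphere radius
    # Feature j can be discarded if |x_j^T theta| + radius * ||x_j|| < 1.
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0
```

In practice the test is re-run as the iterative solver progresses: the duality gap shrinks, the sphere tightens, and more features are safely discarded.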
no code implementations • 13 May 2015 • Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
In this paper, we propose new versions of the so-called $\textit{safe rules}$ for the Lasso.
no code implementations • 8 Feb 2015 • Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq
We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA).
no code implementations • 21 May 2014 • Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takáč
We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions.
no code implementations • 20 Dec 2013 • Olivier Fercoq, Peter Richtárik
In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate $2\bar{\omega}\bar{L} R^2/(k+1)^2 $, where $k$ is the iteration counter, $\bar{\omega}$ is an average degree of separability of the loss function, $\bar{L}$ is the average of Lipschitz constants associated with the coordinates and individual functions in the sum, and $R$ is the distance of the initial point from the minimizer.
no code implementations • 7 Oct 2013 • Olivier Fercoq
We design a randomized parallel version of AdaBoost based on previous studies of parallel coordinate descent.
no code implementations • 23 Sep 2013 • Olivier Fercoq, Peter Richtárik
We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of nonsmooth and separable convex functions.
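The last few entries all build on the randomized coordinate descent template: pick a coordinate at random and take a step scaled by that coordinate's Lipschitz constant. A minimal serial sketch is below; the papers add parallel updates, acceleration, distribution across machines, and nonsmooth/separable terms, none of which is reproduced here.

```python
import numpy as np

def randomized_cd(grad_j, L, x0, iters=10_000, seed=0):
    """Minimal serial randomized coordinate descent: at each step pick a
    coordinate j uniformly at random and move by -grad_j / L[j].

    grad_j(x, j) : partial derivative of the objective along coordinate j.
    L            : per-coordinate Lipschitz constants of those derivatives."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        j = rng.integers(x.size)
        x[j] -= grad_j(x, j) / L[j]
    return x

# Least-squares example: f(x) = 0.5 * ||A x - b||^2, with L[j] = ||A[:, j]||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20)); b = rng.standard_normal(100)
L = np.sum(A * A, axis=0)
x = randomized_cd(lambda x, j: A[:, j] @ (A @ x - b), L, np.zeros(20))
print(np.linalg.norm(A.T @ (A @ x - b)))   # gradient norm, should be small
```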