Search Results for author: Olivier Fercoq

Found 23 papers, 9 papers with code

Solving stochastic weak Minty variational inequalities without increasing batch size

1 code implementation 17 Feb 2023 Thomas Pethick, Olivier Fercoq, Puya Latafat, Panagiotis Patrinos, Volkan Cevher

This paper introduces a family of stochastic extragradient-type algorithms for a class of nonconvex-nonconcave problems characterized by the weak Minty variational inequality (MVI).
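
As context for the extragradient-type family mentioned above, here is a minimal sketch of the classical deterministic extragradient update for a generic operator F. The paper's stochastic algorithms build on this template; the operator, step size and toy problem below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extragradient(F, z0, gamma=0.5, iters=200):
    """Classical deterministic extragradient iteration for an operator F:
    extrapolate with one operator evaluation, then update with a second one."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_bar = z - gamma * F(z)      # extrapolation ("look-ahead") step
        z = z - gamma * F(z_bar)      # update evaluated at the extrapolated point
    return z

# Toy bilinear saddle point min_x max_y x*y, whose operator is F(x, y) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, z0=[1.0, 1.0]))   # converges toward the solution (0, 0)
```

The extra extrapolation evaluation is what lets the method handle rotational dynamics such as the bilinear example above, where plain gradient descent-ascent diverges.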

Screening Rules and its Complexity for Active Set Identification

no code implementations 6 Sep 2020 Eugene Ndiaye, Olivier Fercoq, Joseph Salmon

Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.

BIG-bench Machine Learning, Dimensionality Reduction

Random extrapolation for primal-dual coordinate descent

no code implementations ICML 2020 Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher

We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function.

Improved Optimistic Algorithms for Logistic Bandits

no code implementations ICML 2020 Louis Faury, Marc Abeille, Clément Calauzènes, Olivier Fercoq

For logistic bandits, the frequentist regret guarantees of existing algorithms are $\tilde{\mathcal{O}}(\kappa \sqrt{T})$, where $\kappa$ is a problem-dependent constant.
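
For readers unfamiliar with the constant, the block below gives the definition of $\kappa$ as it commonly appears in the logistic-bandit literature; this is a hedged reconstruction, and the exact constant used in the paper may differ slightly.

```latex
% With the logistic link \mu(z) = (1 + e^{-z})^{-1}, arm set \mathcal{X} and
% parameter set \Theta (notation assumed here for illustration):
\kappa \;=\; \sup_{x \in \mathcal{X},\; \theta \in \Theta}
  \frac{1}{\dot{\mu}\!\left(x^{\top}\theta\right)},
\qquad
\dot{\mu}(z) \;=\; \mu(z)\bigl(1 - \mu(z)\bigr).
```

Since $\dot{\mu}(z)$ decays exponentially as $|z|$ grows, $\kappa$ can be exponentially large in the norm of the parameter, which is why removing the multiplicative $\kappa$ from the leading $\sqrt{T}$ term matters.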

Improving Evolutionary Strategies with Generative Neural Networks

no code implementations 31 Jan 2019 Louis Faury, Clement Calauzenes, Olivier Fercoq, Syrine Krichen

Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions.
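
As a point of reference for the sentence above, here is a minimal sketch of a plain ES loop whose search distribution is an isotropic Gaussian; the function names and hyperparameters are illustrative, and the paper's contribution, per its title, is to improve on such search distributions with generative neural networks.

```python
import numpy as np

def simple_gaussian_es(f, mu0, sigma=0.3, pop=50, elite=10, iters=100, seed=0):
    """Minimal (mu, lambda)-style ES: sample a population from a Gaussian search
    distribution, evaluate the black-box objective, and move the mean toward
    the best-scoring samples."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu0, dtype=float)
    for _ in range(iters):
        candidates = mu + sigma * rng.standard_normal((pop, mu.size))
        scores = np.array([f(c) for c in candidates])    # zeroth-order: values only
        elites = candidates[np.argsort(scores)[:elite]]  # keep lowest objective values
        mu = elites.mean(axis=0)                         # recombination: new mean
    return mu

# Toy usage: minimize a shifted sphere function in 5 dimensions.
print(simple_gaussian_es(lambda x: np.sum((x - 2.0) ** 2), mu0=np.zeros(5)))
```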

Stochastic Frank-Wolfe for Composite Convex Minimization

1 code implementation NeurIPS 2019 Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher

A broad class of convex optimization problems can be formulated as a semidefinite program (SDP), i.e., the minimization of a convex function over the positive-semidefinite cone subject to some affine constraints.

Stochastic Optimization
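
A hedged rendering of the SDP template described in the entry above; the notation ($f$, $\mathcal{A}$, $b$) is chosen here purely for illustration.

```latex
% Minimize a convex function f over the positive-semidefinite cone, subject to
% affine constraints given by a linear map \mathcal{A} and right-hand side b:
\min_{X \in \mathbb{S}^{n}} \; f(X)
\quad \text{subject to} \quad
\mathcal{A}(X) = b, \qquad X \succeq 0.
```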

Safe Grid Search with Optimal Complexity

1 code implementation 12 Oct 2018 Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi

Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.
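
For contrast with the paper's approach, the sketch below shows the plain warm-started grid search that the sentence refers to as the standard strategy; it is a baseline with no approximation guarantees, and the data, grid, and solver settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data standing in for a real problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(100)

alphas = np.logspace(0, -3, num=30)          # decreasing regularization grid
model = Lasso(alpha=alphas[0], warm_start=True, max_iter=10_000)
for a in alphas:
    model.set_params(alpha=a)
    model.fit(X, y)                          # previous coefficients reused as warm start
    # ...evaluate validation error for alpha = a here...
```

Decreasing the grid and reusing the previous solution is the usual trick for speeding up the regularization path; the paper's "safe" grid, by contrast, is chosen so that the resulting path comes with accuracy guarantees.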

Neural Generative Models for Global Optimization with Gradients

no code implementations 22 May 2018 Louis Faury, Flavian Vasile, Clément Calauzènes, Olivier Fercoq

The aim of global optimization is to find the global optimum of arbitrary classes of functions, possibly highly multimodal ones.

Bayesian Optimization, Gaussian Processes

Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization

no code implementations NeurIPS 2017 Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher

We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
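
To fix ideas, here is a minimal sketch of plain randomized coordinate descent on a smooth objective; the paper's method extends this template to nonsmooth convex problems via smoothing and primal-dual updates, so the code below is only the baseline idea, with illustrative names and a toy least-squares problem.

```python
import numpy as np

def randomized_cd(grad_i, L, x0, iters=1000, seed=0):
    """Plain randomized coordinate descent: pick a coordinate i uniformly at
    random and take a gradient step of size 1/L[i] along that coordinate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(x.size)
        x[i] -= grad_i(x, i) / L[i]
    return x

# Toy least-squares example: f(x) = 0.5 * ||A x - b||^2 with solution x = 1.
A = np.random.default_rng(1).standard_normal((50, 10))
b = A @ np.ones(10)
grad_i = lambda x, i: A[:, i] @ (A @ x - b)   # i-th partial derivative
L = (A ** 2).sum(axis=0)                      # coordinate-wise Lipschitz constants
print(np.round(randomized_cd(grad_i, L, np.zeros(10)), 3))
```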

Joint quantile regression in vector-valued RKHSs

no code implementations NeurIPS 2016 Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc

To give a more complete picture than the average relationship provided by standard regression, a novel framework is introduced for estimating and predicting several conditional quantiles simultaneously.

Multi-Task Learning, regression
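
For reference, a conditional $\tau$-quantile is typically estimated with the pinball (quantile) loss below; this is the standard definition rather than the paper's vector-valued RKHS formulation, and the notation is assumed for illustration.

```latex
% Pinball loss for the \tau-quantile of y given x, with residual r = y - \hat{f}(x):
\ell_\tau(r) \;=\; \max\bigl(\tau r,\; (\tau - 1)\, r\bigr)
\;=\;
\begin{cases}
\tau\, r & \text{if } r \ge 0,\\
(\tau - 1)\, r & \text{if } r < 0,
\end{cases}
\qquad \tau \in (0, 1).
```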

GAP Safe Screening Rules for Sparse-Group Lasso

1 code implementation NeurIPS 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.

Gap Safe screening rules for sparsity enforcing penalties

1 code implementation 17 Nov 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term.

regression

Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

2 code implementations 8 Jun 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon

In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.

regression, Uncertainty Quantification +1
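
A hedged reconstruction of the concomitant (joint coefficient-and-noise-level) Lasso objective that the smoothed estimator builds on; constants and the exact smoothing constraint $\sigma \ge \sigma_0$ may differ from the paper's formulation.

```latex
% Jointly estimate the coefficients \beta and the noise level \sigma
% (written from memory; see the paper for the exact constants):
\min_{\beta \in \mathbb{R}^{p},\; \sigma \ge \sigma_0}
\;\frac{\lVert y - X\beta \rVert_2^2}{2 n \sigma}
\;+\; \frac{\sigma}{2}
\;+\; \lambda \lVert \beta \rVert_1 .
```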

GAP Safe Screening Rules for Sparse-Group-Lasso

1 code implementation 19 Feb 2016 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

We adapt recent safe screening rules, which discard irrelevant features/groups early in the solver, to the case of the Sparse-Group Lasso.

GAP Safe screening rules for sparse multi-task and multi-class models

no code implementations NeurIPS 2015 Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules.

regression

Mind the duality gap: safer rules for the Lasso

no code implementations 13 May 2015 Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In this paper, we propose new versions of the so-called $\textit{safe rules}$ for the Lasso.
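
The duality-gap-based idea can be summarized by the screening test below, reconstructed here from the standard presentation of Gap Safe rules and hedged accordingly; see the paper for the exact constants and the choice of dual feasible point $\theta$.

```latex
% Lasso with parameter \lambda, primal point \beta, dual feasible \theta and
% duality gap G_\lambda(\beta, \theta): feature j can be safely discarded if
\lvert x_j^{\top} \theta \rvert
\;+\;
\lVert x_j \rVert_2 \,\sqrt{\frac{2\, G_\lambda(\beta, \theta)}{\lambda^{2}}}
\;<\; 1
\quad\Longrightarrow\quad
\hat{\beta}_j = 0 .
```

The radius of the safe region shrinks with the duality gap, so the rule discards more features as the solver converges, which is what makes it usable dynamically inside any iterative method.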

SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization

no code implementations 8 Feb 2015 Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq

We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA).

Fast Distributed Coordinate Descent for Non-Strongly Convex Losses

no code implementations 21 May 2014 Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takáč

We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions.

Accelerated, Parallel and Proximal Coordinate Descent

no code implementations 20 Dec 2013 Olivier Fercoq, Peter Richtárik

In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate $2\bar{\omega}\bar{L} R^2/(k+1)^2 $, where $k$ is the iteration counter, $\bar{\omega}$ is an average degree of separability of the loss function, $\bar{L}$ is the average of Lipschitz constants associated with the coordinates and individual functions in the sum, and $R$ is the distance of the initial point from the minimizer.
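
Purely as an illustration of the $O(1/k^2)$ decay of this bound, the snippet below plugs hypothetical values of $\bar{\omega}$, $\bar{L}$ and $R$ into the stated rate; none of these numbers come from the paper.

```python
# Evaluate 2 * omega_bar * L_bar * R**2 / (k + 1)**2 for made-up constants.
omega_bar, L_bar, R = 10.0, 1.0, 5.0          # hypothetical problem constants
for k in (10, 100, 1000):
    bound = 2 * omega_bar * L_bar * R ** 2 / (k + 1) ** 2
    print(f"k = {k:5d}   suboptimality bound <= {bound:.4g}")
```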

Parallel coordinate descent for the Adaboost problem

no code implementations 7 Oct 2013 Olivier Fercoq

We design a randomised parallel version of Adaboost based on previous studies on parallel coordinate descent.

Smooth minimization of nonsmooth functions with parallel coordinate descent methods

no code implementations 23 Sep 2013 Olivier Fercoq, Peter Richtárik

We study the performance of a family of randomized parallel coordinate descent methods for minimizing the sum of a nonsmooth convex function and a separable convex function.
