Search Results for author: Alexandre d'Aspremont

Found 27 papers, 8 papers with code

Linear Bandits on Uniformly Convex Sets

no code implementations · 10 Mar 2021 · Thomas Kerdreux, Christophe Roux, Alexandre d'Aspremont, Sebastian Pokutta

Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets $\mathcal{K}\subset\mathbb{R}^n$; two types of structural assumptions lead to better pseudo-regret bounds.

Local and Global Uniform Convexity Conditions

no code implementations · 9 Feb 2021 · Thomas Kerdreux, Alexandre d'Aspremont, Sebastian Pokutta

We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with scaling inequalities used in analysing the convergence of optimization methods.

Learning Theory, Online Learning

Acceleration Methods

1 code implementation · 23 Jan 2021 · Alexandre d'Aspremont, Damien Scieur, Adrien Taylor

This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization.

A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs

1 code implementation · 5 Nov 2020 · Manon Romain, Alexandre d'Aspremont

We develop a Bregman proximal gradient method for structure learning on linear structural causal models.

Averaging Atmospheric Gas Concentration Data using Wasserstein Barycenters

no code implementations · 6 Oct 2020 · Mathieu Barré, Clément Giron, Matthieu Mazzolini, Alexandre d'Aspremont

Hyperspectral satellite images report greenhouse gas concentrations worldwide on a daily basis.

A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention

1 code implementation · ICLR 2021 · Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal

We address the problem of learning on sets of features, motivated by the need of performing pooling operations in long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data.

FANOK: Knockoffs in Linear Time

1 code implementation · 15 Jun 2020 · Armin Askari, Quentin Rebjock, Alexandre d'Aspremont, Laurent El Ghaoui

We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.

Feature Selection

Global Convergence of Frank Wolfe on One Hidden Layer Networks

no code implementations · 6 Feb 2020 · Alexandre d'Aspremont, Mert Pilanci

The classical Frank-Wolfe algorithm converges at rate $O(1/T)$, where $T$ is both the number of neurons and the number of calls to the oracle.

Complexity Guarantees for Polyak Steps with Momentum

1 code implementation · 3 Feb 2020 · Mathieu Barré, Adrien Taylor, Alexandre d'Aspremont

In smooth strongly convex optimization, knowledge of the strong convexity parameter is critical for obtaining simple methods with accelerated rates.
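As a minimal illustration of the trade-off mentioned above, here is plain gradient descent with the classical Polyak step size, which substitutes knowledge of the optimal value $f_\star$ for knowledge of the strong convexity parameter. The toy quadratic, and the momentum-free variant, are choices of this sketch, not the paper's method.

```python
import numpy as np

# Classical Polyak step size: gamma_k = (f(x_k) - f_*) / ||grad f(x_k)||^2.
# It requires the optimal value f_* rather than the strong convexity parameter.
A = np.diag([1.0, 10.0])               # toy strongly convex quadratic (assumption)
f = lambda x: 0.5 * x @ A @ x          # minimized at x = 0 with f_* = 0
grad = lambda x: A @ x

x = np.array([5.0, 3.0])
for _ in range(100):
    g = grad(x)
    gamma = (f(x) - 0.0) / (g @ g)     # Polyak step, using the known f_* = 0
    x = x - gamma * g
```

On this example the iterates contract toward the minimizer at a linear rate without ever using the strong convexity constant explicitly.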

Screening Data Points in Empirical Risk Minimization via Ellipsoidal Regions and Safe Loss Functions

1 code implementation · 5 Dec 2019 · Grégoire Mialon, Alexandre d'Aspremont, Julien Mairal

We design simple screening tests to automatically discard data samples in empirical risk minimization without losing optimization guarantees.

Ranking and synchronization from pairwise measurements via SVD

no code implementations · 6 Jun 2019 · Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi

Given a measurement graph $G= (V, E)$ and an unknown signal $r \in \mathbb{R}^n$, we investigate algorithms for recovering $r$ from pairwise measurements of the form $r_i - r_j$; $\{i, j\} \in E$.
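What makes spectral recovery possible is that the fully observed measurement matrix $C_{ij} = r_i - r_j = r\mathbf{1}^\top - \mathbf{1}r^\top$ has rank at most two. A noiseless, fully observed sketch of that observation follows; it is illustrative only, not the paper's algorithm for partially observed graphs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
r = rng.normal(size=n)
r_c = r - r.mean()                       # r is only identifiable up to a global shift

# Fully observed measurements: C = r 1^T - 1 r^T, a skew-symmetric matrix of rank <= 2.
C = np.subtract.outer(r, r)

# The column space of C is span{r_c, ones}; removing the ones direction from
# the top-2 left singular subspace leaves exactly the direction of r_c.
U = np.linalg.svd(C)[0][:, :2]           # top-2 left singular vectors
ones = np.ones(n) / np.sqrt(n)
v = U @ (U.T @ rng.normal(size=n))       # random vector projected onto span{r_c, ones}
v -= (v @ ones) * ones                   # remove the ones component
corr = abs(v @ r_c) / (np.linalg.norm(v) * np.linalg.norm(r_c))
```

Here `corr` is essentially 1, confirming that the remaining singular direction aligns with the centered signal.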

Regularity as Regularization: Smooth and Strongly Convex Brenier Potentials in Optimal Transport

no code implementations · 26 May 2019 · François-Pierre Paty, Alexandre d'Aspremont, Marco Cuturi

One of the greatest achievements of the OT literature in recent years lies in regularity theory: Caffarelli showed that the OT map between two well-behaved measures is Lipschitz or, equivalently, when considering 2-Wasserstein distances, that Brenier convex potentials (whose gradient yields an optimal map) are smooth.

Domain Adaptation

Naive Feature Selection: Sparsity in Naive Bayes

no code implementations · 23 May 2019 · Armin Askari, Alexandre d'Aspremont, Laurent El Ghaoui

We propose a sparse version of naive Bayes, which can be used for feature selection.

Feature Selection

Overcomplete Independent Component Analysis via SDP

no code implementations · 24 Jan 2019 · Anastasia Podosinnikova, Amelia Perry, Alexander Wein, Francis Bach, Alexandre d'Aspremont, David Sontag

Moreover, we conjecture that the proposed program recovers a mixing component at the rate $k < p^2/4$ and prove that a mixing component can be recovered with high probability when $k < (2 - \epsilon) p \log p$, provided the original components are sampled uniformly at random on the hypersphere.

Reconstructing Latent Orderings by Spectral Clustering

1 code implementation · 18 Jul 2018 · Antoine Recanati, Thomas Kerdreux, Alexandre d'Aspremont

We tackle the task of retrieving linear and circular orderings in a unifying framework, and show how a latent ordering on the data translates into a filamentary structure on the Laplacian embedding.

Data Structures and Algorithms, Genomics

Nonlinear Acceleration of CNNs

1 code implementation · 1 Jun 2018 · Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach

The Regularized Nonlinear Acceleration (RNA) algorithm is an acceleration method capable of improving the rate of convergence of many optimization schemes such as gradient descent, SAGA or SVRG.

Online Regularized Nonlinear Acceleration

no code implementations · 24 May 2018 · Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach

Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method.

General Classification

Frank-Wolfe with Subsampling Oracle

no code implementations · ICML 2018 · Thomas Kerdreux, Fabian Pedregosa, Alexandre d'Aspremont

The first algorithm we propose is a randomized variant of the original FW algorithm and achieves an $\mathcal{O}(1/t)$ sublinear convergence rate, as in the deterministic counterpart.
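For context, here is a minimal sketch of the deterministic Frank-Wolfe scheme the randomized variant builds on, run on the probability simplex with a toy quadratic objective. The objective, constraint set, and step-size rule are assumptions of this sketch; the paper's variant would evaluate the linear minimization oracle over a random subsample of vertices.

```python
import numpy as np

# Deterministic Frank-Wolfe on the probability simplex for f(x) = 0.5||x - b||^2.
b = np.array([0.1, 0.2, 0.7])            # target, chosen inside the simplex (assumption)
x = np.array([1.0, 0.0, 0.0])            # start at a vertex
for t in range(200):
    grad = x - b
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0             # LMO: best vertex of the simplex
    x += (2.0 / (t + 2.0)) * (s - x)     # standard step size, giving the O(1/t) rate

gap = 0.5 * np.sum((x - b) ** 2)         # primal gap; the optimal value is 0 here
```

Each update is a convex combination of feasible points, so the iterate stays in the simplex without any projection step.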

Nonlinear Acceleration of Stochastic Algorithms

no code implementations · NeurIPS 2017 · Damien Scieur, Francis Bach, Alexandre d'Aspremont

Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm.

Integration Methods and Optimization Algorithms

no code implementations · NeurIPS 2017 · Damien Scieur, Vincent Roulet, Francis Bach, Alexandre d'Aspremont

We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation.
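The simplest instance of this correspondence is that explicit Euler applied to the gradient flow $\dot{x} = -\nabla f(x)$ recovers plain gradient descent; multi-step schemes, which combine several past iterates, are the paper's viewpoint on acceleration. A toy sketch of the one-step case (the quadratic objective and step size are assumptions of this sketch):

```python
import numpy as np

# Explicit Euler on the gradient flow x'(t) = -grad f(x(t)) with step h
# is exactly gradient descent: x_{k+1} = x_k - h * grad f(x_k).
A = np.diag([1.0, 4.0])                 # toy quadratic f(x) = 0.5 x^T A x
grad = lambda x: A @ x
h = 0.1                                 # Euler step size (assumption)

x = np.array([1.0, 1.0])
for _ in range(100):
    x = x - h * grad(x)                 # one Euler step of the gradient flow
```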

Sharpness, Restart and Acceleration

no code implementations · NeurIPS 2017 · Vincent Roulet, Alexandre d'Aspremont

The Łojasiewicz inequality shows that Hölderian error bounds on the minimum of convex optimization problems hold almost generically.

Learning with Clustering Structure

no code implementations · 16 Jun 2015 · Vincent Roulet, Fajwel Fogel, Alexandre d'Aspremont, Francis Bach

We study supervised learning problems using clustering constraints to impose structure on either features or samples, seeking to help both prediction and interpretation.

Text Classification

SerialRank: Spectral Ranking using Seriation

no code implementations · NeurIPS 2014 · Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic

Intuitively, the algorithm assigns similar rankings to items that compare similarly with all others.

Spectral Ranking using Seriation

no code implementations · 20 Jun 2014 · Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic

We first show that this spectral seriation algorithm recovers the true ranking when all pairwise comparisons are observed and consistent with a total order.
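A minimal sketch of the noiseless claim above: build a similarity matrix from consistent pairwise comparisons (items that compare similarly to all others get similar rows) and sort items by the Fiedler vector of its Laplacian. The matrix conventions and the toy ranking below are illustrative assumptions of this sketch.

```python
import numpy as np

def serial_rank(C):
    """Spectral seriation sketch: C[i, j] = +1 if i is ranked above j, -1 otherwise.
    Returns items sorted by the Fiedler vector of the similarity Laplacian."""
    n = len(C)
    S = 0.5 * (n + C @ C.T)             # similarity of comparison patterns
    L = np.diag(S.sum(axis=1)) - S      # graph Laplacian of the similarity
    w, V = np.linalg.eigh(L)
    return np.argsort(V[:, 1])          # order given by the Fiedler vector

# Noiseless comparisons consistent with a hidden total order.
rank = np.array([3, 0, 4, 1, 2])        # rank[i] = position of item i (assumed)
C = np.sign(np.subtract.outer(rank, rank)).astype(float)
order = serial_rank(C)
```

With all comparisons observed and consistent, the recovered `order` matches the hidden ranking up to a global reversal (the usual sign ambiguity of an eigenvector).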

Convex Relaxations for Permutation Problems

no code implementations · NeurIPS 2013 · Fajwel Fogel, Rodolphe Jenatton, Francis Bach, Alexandre d'Aspremont

Seriation seeks to reconstruct a linear order between variables using unsorted similarity information.

White Functionals for Anomaly Detection in Dynamical Systems

no code implementations · NeurIPS 2009 · Marco Cuturi, Jean-Philippe Vert, Alexandre d'Aspremont

The candidate functionals are estimated in a subset of a reproducing kernel Hilbert space associated with the set where the process takes values.

Anomaly Detection
