1 code implementation • 12 Jul 2024 • Fajwel Fogel, Yohann Perron, Nikola Besic, Laurent Saint-André, Agnès Pellissier-Tanon, Martin Schwartz, Thomas Boudras, Ibrahim Fayad, Alexandre d'Aspremont, Loic Landrieu, Philippe Ciais
Estimating canopy height and its changes at meter resolution from satellite imagery is a significant challenge in computer vision with critical environmental applications.
no code implementations • 30 Jun 2023 • Clément Lezane, Cristóbal Guzmán, Alexandre d'Aspremont
For the $L$-smooth case with a feasible set bounded by $D$, we derive a convergence rate of $O\big( L^2 D^2/T^{5/2} + (D_0^2+\sigma^2)/\sqrt{T} \big)$, where $D_0$ is the initial distance to an optimal solution and $\sigma^2$ is the variance of the stochastic oracle.
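For intuition, the first term decays an order of $T^2$ faster than the second, so for large $T$ the bound is dominated by the statistical term:

$$
O\!\left( \frac{L^2 D^2}{T^{5/2}} + \frac{D_0^2+\sigma^2}{\sqrt{T}} \right) = O\!\left( \frac{D_0^2+\sigma^2}{\sqrt{T}} \right) \quad \text{for } T \text{ large},
$$

recovering the familiar $1/\sqrt{T}$ dependence of stochastic first-order methods.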
no code implementations • 22 Apr 2023 • Ibrahim Fayad, Philippe Ciais, Martin Schwartz, Jean-Pierre Wigneron, Nicolas Baghdadi, Aurélien de Truchis, Alexandre d'Aspremont, Frederic Frappart, Sassan Saatchi, Agnes Pellissier-Tanon, Hassan Bazzi
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
no code implementations • 17 Nov 2022 • Alexis Groshenry, Clement Giron, Thomas Lauvaux, Alexandre d'Aspremont, Thibaud Ehret
The new generation of hyperspectral imagers, such as PRISMA, has significantly improved our ability to detect methane (CH4) plumes from space at high spatial resolution (30 m).
no code implementations • 3 Nov 2022 • Alexandre d'Aspremont, Cristóbal Guzmán, Clément Lezane
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting.
no code implementations • 10 Mar 2021 • Thomas Kerdreux, Christophe Roux, Alexandre d'Aspremont, Sebastian Pokutta
Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets $\mathcal{K}\subset\mathbb{R}^n$, and two types of structural assumptions lead to better pseudo-regret bounds.
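For context, here is a minimal sketch of a standard linear bandit baseline (LinUCB-style optimism) on a finite action set; this is a textbook algorithm for illustration, not the structured variants the paper analyzes, and the dimension, horizon, noise level, and confidence width `alpha` are made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions, T, alpha = 5, 50, 2000, 1.0      # dimension, arms, horizon, confidence width
actions = rng.normal(size=(n_actions, n))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)   # arms on the unit sphere
theta_star = rng.normal(size=n)
theta_star /= np.linalg.norm(theta_star)        # unknown reward direction

A, b = np.eye(n), np.zeros(n)                   # regularized design matrix and response
regret, best = 0.0, (actions @ theta_star).max()
for t in range(T):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # optimistic score: estimated reward plus an exploration bonus
    bonus = np.sqrt(np.einsum('ij,jk,ik->i', actions, A_inv, actions))
    x = actions[np.argmax(actions @ theta_hat + alpha * bonus)]
    r = x @ theta_star + 0.1 * rng.normal()     # noisy linear reward
    A += np.outer(x, x)
    b += r * x
    regret += best - x @ theta_star
print(f"cumulative pseudo-regret over {T} rounds: {regret:.1f}")
```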
no code implementations • 9 Feb 2021 • Thomas Kerdreux, Alexandre d'Aspremont, Sebastian Pokutta
We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with the scaling inequalities used in analysing the convergence of optimization methods.
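One classical scaling inequality of this kind, valid for $\ell_p$ norms with $1 < p \le 2$ (a form I believe is due to Ball, Carlen, and Lieb), states that the $\ell_p$ norm is 2-uniformly convex with constant $p-1$:

$$
\left\| \frac{x+y}{2} \right\|_p^2 + (p-1) \left\| \frac{x-y}{2} \right\|_p^2 \;\le\; \frac{\|x\|_p^2 + \|y\|_p^2}{2}.
$$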
1 code implementation • 23 Jan 2021 • Alexandre d'Aspremont, Damien Scieur, Adrien Taylor
This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization.
1 code implementation • 5 Nov 2020 • Manon Romain, Alexandre d'Aspremont
We develop a Bregman proximal gradient method for structure learning on linear structural causal models.
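A minimal Euclidean sketch of the proximal gradient template on a sparse linear SCM objective, fitting $X \approx XW$ with an $\ell_1$ penalty; the paper's method uses a Bregman (non-Euclidean) proximal step and handles acyclicity of the recovered graph, both of which this toy omits, and the data, step size, and $\lambda$ are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, d, lam, iters = 200, 10, 0.1, 300
X = rng.normal(size=(n_samples, d))

# step size from the Lipschitz constant of the least-squares gradient
L = np.linalg.norm(X.T @ X / n_samples, 2)
step = 1.0 / L

W = np.zeros((d, d))                            # weighted adjacency matrix to learn
for _ in range(iters):
    grad = X.T @ (X @ W - X) / n_samples        # gradient of 0.5/n * ||X - XW||_F^2
    W -= step * grad
    W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)  # soft-threshold: prox of l1
    np.fill_diagonal(W, 0.0)                    # forbid self-loops
print("nonzeros in learned W:", np.count_nonzero(W))
```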
no code implementations • 6 Oct 2020 • Mathieu Barré, Clément Giron, Matthieu Mazzolini, Alexandre d'Aspremont
Hyperspectral satellite images report greenhouse gas concentrations worldwide on a daily basis.
1 code implementation • ICLR 2021 • Grégoire Mialon, Dexiong Chen, Alexandre d'Aspremont, Julien Mairal
We address the problem of learning on sets of features, motivated by the need to perform pooling operations over long biological sequences of varying sizes, with long-range dependencies and possibly few labeled data.
1 code implementation • 15 Jun 2020 • Armin Askari, Quentin Rebjock, Alexandre d'Aspremont, Laurent El Ghaoui
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
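A minimal sketch of the standard Gaussian model-X knockoff construction (Candès et al.) with the equicorrelated choice of $s$; the paper's contribution is making this scale, which this dense toy does not attempt, and the correlation level and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 500, 8, 0.4
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # toy correlation matrix
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# equicorrelated knockoffs: s_j = min(1, 2 * lambda_min(Sigma))
D = np.diag(np.full(p, min(1.0, 2.0 * np.linalg.eigvalsh(Sigma)[0])))
Sigma_inv_D = np.linalg.solve(Sigma, D)
mu = X - X @ Sigma_inv_D                  # conditional mean of the knockoffs
V = 2.0 * D - D @ Sigma_inv_D             # conditional covariance
C = np.linalg.cholesky(V + 1e-10 * np.eye(p))
X_knock = mu + rng.normal(size=(n, p)) @ C.T

# sanity check: knockoffs mimic the correlation structure of X
print(np.abs(np.corrcoef(X.T) - np.corrcoef(X_knock.T)).max())
```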
no code implementations • 6 Feb 2020 • Alexandre d'Aspremont, Mert Pilanci
The classical Frank-Wolfe algorithm then converges with rate $O(1/T)$, where $T$ is both the number of neurons and the number of calls to the oracle.
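A minimal sketch of the classical Frank-Wolfe iteration on an $\ell_1$ ball, where the linear minimization oracle returns a signed vertex; the least-squares objective, radius, and horizon are made up, and the paper's connection to neuron counts is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
b = rng.normal(size=50)
radius = 1.0                            # constraint set: l1 ball of this radius

x = np.zeros(100)
for t in range(500):
    g = A.T @ (A @ x - b)               # gradient of 0.5 * ||Ax - b||^2
    # linear minimization oracle over the l1 ball: best signed vertex
    i = np.argmax(np.abs(g))
    s = np.zeros(100)
    s[i] = -radius * np.sign(g[i])
    gamma = 2.0 / (t + 2)               # classic schedule giving the O(1/T) rate
    x = (1 - gamma) * x + gamma * s
print("objective:", 0.5 * np.sum((A @ x - b) ** 2))
```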
1 code implementation • 3 Feb 2020 • Mathieu Barré, Adrien Taylor, Alexandre d'Aspremont
In smooth strongly convex optimization, knowledge of the strong convexity parameter is critical for obtaining simple methods with accelerated rates.
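For contrast, the Polyak step size sidesteps knowledge of the strong convexity parameter when the optimal value $f^\star$ is known; a minimal sketch on a made-up quadratic where $f^\star = 0$ by construction (the paper's momentum variants are not shown).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.normal(size=(d, d))
H = A.T @ A / d + 0.1 * np.eye(d)        # strongly convex quadratic, minimum 0 at x = 0
f = lambda x: 0.5 * x @ H @ x
f_star = 0.0

x = rng.normal(size=d)
for _ in range(200):
    g = H @ x                            # gradient
    gamma = (f(x) - f_star) / (g @ g)    # Polyak step: no strong convexity constant needed
    x -= gamma * g
print("final suboptimality:", f(x) - f_star)
```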
1 code implementation • 5 Dec 2019 • Grégoire Mialon, Alexandre d'Aspremont, Julien Mairal
We design simple screening tests to automatically discard data samples in empirical risk minimization without losing optimization guarantees.
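A minimal illustration of the ball-localization principle behind safe sample screening, here for an $\ell_2$-regularized squared hinge loss: strong convexity confines $w^\star$ to a computable ball around any iterate, and a sample whose margin exceeds 1 over that whole ball contributes nothing at the optimum and can be discarded. This is a generic gap-style test on made-up data, not the paper's exact rules.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 500, 20, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.5 * rng.normal(size=n))

# P(w) = mean squared hinge loss + (mu/2) ||w||^2, which is mu-strongly convex
def grad_P(w):
    slack = np.maximum(1 - y * (X @ w), 0)
    return -(X.T @ (y * slack)) / n + mu * w

w = np.zeros(d)
for _ in range(1000):                    # plain gradient descent to a rough solution
    w -= 0.1 * grad_P(w)

r = np.linalg.norm(grad_P(w)) / mu       # strong convexity: ||w - w*|| <= ||grad|| / mu
margins = y * (X @ w)
# margin above 1 on the whole ball => zero loss and gradient at the optimum
safe = margins - r * np.linalg.norm(X, axis=1) > 1
print(f"safely discarded {safe.sum()} of {n} samples")
```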
no code implementations • 6 Jun 2019 • Alexandre d'Aspremont, Mihai Cucuringu, Hemant Tyagi
Given a measurement graph $G= (V, E)$ and an unknown signal $r \in \mathbb{R}^n$, we investigate algorithms for recovering $r$ from pairwise measurements of the form $r_i - r_j$; $\{i, j\} \in E$.
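A minimal least-squares baseline for this recovery problem, assuming the measurement graph is connected: stack the pairwise differences into a sparse incidence system and solve it, fixing the inherent global-shift ambiguity afterwards (graph size and noise level are made up; the paper studies more refined spectral and optimization-based recovery).

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n, m = 100, 400
r = rng.normal(size=n)                         # unknown signal
edges = rng.choice(n, size=(m, 2))
edges = edges[edges[:, 0] != edges[:, 1]]      # drop self-loops
y = r[edges[:, 0]] - r[edges[:, 1]] + 0.05 * rng.normal(size=len(edges))

# sparse incidence matrix B so that B @ r gives the pairwise differences
rows = np.repeat(np.arange(len(edges)), 2)
cols = edges.ravel()
vals = np.tile([1.0, -1.0], len(edges))
B = coo_matrix((vals, (rows, cols)), shape=(len(edges), n))

r_hat = lsqr(B, y)[0]
r_hat += r.mean() - r_hat.mean()               # fix the global shift
print("relative error:", np.linalg.norm(r_hat - r) / np.linalg.norm(r))
```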
no code implementations • 26 May 2019 • François-Pierre Paty, Alexandre d'Aspremont, Marco Cuturi
On the other hand, one of the greatest achievements of the OT literature in recent years lies in regularity theory: Caffarelli showed that the OT map between two well-behaved measures is Lipschitz or, equivalently, when considering 2-Wasserstein distances, that Brenier's convex potentials (whose gradients yield optimal maps) are smooth.
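For reference, Brenier's theorem for the squared-Euclidean cost states that the optimal map pushing $\mu$ onto $\nu$ is the gradient of a convex potential,

$$
T = \nabla \varphi, \qquad \varphi \ \text{convex}, \qquad T_\sharp \mu = \nu,
$$

so Lipschitz regularity of $T$ is exactly smoothness (Lipschitz gradient) of $\varphi$.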
no code implementations • 23 May 2019 • Armin Askari, Alexandre d'Aspremont, Laurent El Ghaoui
We propose a sparse version of naive Bayes, which can be used for feature selection.
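A simple heuristic in the same spirit, though not the paper's convex formulation with guarantees: fit smoothed multinomial class-conditional probabilities and keep the $k$ features with the largest frequency-weighted log-probability gap; the data and the value of $k$ here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 1000, 200, 20
X = rng.poisson(1.0, size=(n, p))              # toy count data (e.g., word counts)
y = rng.integers(0, 2, size=n)
X[y == 1, :10] += rng.poisson(2.0, size=((y == 1).sum(), 10))  # plant informative features

def log_probs(Xc):
    counts = Xc.sum(axis=0) + 1.0              # Laplace smoothing
    return np.log(counts / counts.sum())

gap = np.abs(log_probs(X[y == 1]) - log_probs(X[y == 0]))
score = gap * X.sum(axis=0)                    # weight so rare features don't dominate
selected = np.argsort(score)[-k:]
print("selected features:", np.sort(selected))
```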
no code implementations • 24 Jan 2019 • Anastasia Podosinnikova, Amelia Perry, Alexander Wein, Francis Bach, Alexandre d'Aspremont, David Sontag
Moreover, we conjecture that the proposed program recovers a mixing component at the rate $k < p^2/4$, and prove that a mixing component can be recovered with high probability when $k < (2 - \epsilon)\, p \log p$, provided the original components are sampled uniformly at random on the hypersphere.
1 code implementation • 18 Jul 2018 • Antoine Recanati, Thomas Kerdreux, Alexandre d'Aspremont
We tackle the task of retrieving linear and circular orderings in a unifying framework, and show how a latent ordering on the data translates into a filamentary structure on the Laplacian embedding.
Data Structures and Algorithms • Genomics
1 code implementation • 1 Jun 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
The Regularized Nonlinear Acceleration (RNA) algorithm is an acceleration method capable of improving the rate of convergence of many optimization schemes, such as gradient descent, SAGA, or SVRG.
no code implementations • 24 May 2018 • Damien Scieur, Edouard Oyallon, Alexandre d'Aspremont, Francis Bach
Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method.
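The core RNA post-processing step is a small regularized least-squares solve over the residuals of the stored iterates; a minimal sketch, applied to plain gradient descent on a made-up quadratic (the regularization $\lambda$ is a made-up default).

```python
import numpy as np

def rna(X, lam=1e-8):
    """Extrapolate from iterates X = [x_0, ..., x_k] stored as columns."""
    R = X[:, 1:] - X[:, :-1]                  # residuals between consecutive iterates
    RtR = R.T @ R
    RtR /= np.linalg.norm(RtR, 2)             # normalize so lambda is scale-free
    z = np.linalg.solve(RtR + lam * np.eye(RtR.shape[0]), np.ones(RtR.shape[0]))
    c = z / z.sum()                           # combination weights summing to one
    return X[:, :-1] @ c

rng = np.random.default_rng(0)
d = 50
A = rng.normal(size=(d, d))
H = A.T @ A / d + 0.05 * np.eye(d)            # toy quadratic with minimum at 0
x, iters = rng.normal(size=d), []
for _ in range(11):
    iters.append(x.copy())
    x -= (1.0 / np.linalg.norm(H, 2)) * (H @ x)   # gradient step
X = np.stack(iters, axis=1)
print("plain gradient norm:", np.linalg.norm(H @ X[:, -1]))
print("RNA gradient norm  :", np.linalg.norm(H @ rna(X)))
```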
no code implementations • ICML 2018 • Thomas Kerdreux, Fabian Pedregosa, Alexandre d'Aspremont
The first algorithm that we propose is a randomized variant of the original FW algorithm and achieves a $\mathcal{O}(1/t)$ sublinear convergence rate as in the deterministic counterpart.
no code implementations • NeurIPS 2017 • Vincent Roulet, Alexandre d'Aspremont
The Łojasiewicz inequality shows that Hölderian error bounds on the minimum of convex optimization problems hold almost generically.
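Concretely, a Hölderian error bound states that, on a sublevel set of $f$,

$$
f(x) - f^\star \;\ge\; c \, d(x, X^\star)^{r},
$$

for some $c > 0$ and exponent $r \ge 1$, where $X^\star$ is the set of minimizers; larger $r$ corresponds to a flatter minimum and slower convergence.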
no code implementations • NeurIPS 2017 • Damien Scieur, Francis Bach, Alexandre d'Aspremont
Here, we study extrapolation methods in a stochastic setting, where the iterates are produced by either a simple or an accelerated stochastic gradient algorithm.
no code implementations • NeurIPS 2017 • Damien Scieur, Vincent Roulet, Francis Bach, Alexandre d'Aspremont
We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the gradient flow equation.
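Concretely, the gradient flow and its explicit Euler discretization read

$$
\dot{x}(t) = -\nabla f(x(t)), \qquad x_{k+1} = x_k - h \, \nabla f(x_k),
$$

so gradient descent is the simplest one-step integration scheme; a two-step scheme combining past iterates, e.g. $x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - h\,\nabla f(x_k)$, recovers momentum-type methods.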
no code implementations • 16 Jun 2015 • Vincent Roulet, Fajwel Fogel, Alexandre d'Aspremont, Francis Bach
We study supervised learning problems using clustering constraints to impose structure on either features or samples, seeking to help both prediction and interpretation.
no code implementations • NeurIPS 2014 • Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic
Intuitively, the algorithm assigns similar rankings to items that compare similarly with all others.
no code implementations • 20 Jun 2014 • Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic
We first show that this spectral seriation algorithm recovers the true ranking when all pairwise comparisons are observed and consistent with a total order.
no code implementations • NeurIPS 2013 • Fajwel Fogel, Rodolphe Jenatton, Francis Bach, Alexandre d'Aspremont
Seriation seeks to reconstruct a linear order between variables using unsorted similarity information.
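A minimal sketch of the classical spectral relaxation for seriation (Atkins et al.): sort items by the Fiedler vector of the similarity Laplacian; the noiseless banded similarity here is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
pos = rng.permutation(n).astype(float)      # hidden linear order of the items
# similarity decays with distance in the hidden order (a Robinson-type matrix)
S = np.maximum(0.0, 5.0 - np.abs(pos[:, None] - pos[None, :]))

L = np.diag(S.sum(axis=1)) - S              # graph Laplacian of the similarity
_, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                     # eigenvector of the second-smallest eigenvalue
recovered = np.argsort(fiedler)

print(pos[recovered])                       # 0..n-1 (or reversed) if recovery succeeds
```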
no code implementations • NeurIPS 2009 • Marco Cuturi, Jean-Philippe Vert, Alexandre d'Aspremont
The candidate functionals are estimated in a subset of a reproducing kernel Hilbert space associated with the set where the process takes values.
no code implementations • NeurIPS 2007 • Ronny Luss, Alexandre d'Aspremont
In this paper, we propose a method for support vector machine classification using indefinite kernels.
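A common workaround in this literature, and a natural baseline, is spectrum clipping: project the indefinite Gram matrix onto the positive semidefinite cone and train a standard SVM on the repaired kernel (the paper instead treats the indefinite kernel as a noisy observation of a true kernel and optimizes over nearby PSD kernels jointly with the classifier). The sigmoid kernel parameters and toy data below are made up.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # toy nonlinear labels

K = np.tanh(0.5 * X @ X.T - 1.0)          # sigmoid kernel: indefinite in general

# spectrum clipping: drop negative eigenvalues to get the nearest PSD matrix
w, V = np.linalg.eigh(K)
K_psd = (V * np.maximum(w, 0.0)) @ V.T

clf = SVC(kernel="precomputed", C=1.0).fit(K_psd, y)
print("train accuracy:", clf.score(K_psd, y))
```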