no code implementations • 14 Sep 2023 • Pierre Gaillard, Sébastien Gerchinovitz, Étienne de Montbrun
We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors.
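The snippet does not specify the GreedyBox algorithm itself. As a generic illustration of the underlying idea of greedy adaptive quadrature (repeatedly refine the subinterval with the largest crude error score), here is a minimal sketch; the scoring rule and budget below are made up for illustration and are not the paper's method:

```python
import heapq

def greedy_adaptive_trapezoid(f, a, b, n_evals=64):
    """Estimate the integral of f on [a, b] by greedily refining the
    subinterval with the largest crude error score.
    Illustrative sketch only -- NOT the GreedyBox algorithm."""
    fa, fb = f(a), f(b)
    # score: width * |value gap|, a rough proxy for local error
    heap = [(-(b - a) * abs(fb - fa), a, b, fa, fb)]
    used = 2
    while used < n_evals:
        _, l, r, fl, fr = heapq.heappop(heap)
        m = 0.5 * (l + r)
        fm = f(m)
        used += 1
        heapq.heappush(heap, (-(m - l) * abs(fm - fl), l, m, fl, fm))
        heapq.heappush(heap, (-(r - m) * abs(fr - fm), m, r, fm, fr))
    # trapezoid sum over the final partition
    return sum(0.5 * (r - l) * (fl + fr) for _, l, r, fl, fr in heap)

# Example: the integral of x^2 on [0, 1] is 1/3
est = greedy_adaptive_trapezoid(lambda x: x * x, 0.0, 1.0)
```

The heap keeps the partition sorted by (negated) score, so each new evaluation is spent where the crude local-error proxy is largest.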
no code implementations • 2 Aug 2023 • Étienne de Montbrun, Sébastien Gerchinovitz
We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.
no code implementations • 9 Jun 2022 • El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres
Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$.
no code implementations • 28 Jul 2021 • El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz
We characterize, among all critical points, which are global minimizers, strict saddle points, and non-strict saddle points.
1 code implementation • NeurIPS 2021 • David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, Edouard Pauwels
In theory, the choice of ReLU(0) in [0, 1] for a neural network has a negligible influence both on backpropagation and training.
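The statement concerns which value an autodiff framework assigns to ReLU'(0), since ReLU is not differentiable at 0. A minimal NumPy sketch of the choice being studied (the parameter name `s` is ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x, s=0.0):
    """Subgradient of ReLU; s in [0, 1] is the (arbitrary) value
    assigned at x == 0. Autodiff frameworks must pick some s;
    the paper studies the influence of this choice."""
    g = (x > 0).astype(float)
    g[x == 0.0] = s
    return g

x = np.array([-1.0, 0.0, 2.0])
y = relu(x)                 # [0., 0., 2.]
g0 = relu_grad(x, s=0.0)    # [0., 0., 1.]
g1 = relu_grad(x, s=1.0)    # [0., 1., 1.]
```

The two gradient vectors differ only on the set where the input is exactly 0, which is why the choice is negligible in exact arithmetic.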
2 code implementations • 18 Mar 2021 • Hervé Delseny, Christophe Gabreau, Adrien Gauffriau, Bernard Beaudouin, Ludovic Ponsolle, Lucian Alecu, Hugues Bonnin, Brice Beltran, Didier Duchel, Jean-Brice Ginestet, Alexandre Hervieu, Ghilaine Martinez, Sylvain Pasquet, Kevin Delmas, Claire Pagetti, Jean-Marc Gabriel, Camille Chapdelaine, Sylvaine Picard, Mathieu Damour, Cyril Cappi, Laurent Gardès, Florence De Grancey, Eric Jenn, Baptiste Lefevre, Gregory Flandin, Sébastien Gerchinovitz, Franck Mamalet, Alexandre Albore
Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech.
no code implementations • NeurIPS 2021 • François Bachoc, Tommaso R Cesari, Sébastien Gerchinovitz
We study the problem of zeroth-order (black-box) optimization of a Lipschitz function $f$ defined on a compact subset $\mathcal X$ of $\mathbb R^d$, with the additional constraint that algorithms must certify the accuracy of their recommendations.
no code implementations • 26 Oct 2020 • François Bachoc, Tommaso Cesari, Sébastien Gerchinovitz
We study the problem of approximating the level set of an unknown function by sequentially querying its values.
no code implementations • 5 Oct 2020 • Hédi Hadiji, Sébastien Gerchinovitz, Jean-Michel Loubes, Gilles Stoltz
We consider the bandit-based framework for diversity-preserving recommendations introduced by Celis et al. (2019), who approached it in the case of a polytope mainly by a reduction to the setting of linear bandits.
no code implementations • 6 Feb 2020 • Clément Bouttier, Tommaso Cesari, Mélanie Ducoffe, Sébastien Gerchinovitz
We consider the problem of maximizing a non-concave Lipschitz multivariate function over a compact domain by sequentially querying its (possibly perturbed) values.
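For the noiseless version of this setting, the classical Piyavskii-Shubert method maximizes a Lipschitz function by always evaluating where the Lipschitz upper envelope peaks. The sketch below illustrates that baseline only (it does not handle the perturbed values considered in the paper, and all names and parameters are ours):

```python
def piyavskii_maximize(f, a, b, lipschitz, n_evals=30):
    """Piyavskii-Shubert sketch for maximizing a Lipschitz f on [a, b]:
    repeatedly evaluate f where the Lipschitz upper envelope is highest.
    Noiseless illustration only -- not the paper's algorithm."""
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(n_evals - 2):
        pts.sort()
        best_x, best_ub = None, -float("inf")
        # on each gap, the envelope's peak is where the two Lipschitz
        # cones from the endpoints intersect
        for (xl, fl), (xr, fr) in zip(pts, pts[1:]):
            x = 0.5 * (xl + xr) + (fr - fl) / (2.0 * lipschitz)
            ub = 0.5 * (fl + fr) + 0.5 * lipschitz * (xr - xl)
            if ub > best_ub:
                best_x, best_ub = x, ub
        pts.append((best_x, f(best_x)))
    return max(pts, key=lambda p: p[1])

# Example: f(x) = -|x - 0.3| is 1-Lipschitz, maximized at x = 0.3
x_star, f_star = piyavskii_maximize(lambda x: -abs(x - 0.3), 0.0, 1.0, 1.0)
```

The gap between the envelope's maximum and the best value seen also yields a certified accuracy bound, which is the kind of guarantee these query-based methods aim for.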
no code implementations • 9 Jul 2018 • Grégoire Jauvion, Nicolas Grislain, Pascal Sielenou Dkengne, Aurélien Garivier, Sébastien Gerchinovitz
The SSP acts as an intermediary between advertisers wanting to buy ad spaces and a web publisher wanting to sell them; it must define a bidding strategy that delivers as many ads as possible to the advertisers while spending as little as possible.
no code implementations • 29 May 2018 • Pierre Gaillard, Sébastien Gerchinovitz, Malo Huard, Gilles Stoltz
In the case of sequentially revealed features, we also derive an asymptotic regret bound of $d B^2 \ln T$ for any individual sequence of features and bounded observations.
no code implementations • 27 Feb 2017 • Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile, Sébastien Gerchinovitz
We investigate contextual online learning with nonparametric (Lipschitz) comparison classes under different assumptions on losses and feedback information.
no code implementations • NeurIPS 2016 • Sébastien Gerchinovitz, Tor Lattimore
First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case.
no code implementations • 26 Feb 2015 • Pierre Gaillard, Sébastien Gerchinovitz
We consider the problem of online nonparametric regression with arbitrary deterministic sequences.
no code implementations • 20 May 2011 • Sébastien Gerchinovitz, Jia Yuan Yu
We first present regret bounds with optimal dependencies on $d$, $T$, and on the sizes $U$, $X$ and $Y$ of the $\ell^1$-ball, the input data and the observations.
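A standard textbook baseline for online linear regression with comparators in an $\ell^1$-ball of radius $U$ is the exponentiated-gradient (EG$\pm$) forecaster. The sketch below is generic and hedged: it is not the paper's algorithm, and the data, learning rate, and scaling are our own illustrative choices:

```python
import numpy as np

def eg_pm_online_regression(X, Y, U, eta=0.5):
    """Exponentiated-gradient (EG+-) sketch for online linear regression
    with comparators in the l1-ball of radius U. Generic textbook method,
    shown only to illustrate the setting -- not the paper's algorithm."""
    T, d = X.shape
    # weights over +/- copies of each coordinate
    w = np.ones(2 * d) / (2 * d)
    preds = np.empty(T)
    for t in range(T):
        u = U * (w[:d] - w[d:])             # current point in the l1-ball
        preds[t] = u @ X[t]
        g = 2.0 * (preds[t] - Y[t]) * X[t]  # gradient of the squared loss
        w *= np.exp(-eta * U * np.concatenate([g, -g]))
        w /= w.sum()                        # renormalize to the simplex
    return preds

# Toy run: constant feature, target 0.5; predictions should approach 0.5
X = np.ones((100, 1))
Y = np.full(100, 0.5)
preds = eg_pm_online_regression(X, Y, U=1.0)
```

Working on $\pm$ copies of the coordinates is the usual trick that turns the $\ell^1$-ball into a probability simplex, where multiplicative updates apply.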
no code implementations • 5 Jan 2011 • Sébastien Gerchinovitz
We consider the problem of online linear regression on arbitrary deterministic sequences when the ambient dimension $d$ can be much larger than the number of time rounds $T$. We introduce the notion of sparsity regret bound, which is a deterministic online counterpart of recent risk bounds derived in the stochastic setting under a sparsity scenario.