Search Results for author: Sébastien Gerchinovitz

Found 18 papers, 2 papers with code

Adaptive approximation of monotone functions

no code implementations • 14 Sep 2023 • Pierre Gaillard, Sébastien Gerchinovitz, Étienne de Montbrun

We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors.

Numerical Integration
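
As a rough illustration of the adaptive idea (not the paper's GreedyBox rule, whose splitting criterion is more refined), here is a minimal greedy sketch for a nondecreasing $f$: monotonicity brackets the integral on each piece between its left and right Riemann sums, and the piece with the largest bracketing gap is split first. All names and the budget parameter are illustrative.

```python
import heapq

def greedy_monotone_integral(f, a, b, budget=50):
    """Approximate the integral of a nondecreasing f on [a, b] by greedily
    splitting the subinterval with the largest bracketing gap.

    For nondecreasing f, on each piece [l, r] the integral lies between
    f(l)*(r-l) (lower Riemann sum) and f(r)*(r-l) (upper Riemann sum).
    """
    fa, fb = f(a), f(b)
    # max-heap (via negated keys) on the gap (upper - lower) of each piece
    heap = [(-(fb - fa) * (b - a), a, b, fa, fb)]
    n_evals = 2
    while n_evals < budget:
        _, l, r, fl, fr = heapq.heappop(heap)   # piece with the largest gap
        m = 0.5 * (l + r)
        fm = f(m)
        n_evals += 1
        heapq.heappush(heap, (-(fm - fl) * (m - l), l, m, fl, fm))
        heapq.heappush(heap, (-(fr - fm) * (r - m), m, r, fm, fr))
    lower = sum(fl * (r - l) for _, l, r, fl, _ in heap)
    upper = sum(fr * (r - l) for _, l, r, _, fr in heap)
    return 0.5 * (lower + upper), 0.5 * (upper - lower)  # estimate, error bound

estimate, err = greedy_monotone_integral(lambda x: x ** 3, 0.0, 1.0, budget=200)
print(estimate, "+/-", err)  # true value is 0.25
```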

Certified Multi-Fidelity Zeroth-Order Optimization

no code implementations • 2 Aug 2023 • Étienne de Montbrun, Sébastien Gerchinovitz

We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.

A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks

no code implementations • 9 Jun 2022 • El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres

Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$.

Open-Ended Question Answering

The loss landscape of deep linear neural networks: a second-order analysis

no code implementations • 28 Jul 2021 • El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz

We characterize, among all critical points, which are global minimizers, strict saddle points, and non-strict saddle points.
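
To make the classification concrete, here is a toy example (not from the paper): the scalar two-layer linear network with loss $L(w_1, w_2) = (w_2 w_1 - 1)^2$ has a critical point at the origin whose Hessian has eigenvalues $\pm 2$, so the origin is a strict saddle. A short numerical check:

```python
import numpy as np

# Toy deep linear network: one input, one output, two scalar layers,
# squared loss against target y = 1:  L(w1, w2) = (w2 * w1 - 1) ** 2.
def loss(w):
    return (w[1] * w[0] - 1.0) ** 2

def hessian(fun, w, eps=1e-5):
    """Numerical Hessian via central finite differences."""
    d = len(w)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (fun(w + e_i + e_j) - fun(w + e_i - e_j)
                       - fun(w - e_i + e_j) + fun(w - e_i - e_j)) / (4 * eps ** 2)
    return H

w0 = np.zeros(2)            # critical point: the gradient vanishes at the origin
eigs = np.linalg.eigvalsh(hessian(loss, w0))
print(eigs)                 # ~ [-2, 2]: a negative eigenvalue => strict saddle
```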

Numerical influence of ReLU'(0) on backpropagation

1 code implementation • NeurIPS 2021 • David Bertoin, Jérôme Bolte, Sébastien Gerchinovitz, Edouard Pauwels

In theory, the choice of ReLU'(0) in [0, 1] for a neural network has a negligible influence both on backpropagation and training.
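
A minimal NumPy sketch (a hypothetical setup, not the paper's experiments) of where the ambiguity enters: any value in [0, 1] is a valid subderivative of ReLU at 0, and the backpropagated gradient changes with that choice whenever a pre-activation lands exactly on 0, which happens surprisingly often in low-precision arithmetic.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x, g0):
    """Derivative of ReLU, with the (arbitrary) choice g0 in [0, 1] at x == 0."""
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, g0))

# One hidden unit hit exactly at the kink: pre-activation z == 0.
x, w1, w2 = 2.0, 0.0, 3.0
z = w1 * x          # z == 0: the non-differentiable point of ReLU
y = w2 * relu(z)

for g0 in (0.0, 0.5, 1.0):
    # Backprop: dy/dw1 = w2 * relu'(z) * x, which depends on g0.
    grad_w1 = w2 * relu_grad(z, g0) * x
    print(f"ReLU'(0) = {g0}: dy/dw1 = {grad_w1}")
# In exact arithmetic z == 0 has measure zero, but float32 rounding makes
# such ties frequent enough to affect training, as the paper shows.
```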

Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates

no code implementations • NeurIPS 2021 • François Bachoc, Tommaso R Cesari, Sébastien Gerchinovitz

We study the problem of zeroth-order (black-box) optimization of a Lipschitz function $f$ defined on a compact subset $\mathcal X$ of $\mathbb R^d$, with the additional constraint that algorithms must certify the accuracy of their recommendations.
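
For intuition about what a certificate is (the paper's algorithms are adaptive and instance-dependent; the sketch below is only the naive non-adaptive baseline in one dimension, with illustrative names):

```python
import numpy as np

def certified_lipschitz_max(f, lo, hi, L, n_queries=64):
    """Maximize an L-Lipschitz f on [lo, hi] and certify the accuracy.

    A uniform grid with spacing h covers every point within h/2, so
    max f <= best observed value + L * h / 2. The returned 'eps' is
    therefore a valid certificate: f(x_best) >= max f - eps.
    """
    xs = np.linspace(lo, hi, n_queries)
    vals = np.array([f(x) for x in xs])
    best = int(np.argmax(vals))
    h = (hi - lo) / (n_queries - 1)
    eps = L * h / 2.0
    return xs[best], vals[best], eps

x_star, f_star, eps = certified_lipschitz_max(lambda x: -abs(x - 0.3), 0.0, 1.0, L=1.0)
print(x_star, f_star, "certified within", eps)
```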

The sample complexity of level set approximation

no code implementations • 26 Oct 2020 • François Bachoc, Tommaso Cesari, Sébastien Gerchinovitz

We study the problem of approximating the level set of an unknown function by sequentially querying its values.
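
A non-adaptive one-dimensional sketch of the certification idea (the paper studies sequential, adaptive querying; names here are illustrative): under an $L$-Lipschitz assumption, a query at $x$ certifies whether $f$ stays above or below the level on a whole cell around $x$ whenever $|f(x) - \text{level}| \geq L h / 2$.

```python
import numpy as np

def certified_level_set(f, lo, hi, L, level, n_queries=128):
    """One-pass sketch: query a uniform grid and certify, for each grid
    cell, whether f stays above or below the level on the whole cell.
    An L-Lipschitz f can move by at most L*h/2 within half a cell."""
    xs = np.linspace(lo, hi, n_queries)
    h = (hi - lo) / (n_queries - 1)
    above, below, undecided = [], [], []
    for x in xs:
        v = f(x)
        if v - level >= L * h / 2:
            above.append(x)
        elif level - v >= L * h / 2:
            below.append(x)
        else:
            undecided.append(x)   # cell may straddle the level set boundary
    return above, below, undecided

a, b, u = certified_level_set(lambda x: np.sin(6 * x), 0.0, 3.14, L=6.0, level=0.0)
print(len(a), len(b), len(u))
```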

Diversity-Preserving K-Armed Bandits, Revisited

no code implementations • 5 Oct 2020 • Hédi Hadiji, Sébastien Gerchinovitz, Jean-Michel Loubes, Gilles Stoltz

We consider the bandit-based framework for diversity-preserving recommendations introduced by Celis et al. (2019), who approached it in the case of a polytope mainly by a reduction to the setting of linear bandits.
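
As a hypothetical instantiation (the paper treats general polytopes; this sketch hard-codes the constraint set $\{p : p_i \geq \ell_i\}$ and a UCB index, neither of which is the paper's algorithm): every arm keeps a minimum playing probability, and the leftover mass goes to the current UCB leader.

```python
import numpy as np

rng = np.random.default_rng(2)

def diversity_preserving_ucb(means, min_probs, horizon=10000):
    """Sketch of a diversity-preserving bandit: every arm must be played
    with probability at least min_probs[i]; the leftover probability mass
    goes to the arm with the best UCB index."""
    K = len(means)
    counts, sums, total = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, horizon + 1):
        n = np.maximum(counts, 1)
        ucb = np.where(counts > 0,
                       sums / n + np.sqrt(2 * np.log(t) / n), np.inf)
        p = np.array(min_probs, dtype=float)
        p[int(np.argmax(ucb))] += 1.0 - p.sum()   # leftover mass to the leader
        arm = rng.choice(K, p=p)
        r = float(rng.random() < means[arm])      # Bernoulli reward
        counts[arm] += 1; sums[arm] += r; total += r
    return total / horizon

print(diversity_preserving_ucb([0.3, 0.5, 0.7], [0.1, 0.1, 0.1]))
```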

Regret analysis of the Piyavskii-Shubert algorithm for global Lipschitz optimization

no code implementations • 6 Feb 2020 • Clément Bouttier, Tommaso Cesari, Mélanie Ducoffe, Sébastien Gerchinovitz

We consider the problem of maximizing a non-concave Lipschitz multivariate function over a compact domain by sequentially querying its (possibly perturbed) values.
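
A minimal one-dimensional implementation of the Piyavskii-Shubert algorithm itself, which repeatedly queries the maximizer of the piecewise-linear Lipschitz upper envelope (noiseless case; names are illustrative):

```python
def piyavskii_shubert(f, lo, hi, L, n_queries=30):
    """Maximize an L-Lipschitz f on [lo, hi] by always querying the maximizer
    of the piecewise-linear Lipschitz upper envelope (Piyavskii-Shubert)."""
    pts = sorted([(lo, f(lo)), (hi, f(hi))])
    for _ in range(n_queries - 2):
        best_peak, best_x = -float("inf"), None
        # The envelope's peak over each gap (x1, x2) sits where the cones
        # f1 + L(x - x1) and f2 + L(x2 - x) intersect.
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            peak = 0.5 * (f1 + f2 + L * (x2 - x1))
            if peak > best_peak:
                best_peak = peak
                best_x = 0.5 * (x1 + x2) + (f2 - f1) / (2.0 * L)
        pts.append((best_x, f(best_x)))
        pts.sort()
    return max(pts, key=lambda p: p[1])

x_best, f_best = piyavskii_shubert(lambda x: -(x - 0.7) ** 2, 0.0, 1.0, L=2.0)
print(x_best, f_best)
```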

Optimization of a SSP's Header Bidding Strategy using Thompson Sampling

no code implementations • 9 Jul 2018 • Grégoire Jauvion, Nicolas Grislain, Pascal Sielenou Dkengne, Aurélien Garivier, Sébastien Gerchinovitz

The SSP (supply-side platform) acts as an intermediary between advertisers wanting to buy ad spaces and a web publisher wanting to sell its ad spaces; it needs a bidding strategy that delivers as many ads as possible to the advertisers while spending as little as possible.

Thompson Sampling
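
The paper's bandit model for header bidding is more structured; the sketch below is only generic Beta-Bernoulli Thompson Sampling over a finite set of arms, e.g. a discretized grid of candidate bid levels (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_sampling(success_probs, horizon=5000):
    """Beta-Bernoulli Thompson Sampling over a finite set of arms."""
    n_arms = len(success_probs)
    wins = np.ones(n_arms)     # Beta(1, 1) uniform priors
    losses = np.ones(n_arms)
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(wins, losses)      # one posterior sample per arm
        arm = int(np.argmax(theta))         # play the best sampled arm
        reward = rng.random() < success_probs[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
        total += reward
    return total

print(thompson_sampling([0.2, 0.5, 0.55]))
```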

Uniform regret bounds over $\mathbb{R}^d$ for the sequential linear regression problem with the square loss

no code implementations • 29 May 2018 • Pierre Gaillard, Sébastien Gerchinovitz, Malo Huard, Gilles Stoltz

In the case of sequentially revealed features, we also derive an asymptotic regret bound of $d B^2 \ln T$ for any individual sequence of features and bounded observations.

regression
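
The $d B^2 \ln T$ regime is the one attained by ridge-type forecasters such as the Vovk-Azoury-Warmuth predictor; here is a minimal sketch of that forecaster (illustrative, not necessarily the exact variant analyzed in the paper):

```python
import numpy as np

def vaw_forecaster(X, y, lam=1.0):
    """Vovk-Azoury-Warmuth forecaster for sequential linear regression.

    At round t it sees x_t, predicts, then observes y_t. Including the
    current x_t in the regularized Gram matrix BEFORE predicting is what
    distinguishes VAW from plain online ridge regression."""
    T, d = X.shape
    A = lam * np.eye(d)        # regularized Gram matrix
    b = np.zeros(d)            # running sum of y_s * x_s
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)    # x_t enters before the prediction is made
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x
    return preds

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=200)
print(np.mean((vaw_forecaster(X, y) - y) ** 2))
```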

Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning

no code implementations • 27 Feb 2017 • Nicolò Cesa-Bianchi, Pierre Gaillard, Claudio Gentile, Sébastien Gerchinovitz

We investigate contextual online learning with nonparametric (Lipschitz) comparison classes under different assumptions on losses and feedback information.

Refined Lower Bounds for Adversarial Bandits

no code implementations • NeurIPS 2016 • Sébastien Gerchinovitz, Tor Lattimore

First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case.

A Chaining Algorithm for Online Nonparametric Regression

no code implementations • 26 Feb 2015 • Pierre Gaillard, Sébastien Gerchinovitz

We consider the problem of online nonparametric regression with arbitrary deterministic sequences.

Computational Efficiency, regression

Adaptive and optimal online linear regression on $\ell^1$-balls

no code implementations • 20 May 2011 • Sébastien Gerchinovitz, Jia Yuan Yu

We first present regret bounds with optimal dependencies on $d$, $T$, and on the sizes $U$, $X$ and $Y$ of the $\ell^1$-ball, the input data and the observations.

regression
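
A standard EG±-style sketch for prediction on an $\ell^1$-ball (illustrative; the paper's contribution is the adaptive tuning to unknown $U$, $X$, $Y$, which is not implemented here):

```python
import numpy as np

def eg_pm(X, y, U=1.0, eta=0.1):
    """EG+/- for online linear regression on the l1-ball of radius U.

    Maintains a distribution over the 2d signed coordinates; the predictor
    U * (w_plus - w_minus) always stays inside the l1-ball."""
    T, d = X.shape
    w = np.full(2 * d, 1.0 / (2 * d))   # weights on the vertices +U*e_i, -U*e_i
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        u = U * (w[:d] - w[d:])          # current point in the l1-ball
        preds[t] = u @ x
        g = 2.0 * (preds[t] - y[t]) * x  # gradient of the square loss in u
        w *= np.exp(-eta * U * np.concatenate([g, -g]))
        w /= w.sum()                     # exponentiated-gradient update
    return preds

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))
y = X[:, 0] - X[:, 1]                    # a 2-sparse target inside the l1-ball
print(np.mean((eg_pm(X, y, U=2.0) - y) ** 2))
```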

Sparsity regret bounds for individual sequences in online linear regression

no code implementations • 5 Jan 2011 • Sébastien Gerchinovitz

We consider the problem of online linear regression on arbitrary deterministic sequences when the ambient dimension $d$ can be much larger than the number of time rounds $T$. We introduce the notion of sparsity regret bound, which is a deterministic online counterpart of recent risk bounds derived in the stochastic setting under a sparsity scenario.

regression
