Search Results for author: François Bachoc

Found 20 papers, 7 papers with code

Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling

1 code implementation13 Oct 2023 Julien Demange-Chryst, François Bachoc, Jérôme Morio, Timothé Krauth

In this contribution, we suggest using a distribution parameterised by a variational autoencoder as the approximating model.
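For context, a minimal sketch of the estimator this method plugs into: self-normalised importance sampling with a parametric proposal. A plain Gaussian stands in for the paper's VAE-parameterised proposal, and all parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: expectation of phi(x) under an (unnormalised) density p.
# In the paper the proposal q is a VAE fitted on weighted samples; here a
# simple Gaussian stands in for it, just to show the estimator itself.
def log_p(x):            # standard Gaussian target, unnormalised
    return -0.5 * np.sum(x**2, axis=1)

def phi(x):              # quantity of interest (here an event indicator)
    return (np.linalg.norm(x, axis=1) > 4.0).astype(float)

d = 10
mu, sigma = np.zeros(d), 1.5                    # proposal parameters (assumed)
x = rng.normal(mu, sigma, size=(100_000, d))    # sample from the proposal q
log_q = -0.5 * np.sum(((x - mu) / sigma)**2, axis=1) - d * np.log(sigma)

w = np.exp(log_p(x) - log_q)                    # importance weights p/q
estimate = np.sum(w * phi(x)) / np.sum(w)       # self-normalised IS estimator
print(estimate)
```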

Gaussian Processes on Distributions based on Regularized Optimal Transport

no code implementations12 Oct 2022 François Bachoc, Louis Béthune, Alberto Gonzalez-Sanz, Jean-Michel Loubes

We present a novel kernel over the space of probability measures based on the dual formulation of optimal regularized transport.

Gaussian Processes
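A hedged sketch of a kernel of this flavour: a Gaussian-type kernel built on the entropy-regularised optimal transport cost between empirical measures, with the Sinkhorn iterations written out in plain NumPy. The paper derives its kernel from the dual formulation of regularised transport, so this plain exponential form only illustrates the idea.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=0.5, n_iter=200):
    """Cost of the entropy-regularised optimal plan between two empirical
    measures (uniform weights), squared Euclidean ground cost."""
    C = np.sum((X[:, None, :] - Y[None, :, :])**2, axis=2)
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):              # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]      # regularised transport plan
    return np.sum(P * C)

def k(X, Y, eps=0.5, l=1.0):             # Gaussian-type kernel on measures
    return np.exp(-sinkhorn_cost(X, Y, eps) / l)

rng = np.random.default_rng(0)
mu = rng.normal(0, 1, (30, 2))           # samples from measure mu
nu = rng.normal(1, 1, (30, 2))           # samples from measure nu
print(k(mu, nu))
```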

Large-Sample Properties of Non-Stationary Source Separation for Gaussian Signals

no code implementations21 Sep 2022 François Bachoc, Christoph Muehlmann, Klaus Nordhausen, Joni Virta

Non-stationary source separation is a well-established branch of blind source separation with many different methods.

blind source separation
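A toy illustration of one classical non-stationary separation idea (in the spirit of methods such as NSS-SD, not this paper's specific estimators): jointly diagonalise covariance matrices computed on two time intervals via a generalised eigendecomposition.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
T = 10_000
# Two sources with different variance profiles over time (non-stationary).
s1 = rng.normal(0, np.linspace(0.2, 2.0, T))
s2 = rng.normal(0, np.linspace(2.0, 0.2, T))
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                 # observed mixtures

# Jointly diagonalise the covariances of the two halves of the data:
# generalised eigenvectors W satisfy W.T C1 W and W.T C2 W both diagonal.
C1 = np.cov(X[:, : T // 2])
C2 = np.cov(X[:, T // 2 :])
_, W = eigh(C1, C2)
Y = W.T @ X                               # recovered sources, up to scale/order
print(np.corrcoef(Y[0], s1)[0, 1], np.corrcoef(Y[0], s2)[0, 1])
```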

Regret Analysis of Dyadic Search

no code implementations2 Sep 2022 François Bachoc, Tommaso Cesari, Roberto Colomboni, Andrea Paudice

We analyze the cumulative regret of the Dyadic Search algorithm of Bachoc et al. [2022].

A Near-Optimal Algorithm for Univariate Zeroth-Order Budget Convex Optimization

no code implementations13 Aug 2022 François Bachoc, Tommaso Cesari, Roberto Colomboni, Andrea Paudice

This paper studies a natural generalization of the problem of minimizing a univariate convex function $f$ by querying its values sequentially.
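For intuition about this budgeted setting (and the Dyadic Search entry above, which shares its flavour), here is a plain trisection search with a fixed query budget and cumulative-regret accounting. It is a baseline sketch, not the paper's near-optimal algorithm.

```python
import numpy as np

def trisection(f, lo, hi, budget):
    """Minimise a univariate convex f on [lo, hi] with a fixed query budget."""
    queries, values = [], []
    while budget >= 2:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        f1, f2 = f(m1), f(m2)
        queries += [m1, m2]
        values += [f1, f2]
        budget -= 2
        if f1 <= f2:      # by convexity, the minimiser is not in (m2, hi]
            hi = m2
        else:             # symmetrically, the minimiser is not in [lo, m1)
            lo = m1
    return queries, values

f = lambda x: (x - 0.3)**2
qs, vs = trisection(f, 0.0, 1.0, budget=30)
fstar = f(0.3)
print("best point:", qs[int(np.argmin(vs))])
print("cumulative regret:", sum(v - fstar for v in vs))
```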

Local Identifiability of Deep ReLU Neural Networks: the Theory

no code implementations15 Jun 2022 Joachim Bona-Pellissier, François Malgouyres, François Bachoc

Is a sample rich enough to determine, at least locally, the parameters of a neural network?

Parameter identifiability of a deep feedforward ReLU neural network

no code implementations24 Dec 2021 Joachim Bona-Pellissier, François Bachoc, François Malgouyres

The possibility for one to recover the parameters (weights and biases) of a neural network from the knowledge of its function on a subset of the input space can be, depending on the situation, a curse or a blessing.
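One reason recovery can only hold up to symmetries: ReLU is positively homogeneous, so rescaling a neuron's incoming weights by c > 0 and its outgoing weights by 1/c leaves the network function unchanged. A quick NumPy check:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def net(x, W1, b1, W2, b2):
    return relu(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 2)), rng.normal(size=2)
x = rng.normal(size=(10, 3))

# Rescale one hidden neuron: incoming weights by c, outgoing by 1/c.
c = 2.5
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[:, 0] *= c
b1s[0] *= c
W2s[0, :] /= c

print(np.allclose(net(x, W1, b1, W2, b2), net(x, W1s, b1s, W2s, b2)))  # True
```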

Instance-Dependent Bounds for Zeroth-order Lipschitz Optimization with Error Certificates

no code implementations NeurIPS 2021 François Bachoc, Tommaso R Cesari, Sébastien Gerchinovitz

We study the problem of zeroth-order (black-box) optimization of a Lipschitz function $f$ defined on a compact subset $\mathcal X$ of $\mathbb R^d$, with the additional constraint that algorithms must certify the accuracy of their recommendations.
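A minimal sketch of an error certificate in this spirit: with a known Lipschitz constant L, the queried values give a pointwise lower envelope on f, and the gap between the best observed value and the envelope's minimum certifies the recommendation's accuracy. The uniform query design below is illustrative only; the paper's algorithms query adaptively.

```python
import numpy as np

# f is 3-Lipschitz on [0, 2]; the certificate needs only L and the queries.
f, L, a, b = lambda x: np.abs(np.sin(3 * x)), 3.0, 0.0, 2.0

xs = np.linspace(a, b, 9)            # queried points (uniform, for simplicity)
ys = f(xs)

grid = np.linspace(a, b, 10_001)
# Lipschitz lower envelope: f(x) >= max_i f(x_i) - L |x - x_i|.
lower = np.max(ys[:, None] - L * np.abs(grid[None, :] - xs[:, None]), axis=0)

best = ys.min()                      # best observed value (upper bound on min f)
certificate = best - lower.min()     # certified optimality gap
print(f"recommendation error is at most {certificate:.4f}")
```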

The sample complexity of level set approximation

no code implementations26 Oct 2020 François Bachoc, Tommaso Cesari, Sébastien Gerchinovitz

We study the problem of approximating the level set of an unknown function by sequentially querying its values.
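In the Lipschitz case, the queried values let one certify on which side of the threshold each point lies. A simplified sketch with uniform rather than sequential queries, assuming a known Lipschitz constant:

```python
import numpy as np

# Level set {x : f(x) >= tau} of an L-Lipschitz f: a grid point is provably
# above (resp. below) the threshold when some queried value forces it there.
f, L, tau = lambda x: np.sin(3 * x), 3.0, 0.5
xs = np.linspace(0.0, 2.0, 25)       # queries (uniform here, for simplicity)
ys = f(xs)

grid = np.linspace(0.0, 2.0, 2001)
dist = np.abs(grid[None, :] - xs[:, None])
upper = np.min(ys[:, None] + L * dist, axis=0)   # pointwise upper bound on f
lower = np.max(ys[:, None] - L * dist, axis=0)   # pointwise lower bound on f

above, below = lower >= tau, upper < tau
decided = above | below
print(f"classified: {np.mean(decided):.0%}, undecided: {np.mean(~decided):.0%}")
```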

Rate of convergence for geometric inference based on the empirical Christoffel function

no code implementations31 Oct 2019 Mai Trang Vu, François Bachoc, Edouard Pauwels

We consider the problem of estimating the support of a measure from a finite, independent sample.
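A small sketch of the empirical Christoffel function this line of work builds on: form the empirical moment matrix of polynomial features, then declare points with a small Christoffel value outside the support. The degree and cutoff below are illustrative choices, not the paper's calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.uniform(-1, 1, 500)                 # support to recover: [-1, 1]

d = 6                                            # polynomial degree (assumed)
V = np.vander(sample, d + 1, increasing=True)    # monomial features v(x)
M = V.T @ V / len(sample)                        # empirical moment matrix

def christoffel(x):
    """Empirical Christoffel function 1 / (v(x)^T M^{-1} v(x))."""
    v = np.vander(np.atleast_1d(x), d + 1, increasing=True)
    return 1.0 / np.einsum("ij,jk,ik->i", v, np.linalg.inv(M), v)

# Small Christoffel values flag points outside the support; the cutoff
# below (of order (d+1)/n) is a heuristic for illustration only.
grid = np.linspace(-1.5, 1.5, 7)
inside = christoffel(grid) >= 0.5 * (d + 1) / len(sample)
for x, flag in zip(grid, inside):
    print(f"x = {x:+.2f}: {'in' if flag else 'out'}")
```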

Approximating Gaussian Process Emulators with Linear Inequality Constraints and Noisy Observations via MC and MCMC

no code implementations15 Jan 2019 Andrés F. López-Lopera, François Bachoc, Nicolas Durrande, Jérémy Rohmer, Déborah Idier, Olivier Roustant

Finally, on 2D and 5D coastal flooding applications, we show that more flexible and realistic GP implementations can be obtained by considering noise effects and by enforcing the (linear) inequality constraints.

Gaussian Processes
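A crude Monte Carlo rendering of the idea: draw GP posterior paths on a grid and reject those violating the constraint (here nonnegativity). The paper's MC and MCMC schemes are far more efficient than plain rejection, and the kernel, data, and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, l=0.4):                        # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / l**2)

X = np.array([0.1, 0.5, 0.9])
y = np.array([0.2, 0.8, 0.1])              # noisy observations (assumed)
g = np.linspace(0, 1, 50)                  # prediction grid
noise = 1e-4

Kxx = k(X, X) + noise * np.eye(3)
Kgx, Kgg = k(g, X), k(g, g)
A = Kgx @ np.linalg.inv(Kxx)
mean = A @ y
cov = Kgg - A @ Kgx.T + 1e-6 * np.eye(len(g))

paths = rng.multivariate_normal(mean, cov, size=2000)
kept = paths[(paths >= 0).all(axis=1)]     # rejection: keep feasible paths
print(f"acceptance rate: {len(kept) / 2000:.1%}")
```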

Explaining Machine Learning Models using Entropic Variable Projection

2 code implementations18 Oct 2018 François Bachoc, Fabrice Gamboa, Max Halford, Jean-Michel Loubes, Laurent Risser

In order to emphasize the impact of each input variable, this formalism uses an information theory framework that quantifies the influence of all input-output observations based on entropic projections.

BIG-bench Machine Learning
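A minimal sketch of the reweighting idea: the minimum-KL (entropic) way to shift the mean of one input variable is exponential tilting of the sample weights, after which the model's weighted error can be re-examined. The one-dimensional setup and the brentq root-solve are illustrative simplifications of the paper's formalism.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

x = rng.normal(0, 1, 1000)                    # the input variable of interest
y = x + rng.normal(0, 1, 1000) > 0            # binary target
pred = x > 0                                  # some fitted model's predictions

def weights(target_mean):
    """Min-KL reweighting w_i ∝ exp(lam * x_i) matching a target mean of x."""
    lam = brentq(lambda l: np.average(x, weights=np.exp(l * x)) - target_mean,
                 -10, 10)
    w = np.exp(lam * x)
    return w / w.sum()

for t in [0.0, 0.5, 1.0]:
    w = weights(t)
    acc = np.sum(w * (pred == y))             # accuracy under the shifted law
    print(f"mean(x) -> {t:.1f}: weighted accuracy = {acc:.3f}")
```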

Maximum likelihood estimation for Gaussian processes under inequality constraints

1 code implementation10 Apr 2018 François Bachoc, Agnès Lagnoux, Andrés F. López-Lopera

We first show that the (unconstrained) maximum likelihood estimator has the same asymptotic distribution both unconditionally and conditionally on the event that the Gaussian process satisfies the inequality constraints.

Statistics Theory · Probability
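For reference, the unconstrained estimator in question is the usual Gaussian-process maximum likelihood. A compact NumPy/SciPy version for a single lengthscale parameter; the kernel choice and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulate a GP with an exponential kernel and true lengthscale 0.2.
X = np.sort(rng.uniform(0, 1, 40))
K_true = np.exp(-np.abs(X[:, None] - X[None, :]) / 0.2)
y = rng.multivariate_normal(np.zeros(40), K_true)

def neg_log_lik(l):
    K = np.exp(-np.abs(X[:, None] - X[None, :]) / l) + 1e-8 * np.eye(40)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

res = minimize_scalar(neg_log_lik, bounds=(0.01, 2.0), method="bounded")
print(f"true lengthscale 0.2, ML estimate {res.x:.3f}")
```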

Gaussian Processes indexed on the symmetric group: prediction and learning

no code implementations16 Mar 2018 François Bachoc, Baptiste Broto, Fabrice Gamboa, Jean-Michel Loubes

In the framework of supervised learning of a real function defined on a space X, the so-called Kriging method relies on a real Gaussian field defined on X.

Gaussian Processes
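One concrete positive-definite kernel on the symmetric group is the Mallows kernel, which decays with the Kendall tau distance between permutations. It is shown here as a standard illustration from this literature, not as the paper's specific construction.

```python
import numpy as np
from itertools import combinations

def kendall_distance(s, p):
    """Number of discordant pairs (Kendall tau distance) between rankings."""
    return sum((s[i] < s[j]) != (p[i] < p[j])
               for i, j in combinations(range(len(s)), 2))

def mallows_kernel(s, p, lam=0.5):
    return np.exp(-lam * kendall_distance(s, p))

s = np.array([0, 1, 2, 3])       # identity ranking
p = np.array([1, 0, 2, 3])       # one adjacent swap: close permutation
q = np.array([3, 2, 1, 0])       # full reversal: far permutation
print(mallows_kernel(s, p), mallows_kernel(s, q))
```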

Finite-dimensional Gaussian approximation with linear inequality constraints

1 code implementation20 Oct 2017 Andrés F. López-Lopera, François Bachoc, Nicolas Durrande, Olivier Roustant

Introducing inequality constraints in Gaussian process (GP) models can lead to more realistic uncertainties in learning a great variety of real-world problems.

Uncertainty Quantification
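The core trick, in sketch form: represent the GP by piecewise-linear interpolation of its values at knots, so that bound or monotonicity constraints hold on the whole interval exactly when they hold at the knots, that is, as finitely many linear constraints on a Gaussian vector. Plain rejection sampling stands in below for the paper's efficient truncated-Gaussian samplers.

```python
import numpy as np

rng = np.random.default_rng(0)
knots = np.linspace(0, 1, 15)
K = np.exp(-0.5 * (knots[:, None] - knots[None, :])**2 / 0.5**2)

# Sample knot values from the prior and keep the monotone ones.
samples = rng.multivariate_normal(np.zeros(15), K + 1e-6 * np.eye(15), 5000)
monotone = samples[np.all(np.diff(samples, axis=1) >= 0, axis=1)]
print(f"{len(monotone)} monotone prior paths out of 5000")

# Piecewise-linear interpolation of monotone knot values is monotone
# everywhere, so the finite constraints imply the functional one.
x = np.linspace(0, 1, 200)
path = np.interp(x, knots, monotone[0])
assert np.all(np.diff(path) >= -1e-12)
```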

A Gaussian Process Regression Model for Distribution Inputs

no code implementations31 Jan 2017 François Bachoc, Fabrice Gamboa, Jean-Michel Loubes, Nil Venet

We prove that the Gaussian processes indexed by distributions corresponding to these kernels can be efficiently forecast, opening new perspectives in Gaussian process modeling.

BIG-bench Machine Learning · Gaussian Processes
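For distributions on the real line, one efficiently computable construction uses the 2-Wasserstein distance, which reduces to comparing sorted samples. The Gaussian-type kernel below is a simplified instance of the kernels the paper analyses, not its full characterisation.

```python
import numpy as np

def w2(a, b):
    """2-Wasserstein distance between equal-size empirical measures on R:
    the L2 distance between quantile functions, i.e. sorted samples."""
    return np.sqrt(np.mean((np.sort(a) - np.sort(b))**2))

def k(a, b, l=1.0):                  # Gaussian-type kernel on distributions
    return np.exp(-w2(a, b)**2 / l**2)

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 200)
nu = rng.normal(0.5, 1.0, 200)
print(k(mu, mu), k(mu, nu))          # 1.0 vs. strictly smaller
```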

A supermartingale approach to Gaussian process based sequential design of experiments

no code implementations3 Aug 2016 Julien Bect, François Bachoc, David Ginsbourger

This observation enables us to establish generic consistency results for a broad class of SUR strategies.

Nested Kriging predictions for datasets with large number of observations

1 code implementation19 Jul 2016 Didier Rullière, Nicolas Durrande, François Bachoc, Clément Chevalier

This work falls within the context of predicting the value of a real function at some input locations given a limited number of observations of this function.
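A simplified sketch of the divide-and-aggregate idea: fit cheap GP submodels on subsets of the data and combine their predictions. Inverse-variance weighting is used below for brevity; the paper's nested-kriging aggregation additionally accounts for the covariances between submodels.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, l=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / l**2)

def gp_predict(Xg, yg, x):
    """Mean and variance of a zero-mean GP submodel at a single point x."""
    Kxx = k(Xg, Xg) + 0.05**2 * np.eye(len(Xg))
    kx = k(Xg, np.array([x]))[:, 0]
    alpha = np.linalg.solve(Kxx, kx)
    return alpha @ yg, 1.0 - alpha @ kx

X = rng.uniform(0, 1, 400)
y = np.sin(6 * X) + 0.05 * rng.normal(size=400)

# Split the large dataset into small groups and aggregate the submodels.
groups = np.array_split(rng.permutation(400), 10)
x0 = 0.37
means, variances = zip(*(gp_predict(X[g], y[g], x0) for g in groups))
w = 1.0 / np.array(variances)
print("aggregated prediction:", np.sum(w * np.array(means)) / np.sum(w))
print("true value:", np.sin(6 * x0))
```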
