2 code implementations • 16 Feb 2023 • Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Granta, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, Ivo Couckuyt
We present Trieste, an open-source Python package for Bayesian optimization and active learning that benefits from the scalability and efficiency of TensorFlow.
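A minimal usage sketch of the package, following its documented quickstart pattern; module paths and signatures such as `trieste.models.gpflow.build_gpr` are version-dependent, so treat the exact names below as assumptions:

```python
import tensorflow as tf
import trieste

# Toy objective on the unit square; observers return tensors of shape [..., 1].
def objective(x: tf.Tensor) -> tf.Tensor:
    return tf.reduce_sum(tf.sin(3.0 * x) + x ** 2, axis=-1, keepdims=True)

search_space = trieste.space.Box([0.0, 0.0], [1.0, 1.0])
observer = trieste.objectives.utils.mk_observer(objective)

# A few random initial observations to fit the first GP surrogate.
initial_data = observer(search_space.sample(5))
gpr = trieste.models.gpflow.build_gpr(initial_data, search_space)
model = trieste.models.gpflow.GaussianProcessRegression(gpr)

# Run the Bayesian optimization loop with the default acquisition rule.
bo = trieste.bayesian_optimizer.BayesianOptimizer(observer, search_space)
result = bo.optimize(15, initial_data, model)
print(result.try_get_final_dataset())
```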
no code implementations • 1 Feb 2023 • Sing-Yuan Yeh, Fu-Chieh Chang, Chang-Wei Yueh, Pei-Yuan Wu, Alberto Bernacchia, Sattar Vakili
To the best of our knowledge, this is the first result showing a finite sample complexity under such a general model.
no code implementations • 1 Feb 2023 • Sattar Vakili, Danyal Ahmed, Alberto Bernacchia, Ciara Pike-Burke
An abstraction of the problem can be formulated as a kernel-based bandit problem (also known as Bayesian optimisation), in which a learner aims to optimise a kernelized function through sequential noisy observations.
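A bare-bones GP-UCB-style loop on a discretized one-dimensional domain illustrates the setting; the kernel, noise level, and constant confidence width `beta` are illustrative choices here, whereas the theory sets the width from the RKHS norm and the information gain:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel matrix between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
f = lambda x: np.sin(6 * x)           # unknown objective (simulation only)
domain = np.linspace(0, 1, 200)       # discretized action set
noise, beta = 0.1, 2.0                # observation noise std, confidence width

X, y = [], []
x_next = 0.5                          # arbitrary first query
for t in range(30):
    X.append(x_next)
    y.append(f(x_next) + noise * rng.normal())
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + noise ** 2 * np.eye(len(Xa))
    Ks = rbf(domain, Xa)
    mu = Ks @ np.linalg.solve(K, ya)                   # posterior mean
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    x_next = domain[np.argmax(mu + beta * np.sqrt(np.maximum(var, 0)))]

print("best observed point:", Xa[np.argmax(ya)])
```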
no code implementations • 16 Jul 2022 • Sudeep Salgia, Sattar Vakili, Qing Zhao
We study collaborative learning among distributed clients facilitated by a central server.
no code implementations • 31 May 2022 • Sudeep Salgia, Sattar Vakili, Qing Zhao
The non-asymptotic error bounds may be of broader interest as a tool to establish the relation between the smoothness of the activation functions in neural contextual bandits and the smoothness of the kernels in kernel bandits.
1 code implementation • 31 May 2022 • Clémence Réda, Sattar Vakili, Emilie Kaufmann
In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret.
no code implementations • 8 Feb 2022 • Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia
Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization.
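For concreteness, kernel ridge regression in a few lines of numpy; the kernel and regularization value are chosen arbitrarily for illustration:

```python
import numpy as np

def matern12(a, b, ell=0.3):
    # Matern-1/2 (exponential) kernel between two 1-D point sets.
    return np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 40)
y = np.cos(4 * X) + 0.1 * rng.normal(size=40)

lam = 0.1                                              # ridge regularizer
alpha = np.linalg.solve(matern12(X, X) + lam * np.eye(len(X)), y)

X_test = np.linspace(0, 1, 5)
y_pred = matern12(X_test, X) @ alpha                   # k(x_test, X) @ alpha
print(y_pred)
```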
no code implementations • 28 Oct 2021 • Sattar Vakili, Jonathan Scarlett, Tara Javidi
Confidence intervals are a crucial building block in the analysis of various online learning problems.
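The generic form of such a kernel-based confidence interval, as it appears across this line of work: with probability at least $1 - \delta$, simultaneously for all $x$,

$$ |f(x) - \mu_t(x)| \le \beta_t(\delta)\, \sigma_t(x), $$

where $\mu_t$ and $\sigma_t$ are the posterior mean and standard deviation after $t$ observations; the analyses differ in how tight a width multiplier $\beta_t(\delta)$ they can justify.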
no code implementations • 13 Sep 2021 • Sattar Vakili, Michael Bromberg, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia
As a byproduct of our results, we show the equivalence between the RKHS corresponding to the NT kernel and its counterpart corresponding to the Matérn family of kernels, showing that NT kernels induce a very general class of models.
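For reference, the Matérn kernel with smoothness parameter $\nu$ and lengthscale $\ell$, where $K_\nu$ is the modified Bessel function of the second kind and $\Gamma$ the gamma function:

$$ k_\nu(x, x') = \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{\sqrt{2\nu}\,\|x - x'\|}{\ell} \right)^{\!\nu} K_\nu\!\left( \frac{\sqrt{2\nu}\,\|x - x'\|}{\ell} \right). $$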
no code implementations • NeurIPS 2021 • Sattar Vakili, Nacime Bouziani, Sepehr Jalali, Alberto Bernacchia, Da-Shan Shiu
Consider the sequential optimization of a continuous, possibly non-convex, and expensive-to-evaluate objective function $f$.
1 code implementation • NeurIPS 2021 • Sudeep Salgia, Sattar Vakili, Qing Zhao
We consider sequential optimization of an unknown function in a reproducing kernel Hilbert space.
no code implementations • 15 Sep 2020 • Sattar Vakili, Kia Khezeli, Victor Picheny
For the Matérn family of kernels, where lower bounds on $\gamma_T$, and on regret under the frequentist setting, are known, our results close a gap between the upper and lower bounds that was polynomial in $T$ (up to factors logarithmic in $T$).
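Here $\gamma_T$ denotes the maximal information gain: for observation noise variance $\sigma^2$ and kernel matrix $K_T = [k(x_i, x_j)]_{i,j \le T}$,

$$ \gamma_T = \max_{x_1, \dots, x_T} \frac{1}{2} \log \det\!\left( I_T + \sigma^{-2} K_T \right), $$

where the maximum is over choices of the $T$ observation points.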
no code implementations • NeurIPS 2021 • Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, Victor Picheny
We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS.
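For context, the exact (non-scalable) GP Thompson sampling step on a discrete candidate set is sketched below; the scalable variant replaces the exact posterior draw with a sparse approximation, which is not shown here:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(2)
cand = np.linspace(0, 1, 100)          # candidate arms
X = rng.uniform(0, 1, 10)              # past query points
y = np.sin(6 * X) + 0.1 * rng.normal(size=10)

noise = 0.1
K = rbf(X, X) + noise ** 2 * np.eye(len(X))
Ks = rbf(cand, X)
mu = Ks @ np.linalg.solve(K, y)                       # posterior mean
cov = rbf(cand, cand) - Ks @ np.linalg.solve(K, Ks.T) # posterior covariance

# One Thompson sample: draw a function from the posterior, play its argmax.
sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(cand)))
x_next = cand[np.argmax(sample)]
print("next query:", x_next)
```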
no code implementations • ICML 2020 • Sudeep Salgia, Qing Zhao, Sattar Vakili
A framework based on iterative coordinate minimization (CM) is developed for stochastic convex optimization.
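A bare-bones cyclic coordinate minimization loop for a smooth convex function, with an exact one-dimensional line search via scipy; the stochastic variants analyzed in the paper replace this exact line search with noisy one-dimensional solvers:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    # Smooth convex test function (positive-definite quadratic).
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    return x @ A @ x - np.array([1.0, 4.0]) @ x

x = np.zeros(2)
for sweep in range(20):
    for i in range(len(x)):
        # Minimize f along coordinate i, holding the others fixed.
        def along(t, i=i, x=x):
            z = x.copy()
            z[i] = t
            return f(z)
        x[i] = minimize_scalar(along).x
print("minimizer estimate:", x)
```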
no code implementations • 9 Mar 2020 • Ayman Boustati, Sattar Vakili, James Hensman, ST John
Approximate inference in complex probabilistic models such as deep Gaussian processes requires the optimisation of doubly stochastic objective functions.
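"Doubly stochastic" here means the gradient estimate is noisy from two independent sources: minibatch subsampling of the data and Monte Carlo sampling of the latent variables. Below is a schematic estimator for a toy Gaussian variational posterior (all names and values hypothetical, and not the paper's variance-reduction scheme):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2))                     # dataset
w_true = np.array([1.5, -0.5])
y = X @ w_true + rng.normal(scale=0.5, size=1000)

mu = np.zeros(2)                                   # variational mean of q(w)
for step in range(500):
    idx = rng.choice(1000, size=32)                # noise source 1: minibatch
    eps = rng.normal(size=(8, 2))                  # noise source 2: MC samples
    w = mu + 0.1 * eps                             # reparameterized draws
    resid = X[idx] @ w.T - y[idx][:, None]         # (32, 8) residuals
    # Reparameterized gradient of the expected squared error w.r.t. mu,
    # rescaled from the minibatch to the full dataset.
    grad = 2 * (X[idx].T @ resid).mean(axis=1) * (1000 / 32)
    mu -= 1e-4 * grad
print("estimate:", mu)
```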
no code implementations • 12 Feb 2020 • Sattar Vakili
The kernel-based bandit is an extensively studied black-box optimization problem in which the objective function is assumed to live in a known reproducing kernel Hilbert space.
no code implementations • 5 Dec 2019 • Victor Picheny, Sattar Vakili, Artem Artemev
Bayesian optimisation is a powerful tool for solving expensive black-box problems, but it fails when the stationarity assumption made on the objective function is strongly violated, as is the case in particular for ill-conditioned or discontinuous objectives.
no code implementations • 16 May 2019 • James A. Grant, Alexis Boukouvalas, Ryan-Rhys Griffiths, David S. Leslie, Sattar Vakili, Enrique Munoz de Cote
We consider the problem of adaptively placing sensors along an interval to detect stochastically generated events.
no code implementations • 17 Jan 2019 • Sattar Vakili, Sudeep Salgia, Qing Zhao
Online minimization of an unknown convex function over the interval $[0, 1]$ is considered under first-order stochastic bandit feedback, which returns a random realization of the gradient of the function at each query point.
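The feedback model in one loop: each query returns a noisy gradient, and the learner takes a projected step back into $[0, 1]$; the $1/\sqrt{t}$ step size below is the generic choice, not the paper's schedule:

```python
import numpy as np

rng = np.random.default_rng(4)
grad = lambda x: 2 * (x - 0.3)        # gradient of f(x) = (x - 0.3)^2
x = 0.9
for t in range(1, 1001):
    g = grad(x) + rng.normal()        # first-order stochastic bandit feedback
    x = np.clip(x - g / np.sqrt(t), 0.0, 1.0)   # projected gradient step
print("final iterate:", x)
```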
no code implementations • 24 Jul 2018 • Sattar Vakili, Alexis Boukouvalas, Qing Zhao
In this paper, we study a risk-averse online learning problem under the mean-variance of the rewards as the performance measure.
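One common convention defines the mean-variance of an arm with mean $\mu$ and variance $\sigma^2$, given a risk-tolerance parameter $\rho > 0$, as

$$ \xi = \sigma^2 - \rho\, \mu, $$

so that minimizing $\xi$ trades expected reward against risk; sign and normalization conventions vary across papers.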
no code implementations • 12 Feb 2018 • Xiao Xu, Sattar Vakili, Qing Zhao, Ananthram Swami
Two settings, complete and partial side information, are studied, according to whether the UIG is fully revealed. To fully exploit the topological structure of the side information, a general two-step learning structure is proposed: an offline reduction of the action space, followed by online aggregation of reward observations from similar arms.
no code implementations • 11 Sep 2017 • Sattar Vakili, Qing Zhao, Chang Liu, Chen-Nee Chuah
We consider the problem of detecting a few targets among a large number of hierarchical data streams.
no code implementations • 18 Apr 2016 • Sattar Vakili, Qing Zhao
We show that the model-specific regret and the model-independent regret in terms of the mean-variance of the reward process are lower bounded by $\Omega(\log T)$ and $\Omega(T^{2/3})$, respectively.