no code implementations • 20 Jun 2024 • Qianli Shen, Yezhen Wang, Zhouhao Yang, Xiang Li, Haonan Wang, Yang Zhang, Jonathan Scarlett, Zhanxing Zhu, Kenji Kawaguchi

Bi-level optimization (BO) has become a fundamental mathematical framework for addressing hierarchical machine learning problems.

no code implementations • 5 Jun 2024 • Arpan Losalka, Jonathan Scarlett

We consider the problem of sequentially maximizing an unknown function $f$ over a set of actions of the form $(s,\mathbf{x})$, where the selected actions must satisfy a safety constraint with respect to an unknown safety function $g$.

no code implementations • 11 Jan 2024 • Xu Cai, Jonathan Scarlett

In this paper, we study the problem of estimating the normalizing constant $\int e^{-\lambda f(x)}dx$ through queries to the black-box function $f$, where $f$ belongs to a reproducing kernel Hilbert space (RKHS), and $\lambda$ is a problem parameter.
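As a point of reference (not taken from the paper), the quantity $\int e^{-\lambda f(x)}dx$ can be approximated by plain Monte Carlo over a bounded domain; the function `f` below is a hypothetical smooth stand-in, whereas the paper assumes `f` lies in an RKHS and is accessed only through queries:

```python
import numpy as np

# Estimate Z = \int_0^1 e^{-lambda * f(x)} dx by plain Monte Carlo.
# f is an arbitrary smooth stand-in; lam plays the role of the problem
# parameter lambda from the abstract.
rng = np.random.default_rng(0)
lam = 2.0
f = lambda x: np.sin(3 * x) ** 2

xs = rng.uniform(0.0, 1.0, size=100_000)   # uniform samples on [0, 1]
Z_hat = np.exp(-lam * f(xs)).mean()        # volume of [0, 1] is 1
```

More refined methods (e.g. query-efficient kernel-based schemes, which are the paper's subject) aim to beat this baseline's error rate.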

no code implementations • 8 Sep 2023 • Thach V. Bui, Jonathan Scarlett

In this paper, we introduce a variation of the group testing problem capturing the idea that a positive test requires a combination of multiple "types" of item.

no code implementations • 25 Apr 2023 • Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan

We study a distributed stochastic multi-armed bandit where a client supplies the learner with communication-constrained feedback based on the rewards for the corresponding arm pulls.

no code implementations • 10 Nov 2022 • Zihan Li, Jonathan Scarlett

We consider optimizing a function network in the noise-free grey-box setting with RKHS function classes, where the exact intermediate results are observable.

1 code implementation • 3 Nov 2022 • Arpan Losalka, Jonathan Scarlett

We consider the problem of sequentially maximising an unknown function over a set of actions while ensuring that every sampled point has a function value below a given safety threshold.

no code implementations • 4 Oct 2022 • Ivan Lau, Yan Hao Ling, Mayank Shrivastava, Jonathan Scarlett

In this paper, we consider a bandit problem in which there are a number of groups, each consisting of infinitely many arms.

no code implementations • 29 Jun 2022 • Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, Yonina C. Eldar

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution.

1 code implementation • ICLR 2022 • Zhaoqiang Liu, Jiulong Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

We perform experiments on various image datasets for spiked matrix and phase retrieval models, and illustrate performance gains of our method over the classic power method and the truncated power method devised for sparse principal component analysis.

no code implementations • 22 Feb 2022 • Xu Cai, Chi Thanh Lam, Jonathan Scarlett

In this paper, we study error bounds for Bayesian quadrature (BQ), with an emphasis on noisy settings, randomized algorithms, and average-case performance measures.

no code implementations • 8 Feb 2022 • Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia

Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization.

no code implementations • 3 Feb 2022 • Ilija Bogunovic, Zihan Li, Andreas Krause, Jonathan Scarlett

We consider the sequential optimization of an unknown, continuous, and expensive-to-evaluate reward function, from noisy and adversarially corrupted observed rewards.

no code implementations • 17 Nov 2021 • Zhenlin Wang, Jonathan Scarlett

In this paper, we introduce a multi-armed bandit problem termed max-min grouped bandits, in which the arms are arranged in possibly-overlapping groups, and the goal is to find the group whose worst arm has the highest mean reward.
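With the arm means known, the max-min objective reduces to a one-liner; the means and groups below are illustrative, and in the actual bandit setting the means are unknown and must be estimated from noisy pulls:

```python
import numpy as np

# Max-min grouped bandits objective: among (possibly overlapping) groups
# of arms, find the group whose *worst* arm has the highest mean reward.
means = np.array([0.9, 0.2, 0.6, 0.5, 0.8])   # hypothetical arm means
groups = [[0, 1], [2, 3], [0, 4]]             # overlapping groups

# Group value = mean of its worst arm; pick the group maximizing that value.
best = max(range(len(groups)), key=lambda g: min(means[a] for a in groups[g]))
```

Here group `[0, 4]` wins: its worst arm has mean 0.8, beating the worst-arm means 0.2 and 0.5 of the other two groups.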

no code implementations • 28 Oct 2021 • Sattar Vakili, Jonathan Scarlett, Tara Javidi

Confidence intervals are a crucial building block in the analysis of various online learning problems.

1 code implementation • 16 Oct 2021 • Eric Han, Jonathan Scarlett

We focus primarily on targeted attacks on the popular GP-UCB algorithm and a related elimination-based algorithm, based on adversarially perturbing the function $f$ to produce another function $\tilde{f}$ whose optima are in some target region $\mathcal{R}_{\rm target}$.

no code implementations • 15 Oct 2021 • Zihan Li, Jonathan Scarlett

In addition, in the case of a constant number of batches (not depending on $T$), we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches, focusing on the squared exponential and Matérn kernels.

no code implementations • 8 Aug 2021 • Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

In 1-bit compressive sensing, each measurement is quantized to a single bit, namely the sign of a linear function of an unknown vector, and the goal is to accurately recover the vector.

1 code implementation • NeurIPS 2021 • Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett

We also adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional, matching an information-theoretic lower bound.

1 code implementation • 11 Feb 2021 • Xu Cai, Selwyn Gomes, Jonathan Scarlett

In this paper, we study the problem of Gaussian process (GP) bandits under relaxed optimization criteria stating that any function value above a certain threshold is "good enough".

1 code implementation • 24 Dec 2020 • Eric Han, Ishank Arora, Jonathan Scarlett

In addition, we propose a novel zooming-based algorithm that permits generalized additive models to be employed more efficiently in the case of continuous domains.

no code implementations • 20 Aug 2020 • Xu Cai, Jonathan Scarlett

In a robust setting in which every sampled point may be perturbed by a suitably-constrained adversary, we provide a novel lower bound for deterministic strategies, demonstrating an inevitable joint dependence of the cumulative regret on the corruption level and the time horizon, in contrast with existing lower bounds that only characterize the individual dependencies.

no code implementations • 7 Jul 2020 • Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett

We consider a stochastic linear bandit problem in which the rewards are not only subject to random noise, but also adversarial attacks subject to a suitable budget $C$ (i.e., an upper bound on the sum of corruption magnitudes across the time horizon).

no code implementations • NeurIPS 2020 • Zhaoqiang Liu, Jonathan Scarlett

We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models.

no code implementations • 4 Mar 2020 • Ilija Bogunovic, Andreas Krause, Jonathan Scarlett

We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback.

no code implementations • 20 Feb 2020 • Anamay Chaturvedi, Jonathan Scarlett

Graphical model selection in Markov random fields is a fundamental problem in statistics and machine learning.

1 code implementation • ICML 2020 • Zhaoqiang Liu, Selwyn Gomes, Avtansh Tiwari, Jonathan Scarlett

The goal of standard 1-bit compressive sensing is to accurately recover an unknown sparse vector from binary-valued measurements, each indicating the sign of a linear function of the vector.
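The 1-bit measurement model described here can be sketched in a few lines; the dimensions, sparsity level, and Gaussian measurement matrix below are illustrative choices, not values from the paper:

```python
import numpy as np

# 1-bit compressive sensing measurements: y_i = sign(<a_i, x>), where x is
# an unknown s-sparse vector and the rows a_i form the measurement matrix.
rng = np.random.default_rng(1)
n, m, s = 50, 200, 3            # ambient dimension, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)          # only the direction of x is identifiable

A = rng.standard_normal((m, n))
y = np.sign(A @ x)              # each measurement retains only one bit
```

Since the sign function discards all magnitude information, recovery guarantees in this setting are stated up to the norm of `x`.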

no code implementations • 25 Jan 2020 • Zexin Wang, Vincent Y. F. Tan, Jonathan Scarlett

We consider the problem of Bayesian optimization of a one-dimensional Brownian motion in which the $T$ adaptively chosen observations are corrupted by Gaussian noise.

1 code implementation • NeurIPS 2019 • Zihan Li, Matthias Fresacher, Jonathan Scarlett

In this paper, we consider the problem of learning an unknown graph via queries on groups of nodes, with the result indicating whether or not at least one edge is present among those nodes.
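The query model in question can be sketched directly; the five-node hidden graph below is a hypothetical example, not one from the paper:

```python
import itertools

# A single "edge query" on a node group S: does at least one edge of the
# hidden graph lie entirely within S?
edges = {(0, 1), (2, 3)}        # hypothetical hidden graph on nodes 0..4

def query(group):
    # Positive iff some pair of nodes in the group is an edge.
    return any(tuple(sorted(p)) in edges for p in itertools.combinations(group, 2))
```

For instance, `query([0, 1, 4])` is positive (it contains the edge `(0, 1)`), while `query([0, 2, 4])` is negative; the learning problem is to reconstruct `edges` from as few such queries as possible.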

1 code implementation • CVPR 2020 • Abdul Fatir Ansari, Jonathan Scarlett, Harold Soh

In this paper, we formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions.

no code implementations • NeurIPS Workshop Deep_Invers 2019 • Zhaoqiang Liu, Jonathan Scarlett

The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.

no code implementations • 28 Aug 2019 • Zhaoqiang Liu, Jonathan Scarlett

It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model.

1 code implementation • 9 May 2019 • Zihan Li, Matthias Fresacher, Jonathan Scarlett

In this paper, we consider the problem of learning an unknown graph via queries on groups of nodes, with the result indicating whether or not at least one edge is present among those nodes.

no code implementations • 15 Feb 2019 • Matthew Aldridge, Oliver Johnson, Jonathan Scarlett

The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items.
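In the standard noiseless model, a pooled test is positive if and only if the pool contains at least one defective item; the items, defective set, and pools below are illustrative:

```python
# Noiseless group testing: each pooled test reports the OR of the
# defectivity indicators of the items in the pool.
n = 8
defective = {2, 5}                       # hidden defective set
pools = [[0, 1, 2], [3, 4], [5, 6, 7], [0, 3, 6]]

results = [any(i in defective for i in pool) for pool in pools]
```

The goal of a group testing scheme is to design pools (and a decoder) so that `defective` is recoverable from `results` with far fewer tests than individually testing all `n` items.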

Information Theory · Discrete Mathematics · Probability · Statistics Theory

no code implementations • 30 Jan 2019 • Lan V. Truong, Jonathan Scarlett

The support recovery problem consists of determining a sparse subset of variables that is relevant in generating a set of observations.

no code implementations • 2 Jan 2019 • Jonathan Scarlett, Volkan Cevher

Information theory plays an indispensable role in the development of algorithm-independent impossibility results, both for communication problems and for seemingly distinct areas such as statistics and machine learning.

no code implementations • NeurIPS 2018 • Ilija Bogunovic, Jonathan Scarlett, Stefanie Jegelka, Volkan Cevher

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation.

no code implementations • ICML 2018 • Jonathan Scarlett

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise.

no code implementations • 3 May 2018 • Baran Gözcü, Rabeeh Karimi Mahabadi, Yen-Huan Li, Efe Ilıcak, Tolga Çukur, Jonathan Scarlett, Volkan Cevher

In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed that can be used with general Fourier subsampling patterns.

1 code implementation • 20 Feb 2018 • Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher

In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables.

no code implementations • NeurIPS 2017 • Jonathan Scarlett, Volkan Cevher

In this paper, we study the pooled data problem of identifying the labels associated with a large collection of items, based on a sequence of pooled tests revealing the counts of each label within the pool.

no code implementations • ICML 2017 • Ilija Bogunovic, Slobodan Mitrović, Jonathan Scarlett, Volkan Cevher

We study the problem of maximizing a monotone submodular function subject to a cardinality constraint $k$, with the added twist that a number of items $\tau$ from the returned set may be removed.

no code implementations • 31 May 2017 • Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher

For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases.

no code implementations • NeurIPS 2016 • Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion.

no code implementations • 8 Jul 2016 • Jonathan Scarlett, Volkan Cevher

We consider the problem of estimating the underlying graph associated with a Markov random field, with the added twist that the decoding algorithm can iteratively choose which subsets of nodes to sample based on the previous samples, resulting in an active learning setting.

no code implementations • 11 Feb 2016 • Jonathan Scarlett, Volkan Cevher

We adopt an "approximate recovery" criterion that allows for a number of missed edges or incorrectly-included edges, in contrast with the widely-studied exact recovery problem.

no code implementations • 2 Feb 2016 • Jonathan Scarlett, Volkan Cevher

In this paper, we study the information-theoretic limits of community detection in the symmetric two-community stochastic block model, with intra-community and inter-community edge probabilities $\frac{a}{n}$ and $\frac{b}{n}$ respectively.
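A graph from this model can be sampled in a few lines; the values of $n$, $a$, and $b$ below are illustrative:

```python
import numpy as np

# Symmetric two-community stochastic block model: intra-community edges
# appear with probability a/n, inter-community edges with probability b/n.
rng = np.random.default_rng(2)
n, a, b = 200, 20.0, 2.0
labels = np.array([0] * (n // 2) + [1] * (n // 2))   # planted communities

same = labels[:, None] == labels[None, :]            # same-community mask
p = np.where(same, a / n, b / n)                     # edge probabilities
upper = np.triu(rng.random((n, n)) < p, k=1)         # sample upper triangle
adj = upper | upper.T                                # symmetric, no self-loops
```

Community detection asks when `labels` can be recovered (exactly or approximately) from `adj` alone, with the information-theoretic limits expressed in terms of $a$ and $b$.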

no code implementations • 25 Jan 2016 • Ilija Bogunovic, Jonathan Scarlett, Volkan Cevher

We illustrate the performance of the algorithms on both synthetic and real data, and we find the gradual forgetting of TV-GP-UCB to perform favorably compared to the sharp resetting of R-GP-UCB.

no code implementations • 21 Oct 2015 • Luca Baldassarre, Yen-Huan Li, Jonathan Scarlett, Baran Gözcü, Ilija Bogunovic, Volkan Cevher

In this paper, we instead take a principled learning-based approach in which a fixed index set is chosen based on a set of training signals $\mathbf{x}_1,\dotsc,\mathbf{x}_m$.

no code implementations • 29 Jan 2015 • Jonathan Scarlett, Volkan Cevher

In several cases, our bounds not only provide matching scaling laws in the necessary and sufficient number of measurements, but also sharp thresholds with matching constant factors.
