Search Results for author: Jonathan Scarlett

Found 48 papers, 11 papers with code

Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature and Bayesian Optimization

no code implementations 11 Jan 2024 Xu Cai, Jonathan Scarlett

In this paper, we study the problem of estimating the normalizing constant $\int e^{-\lambda f(x)}dx$ through queries to the black-box function $f$, where $f$ belongs to a reproducing kernel Hilbert space (RKHS), and $\lambda$ is a problem parameter.

Bayesian Optimization
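The target quantity can be approximated naively on a grid; the paper's contribution is doing far better with few black-box queries. A minimal baseline sketch, with a hypothetical smooth function standing in for the unknown RKHS member $f$ (function names and the test function are illustrative, not from the paper):

```python
import math

def estimate_normalizing_constant(f, lam, n=10_000):
    """Midpoint-rule estimate of Z = int_0^1 exp(-lam * f(x)) dx.
    A naive grid baseline; the paper studies query-efficient,
    kernel-based estimators of the same quantity."""
    h = 1.0 / n
    return h * sum(math.exp(-lam * f((i + 0.5) * h)) for i in range(n))

# Hypothetical smooth function standing in for the unknown RKHS f.
f = lambda x: (x - 0.3) ** 2
Z = estimate_normalizing_constant(f, lam=2.0)
```

For this particular f the integral has a closed form in terms of the Gauss error function, which the midpoint rule matches to several digits.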

Concomitant Group Testing

no code implementations 8 Sep 2023 Thach V. Bui, Jonathan Scarlett

In this paper, we introduce a variation of the group testing problem capturing the idea that a positive test requires a combination of multiple "types" of item.
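As a toy illustration of that positivity rule (the encoding below is my own assumption, not the paper's notation): a test on a pool is positive only when the pool contains at least one item of every required type.

```python
def concomitant_test(pool, type_sets):
    """Positive iff the pool contains at least one item of each 'type'.
    type_sets: one set of item indices per type."""
    return all(pool & s for s in type_sets)

# Items 0-5; a positive outcome needs one item of type A and one of type B.
type_A, type_B = {0, 1}, {4, 5}
outcome = concomitant_test({1, 2, 4}, [type_A, type_B])  # 1 is type A, 4 is type B
```

A pool missing any one of the types tests negative, which is exactly what distinguishes this model from classical group testing.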

Communication-Constrained Bandits under Additive Gaussian Noise

no code implementations 25 Apr 2023 Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan

We study a distributed stochastic multi-armed bandit where a client supplies the learner with communication-constrained feedback based on the rewards for the corresponding arm pulls.

Regret Bounds for Noise-Free Cascaded Kernelized Bandits

no code implementations 10 Nov 2022 Zihan Li, Jonathan Scarlett

We consider optimizing a function network in the noise-free grey-box setting with RKHS function classes, where the exact intermediate results are observable.

Benefits of Monotonicity in Safe Exploration with Gaussian Processes

1 code implementation 3 Nov 2022 Arpan Losalka, Jonathan Scarlett

We consider the problem of sequentially maximising an unknown function over a set of actions while ensuring that every sampled point has a function value below a given safety threshold.

Gaussian Processes, Safe Exploration

Max-Quantile Grouped Infinite-Arm Bandits

no code implementations 4 Oct 2022 Ivan Lau, Yan Hao Ling, Mayank Shrivastava, Jonathan Scarlett

In this paper, we consider a bandit problem in which there are a number of groups each consisting of infinitely many arms.

Theoretical Perspectives on Deep Learning Methods in Inverse Problems

no code implementations29 Jun 2022 Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, Yonina C. Eldar

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution.

Compressive Sensing, Denoising, +1

Generative Principal Component Analysis

1 code implementation ICLR 2022 Zhaoqiang Liu, Jiulong Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

We perform experiments on various image datasets for spiked matrix and phase retrieval models, and illustrate performance gains of our method to the classic power method and the truncated power method devised for sparse principal component analysis.

Retrieval

On Average-Case Error Bounds for Kernel-Based Bayesian Quadrature

no code implementations 22 Feb 2022 Xu Cai, Chi Thanh Lam, Jonathan Scarlett

In this paper, we study error bounds for Bayesian quadrature (BQ), with an emphasis on noisy settings, randomized algorithms, and average-case performance measures.

Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning

no code implementations 8 Feb 2022 Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia

Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization.

Gaussian Processes, Regression

A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits

no code implementations 3 Feb 2022 Ilija Bogunovic, Zihan Li, Andreas Krause, Jonathan Scarlett

We consider the sequential optimization of an unknown, continuous, and expensive to evaluate reward function, from noisy and adversarially corrupted observed rewards.

Max-Min Grouped Bandits

no code implementations 17 Nov 2021 Zhenlin Wang, Jonathan Scarlett

In this paper, we introduce a multi-armed bandit problem termed max-min grouped bandits, in which the arms are arranged in possibly-overlapping groups, and the goal is to find the group whose worst arm has the highest mean reward.

Recommendation Systems
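With known means the max-min objective is simple to state; the bandit difficulty lies entirely in estimating the means from noisy arm pulls. A sketch of the target quantity (names and the example means are hypothetical):

```python
def best_group(group_means):
    """Return the index of the group whose worst arm has the highest mean
    (the max-min objective; means are known here, unlike in the bandit)."""
    return max(range(len(group_means)), key=lambda g: min(group_means[g]))

groups = [[0.9, 0.2], [0.6, 0.5], [0.7, 0.4]]  # hypothetical arm means per group
winner = best_group(groups)  # group 1: its worst arm (0.5) beats 0.2 and 0.4
```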

Open Problem: Tight Online Confidence Intervals for RKHS Elements

no code implementations 28 Oct 2021 Sattar Vakili, Jonathan Scarlett, Tara Javidi

Confidence intervals are a crucial building block in the analysis of various online learning problems.

Reinforcement Learning (RL)

Adversarial Attacks on Gaussian Process Bandits

1 code implementation 16 Oct 2021 Eric Han, Jonathan Scarlett

We focus primarily on targeted attacks on the popular GP-UCB algorithm and a related elimination-based algorithm, based on adversarially perturbing the function $f$ to produce another function $\tilde{f}$ whose optima are in some target region $\mathcal{R}_{\rm target}$.

Adversarial Attack, Gaussian Processes
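One crude way to realize such a perturbation (my own toy construction, not the paper's attack) is to subtract a penalty proportional to the distance from the target region, so that the perturbed function's maximizer is pulled into $\mathcal{R}_{\rm target}$:

```python
def perturb(f, target, penalty=10.0):
    """Return f_tilde(x) = f(x) - penalty * dist(x, target interval).
    For a large enough penalty, the maximizer of f_tilde lies in target."""
    lo, hi = target
    def dist(x):
        return max(lo - x, 0.0, x - hi)  # 0 inside the interval
    return lambda x: f(x) - penalty * dist(x)

f = lambda x: -(x - 0.2) ** 2            # hypothetical objective, optimum at 0.2
f_tilde = perturb(f, target=(0.6, 0.8))
grid = [i / 1000 for i in range(1001)]
x_star = max(grid, key=f_tilde)          # maximizer of the perturbed function
```

The attack in the paper must additionally keep the perturbation small in RKHS norm; this sketch ignores that constraint.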

Gaussian Process Bandit Optimization with Few Batches

no code implementations 15 Oct 2021 Zihan Li, Jonathan Scarlett

In addition, in the case of a constant number of batches (not depending on $T$), we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches, focusing on the squared exponential and Matérn kernels.

Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors

no code implementations 8 Aug 2021 Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

In 1-bit compressive sensing, each measurement is quantized to a single bit, namely the sign of a linear function of an unknown vector, and the goal is to accurately recover the vector.

Compressive Sensing
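The measurement model is easy to state, and a short sketch also shows why magnitude information is lost (x and 2x yield identical sign measurements), which is what makes 1-bit recovery subtle. All names below are illustrative:

```python
import random

def one_bit_measurements(x, m, seed=0):
    """y_i = sign(<a_i, x>) for i.i.d. Gaussian vectors a_i: the 1-bit CS model."""
    rng = random.Random(seed)
    A, y = [], []
    for _ in range(m):
        a = [rng.gauss(0.0, 1.0) for _ in x]
        ip = sum(ai * xi for ai, xi in zip(a, x))
        A.append(a)
        y.append(1 if ip >= 0 else -1)
    return A, y

x = [1.0, 0.0, -2.0, 0.0]                          # hypothetical 2-sparse signal
A, y = one_bit_measurements(x, m=8)
_, y_scaled = one_bit_measurements([2 * v for v in x], m=8)
```

Since y equals y_scaled, at best the direction x/||x|| can be recovered, never the norm.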

Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors

1 code implementation NeurIPS 2021 Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett

We also adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional, matching an information-theoretic lower bound.

Compressive Sensing, Retrieval

Lenient Regret and Good-Action Identification in Gaussian Process Bandits

1 code implementation 11 Feb 2021 Xu Cai, Selwyn Gomes, Jonathan Scarlett

In this paper, we study the problem of Gaussian process (GP) bandits under relaxed optimization criteria stating that any function value above a certain threshold is "good enough".

High-Dimensional Bayesian Optimization via Tree-Structured Additive Models

1 code implementation 24 Dec 2020 Eric Han, Ishank Arora, Jonathan Scarlett

In addition, we propose a novel zooming-based algorithm that permits generalized additive models to be employed more efficiently in the case of continuous domains.

Additive Models, Bayesian Optimization, +2

On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization

no code implementations 20 Aug 2020 Xu Cai, Jonathan Scarlett

In a robust setting in which every sampled point may be perturbed by a suitably-constrained adversary, we provide a novel lower bound for deterministic strategies, demonstrating an inevitable joint dependence of the cumulative regret on the corruption level and the time horizon, in contrast with existing lower bounds that only characterize the individual dependencies.

Stochastic Linear Bandits Robust to Adversarial Attacks

no code implementations 7 Jul 2020 Ilija Bogunovic, Arpan Losalka, Andreas Krause, Jonathan Scarlett

We consider a stochastic linear bandit problem in which the rewards are not only subject to random noise, but also adversarial attacks subject to a suitable budget $C$ (i.e., an upper bound on the sum of corruption magnitudes across the time horizon).

The Generalized Lasso with Nonlinear Observations and Generative Priors

no code implementations NeurIPS 2020 Zhaoqiang Liu, Jonathan Scarlett

We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models.

Corruption-Tolerant Gaussian Process Bandit Optimization

no code implementations 4 Mar 2020 Ilija Bogunovic, Andreas Krause, Jonathan Scarlett

We consider the problem of optimizing an unknown (typically non-convex) function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), based on noisy bandit feedback.

Learning Gaussian Graphical Models via Multiplicative Weights

no code implementations 20 Feb 2020 Anamay Chaturvedi, Jonathan Scarlett

Graphical model selection in Markov random fields is a fundamental problem in statistics and machine learning.

Model Selection

Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors

1 code implementation ICML 2020 Zhaoqiang Liu, Selwyn Gomes, Avtansh Tiwari, Jonathan Scarlett

The goal of standard 1-bit compressive sensing is to accurately recover an unknown sparse vector from binary-valued measurements, each indicating the sign of a linear function of the vector.

Compressive Sensing

Tight Regret Bounds for Noisy Optimization of a Brownian Motion

no code implementations 25 Jan 2020 Zexin Wang, Vincent Y. F. Tan, Jonathan Scarlett

We consider the problem of Bayesian optimization of a one-dimensional Brownian motion in which the $T$ adaptively chosen observations are corrupted by Gaussian noise.

Bayesian Optimization, Two-Sample Testing

Learning Erdős-Rényi Random Graphs via Edge Detecting Queries

1 code implementation NeurIPS 2019 Zihan Li, Matthias Fresacher, Jonathan Scarlett

In this paper, we consider the problem of learning an unknown graph via queries on groups of nodes, with the result indicating whether or not at least one edge is present among those nodes.
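A sketch of the query model: the oracle reveals whether a node subset contains at least one edge, and a naive learner recovers the graph with one query per pair; the paper's point is that far fewer group queries suffice for sparse Erdős–Rényi graphs. Names below are illustrative:

```python
from itertools import combinations

def make_oracle(edges):
    """Edge-detecting query: does the queried node set contain >= 1 edge?"""
    edge_set = {frozenset(e) for e in edges}
    return lambda nodes: any(frozenset(p) in edge_set
                             for p in combinations(sorted(nodes), 2))

def learn_graph_pairwise(n, query):
    """Naive baseline: query every pair individually (n choose 2 queries)."""
    return {frozenset(p) for p in combinations(range(n), 2) if query(set(p))}

oracle = make_oracle([(0, 1), (2, 4)])   # hypothetical hidden graph on 5 nodes
learned = learn_graph_pairwise(5, oracle)
```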

A Characteristic Function Approach to Deep Implicit Generative Modeling

1 code implementation CVPR 2020 Abdul Fatir Ansari, Jonathan Scarlett, Harold Soh

In this paper, we formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions.

Image Generation

Sample Complexity Lower Bounds for Compressive Sensing with Generative Models

no code implementations NeurIPS Workshop Deep_Invers 2019 Zhaoqiang Liu, Jonathan Scarlett

The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.

Compressive Sensing
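For intuition about the standard setting, the simplest instance: a 1-sparse vector can be recovered from underdetermined linear measurements by a single matching-pursuit step. This hand-rolled sketch with a hand-picked measurement matrix is unrelated to the paper's generative-model analysis:

```python
def recover_1_sparse(A, y):
    """One step of matching pursuit: pick the column best aligned with y,
    then solve for its coefficient. Exact when x is 1-sparse and the
    chosen column is the true support."""
    m, n = len(A), len(A[0])
    def col(j):
        return [A[i][j] for i in range(m)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    j_star = max(range(n), key=lambda j: abs(dot(col(j), y)))
    c = col(j_star)
    x_hat = [0.0] * n
    x_hat[j_star] = dot(c, y) / dot(c, c)
    return x_hat

# 3 measurements of a 4-dimensional 1-sparse vector x = (0, 0, 0, 2).
A = [[1.0, 0.0, 0.0, 0.6],
     [0.0, 1.0, 0.0, 0.8],
     [0.0, 0.0, 1.0, 0.0]]
y = [1.2, 1.6, 0.0]                      # y = A x
x_hat = recover_1_sparse(A, y)           # recovers approximately [0, 0, 0, 2]
```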

Information-Theoretic Lower Bounds for Compressive Sensing with Generative Models

no code implementations 28 Aug 2019 Zhaoqiang Liu, Jonathan Scarlett

It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption the unknown vector lies near the range of a suitably-chosen generative model.

Compressive Sensing

Group Testing: An Information Theory Perspective

no code implementations 15 Feb 2019 Matthew Aldridge, Oliver Johnson, Jonathan Scarlett

The group testing problem concerns discovering a small number of defective items within a large population by performing tests on pools of items.

Information Theory, Discrete Mathematics, Probability, Statistics Theory
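The classic baseline decoder in this literature is COMP (combinatorial orthogonal matching pursuit): every item appearing in a negative test is definitely non-defective, and everything else is declared defective. A self-contained sketch, with a hypothetical test design:

```python
def comp_decode(n_items, tests):
    """COMP decoder for group testing.
    tests: list of (pool, outcome) pairs, where outcome is True iff the
    pool contains at least one defective item."""
    cleared = set()
    for pool, positive in tests:
        if not positive:
            cleared |= set(pool)   # every item in a negative test is clean
    return set(range(n_items)) - cleared

# Hypothetical design: 6 items, true defectives {1, 4}.
tests = [({0, 1, 2}, True), ({3, 4, 5}, True),
         ({0, 2, 3}, False), ({0, 2, 5}, False)]
decoded = comp_decode(6, tests)    # {1, 4}
```

COMP never misses a defective; its errors are only false positives, which more tests (or sharper decoders) drive down.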

Support Recovery in the Phase Retrieval Model: Information-Theoretic Fundamental Limits

no code implementations 30 Jan 2019 Lan V. Truong, Jonathan Scarlett

The support recovery problem consists of determining a sparse subset of variables that is relevant in generating a set of observations.

Retrieval

An Introductory Guide to Fano's Inequality with Applications in Statistical Estimation

no code implementations 2 Jan 2019 Jonathan Scarlett, Volkan Cevher

Information theory plays an indispensable role in the development of algorithm-independent impossibility results, both for communication problems and for seemingly distinct areas such as statistics and machine learning.

Density Estimation, Model Selection, +1
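The centerpiece of that guide can be stated compactly. For a uniformly distributed index $V \in \{1,\dots,M\}$ estimated as $\hat{V}$ from observations $Y$, Fano's inequality gives the standard form used in such impossibility results:

```latex
\mathbb{P}[\hat{V} \ne V] \;\ge\; 1 - \frac{I(V;Y) + \log 2}{\log M}
```

so proving an estimation lower bound reduces to upper-bounding the mutual information $I(V;Y)$ over a well-chosen family of $M$ hypotheses.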

Adversarially Robust Optimization with Gaussian Processes

no code implementations NeurIPS 2018 Ilija Bogunovic, Jonathan Scarlett, Stefanie Jegelka, Volkan Cevher

In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: The returned point may be perturbed by an adversary, and we require the function value to remain as high as possible even after this perturbation.

Gaussian Processes

Tight Regret Bounds for Bayesian Optimization in One Dimension

no code implementations ICML 2018 Jonathan Scarlett

We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise.

Bayesian Optimization

Learning-Based Compressive MRI

no code implementations 3 May 2018 Baran Gözcü, Rabeeh Karimi Mahabadi, Yen-Huan Li, Efe Ilıcak, Tolga Çukur, Jonathan Scarlett, Volkan Cevher

In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms have been proposed that can be used with general Fourier subsampling patterns.

Anatomy, Learning Theory

High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups

1 code implementation 20 Feb 2018 Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher

In this paper, we consider the approach of Kandasamy et al. (2015), in which the high-dimensional function decomposes as a sum of lower-dimensional functions on subsets of the underlying variables.

Additive Models, Bayesian Optimization, +2

Phase Transitions in the Pooled Data Problem

no code implementations NeurIPS 2017 Jonathan Scarlett, Volkan Cevher

In this paper, we study the pooled data problem of identifying the labels associated with a large collection of items, based on a sequence of pooled tests revealing the counts of each label within the pool.

Robust Submodular Maximization: A Non-Uniform Partitioning Approach

no code implementations ICML 2017 Ilija Bogunovic, Slobodan Mitrović, Jonathan Scarlett, Volkan Cevher

We study the problem of maximizing a monotone submodular function subject to a cardinality constraint $k$, with the added twist that a number of items $\tau$ from the returned set may be removed.

Data Summarization
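The non-robust baseline this line of work builds on is the classic greedy algorithm, which is (1 - 1/e)-optimal for monotone submodular maximization under a cardinality constraint. A coverage-function sketch (the paper's robust variant instead partitions the budget to tolerate the removal of tau items):

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of the monotone submodular coverage function
    f(S) = |union of chosen sets|, subject to choosing at most k sets."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))  # marginal gain
        if not sets[best] - covered:
            break                                         # no gain left
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]   # hypothetical ground sets
chosen, covered = greedy_max_coverage(sets, k=2)   # picks sets 2 then 0
```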

Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization

no code implementations 31 May 2017 Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher

For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases.

Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation

no code implementations NeurIPS 2016 Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, Volkan Cevher

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion.

Bayesian Optimization, Gaussian Processes

Lower Bounds on Active Learning for Graphical Model Selection

no code implementations 8 Jul 2016 Jonathan Scarlett, Volkan Cevher

We consider the problem of estimating the underlying graph associated with a Markov random field, with the added twist that the decoding algorithm can iteratively choose which subsets of nodes to sample based on the previous samples, resulting in an active learning setting.

Active Learning, Model Selection

On the Difficulty of Selecting Ising Models with Approximate Recovery

no code implementations 11 Feb 2016 Jonathan Scarlett, Volkan Cevher

We adopt an approximate recovery criterion that allows for a number of missed edges or incorrectly-included edges, in contrast with the widely-studied exact recovery problem.

Partial Recovery Bounds for the Sparse Stochastic Block Model

no code implementations 2 Feb 2016 Jonathan Scarlett, Volkan Cevher

In this paper, we study the information-theoretic limits of community detection in the symmetric two-community stochastic block model, with intra-community and inter-community edge probabilities $\frac{a}{n}$ and $\frac{b}{n}$ respectively.

Community Detection, Stochastic Block Model

Time-Varying Gaussian Process Bandit Optimization

no code implementations 25 Jan 2016 Ilija Bogunovic, Jonathan Scarlett, Volkan Cevher

We illustrate the performance of the algorithms on both synthetic and real data, and we find the gradual forgetting of TV-GP-UCB to perform favorably compared to the sharp resetting of R-GP-UCB.

Bayesian Optimization

Learning-based Compressive Subsampling

no code implementations 21 Oct 2015 Luca Baldassarre, Yen-Huan Li, Jonathan Scarlett, Baran Gözcü, Ilija Bogunovic, Volkan Cevher

In this paper, we instead take a principled learning-based approach in which a fixed index set is chosen based on a set of training signals $\mathbf{x}_1,\dotsc,\mathbf{x}_m$.

Combinatorial Optimization

Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework

no code implementations 29 Jan 2015 Jonathan Scarlett, Volkan Cevher

In several cases, our bounds not only provide matching scaling laws in the necessary and sufficient number of measurements, but also sharp thresholds with matching constant factors.

Compressive Sensing
