Search Results for author: Robert Nowak

Found 47 papers, 9 papers with code

Efficient Active Learning with Abstention

no code implementations 31 Mar 2022 Yinglun Zhu, Robert Nowak

Furthermore, the algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term "proper abstention" that also leads to a host of other desirable characteristics.

Active Learning
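As a quick illustration of the "proper abstention" idea above (abstain only where the estimated label distribution is close to a fair coin), here is a minimal sketch; the `margin` threshold and function name are made up for illustration, not the paper's algorithm:

```python
import numpy as np

def predict_with_abstention(eta, margin=0.1):
    """Predict 1/0 from estimated class-1 probabilities `eta`, but
    abstain (return -1) when eta is within `margin` of a fair coin."""
    eta = np.asarray(eta, dtype=float)
    preds = (eta >= 0.5).astype(int)
    preds[np.abs(eta - 0.5) < margin] = -1  # abstain only on hard examples
    return preds

out = predict_with_abstention([0.95, 0.55, 0.48, 0.10])
print(out)  # abstains on the two near-coin-flip examples
```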

ReVar: Strengthening Policy Evaluation via Reduced Variance Sampling

no code implementations 9 Mar 2022 Subhojyoti Mukherjee, Josiah P. Hanna, Robert Nowak

This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs).

Training OOD Detectors in their Natural Habitats

no code implementations 7 Feb 2022 Julian Katz-Samuels, Julia Nakhleh, Robert Nowak, Yixuan Li

Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.

OOD Detection

GALAXY: Graph-based Active Learning at the Extreme

1 code implementation 3 Feb 2022 Jifan Zhang, Julian Katz-Samuels, Robert Nowak

Active learning is a label-efficient approach to train highly effective models while interactively selecting only small subsets of unlabelled data for labelling and training.

Active Learning

Nearly Optimal Algorithms for Level Set Estimation

no code implementations 2 Nov 2021 Blake Mason, Romain Camilleri, Subhojyoti Mukherjee, Kevin Jamieson, Robert Nowak, Lalit Jain

The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown.

Experimental Design
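The implicit threshold from the abstract is easy to illustrate once $f$ is known: the level set is just $\{x : f(x) \ge (1-\epsilon) f(x_\ast)\}$ (in the paper $f$ must be estimated from noisy samples; this sketch assumes known function values and, for the multiplicative threshold to make sense, $\max f > 0$):

```python
import numpy as np

def implicit_superlevel_set(f_values, eps):
    """Superlevel set {x : f(x) >= alpha} for the implicit threshold
    alpha = (1 - eps) * max f, as defined in the abstract."""
    f_values = np.asarray(f_values, dtype=float)
    alpha = (1 - eps) * f_values.max()
    return np.flatnonzero(f_values >= alpha), alpha

idx, alpha = implicit_superlevel_set([0.2, 0.9, 1.0, 0.95, 0.5], eps=0.1)
print(idx, alpha)  # [1 2 3] 0.9
```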

Near Instance Optimal Model Selection for Pure Exploration Linear Bandits

no code implementations 10 Sep 2021 Yinglun Zhu, Julian Katz-Samuels, Robert Nowak

The core of our algorithms is a new optimization problem based on experimental design that leverages the geometry of the action set to identify a near-optimal hypothesis class.

Experimental Design, Model Selection

Pure Exploration in Kernel and Neural Bandits

no code implementations NeurIPS 2021 Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert Nowak

To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecification.

Nearest Neighbor Search Under Uncertainty

no code implementations 8 Mar 2021 Blake Mason, Ardhendu Tripathy, Robert Nowak

Specifically, consider the setting in which an NNS algorithm has access only to a stochastic distance oracle that provides a noisy, unbiased estimate of the distance between any pair of points, rather than the exact distance.

Multi-Armed Bandits, Representation Learning
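The stochastic distance oracle from the abstract can be mocked up directly. The sketch below uses a naive non-adaptive baseline (average many oracle calls per candidate), not the paper's adaptive algorithm; the noise level and repeat count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_oracle(points, i, j, sigma=0.5):
    """Stochastic distance oracle: noisy, unbiased estimate of d(i, j)."""
    return np.linalg.norm(points[i] - points[j]) + rng.normal(0, sigma)

def nearest_neighbor(points, query_idx, repeats=200):
    """Estimate the nearest neighbor of one point by averaging repeated
    oracle calls for every candidate (non-adaptive baseline)."""
    n = len(points)
    est = np.array([
        np.mean([noisy_oracle(points, query_idx, j) for _ in range(repeats)])
        if j != query_idx else np.inf
        for j in range(n)
    ])
    return int(np.argmin(est))

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [3.0, -4.0]])
nn = nearest_neighbor(pts, 0)
print(nn)  # point 1 is the true nearest neighbor of point 0
```

Averaging drives the standard error of each estimate down as $\sigma/\sqrt{\text{repeats}}$; the adaptive algorithms in the paper spend far fewer queries on clearly distant candidates.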

Pareto Optimal Model Selection in Linear Bandits

no code implementations 12 Feb 2021 Yinglun Zhu, Robert Nowak

In this paper, we establish the first lower bound for the model selection problem.

Model Selection

Chernoff Sampling for Active Testing and Extension to Active Regression

no code implementations 15 Dec 2020 Subhojyoti Mukherjee, Ardhendu Tripathy, Robert Nowak

Active learning can reduce the number of samples needed to perform a hypothesis test and to estimate the parameters of a model.

Active Learning, Experimental Design

Finding All $\epsilon$-Good Arms in Stochastic Bandits

no code implementations NeurIPS 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

The pure-exploration problem in stochastic multi-armed bandits aims to find one or more arms with the largest (or near largest) means.

Multi-Armed Bandits
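For the additive version of the objective above, the target set is every arm whose mean is within $\epsilon$ of the largest. The sketch below assumes the means are known, so it only computes the set the algorithm must identify; the paper's contribution is identifying it adaptively from samples:

```python
import numpy as np

def all_eps_good(means, eps):
    """Indices of all additively eps-good arms: those whose mean is
    within eps of the largest mean."""
    means = np.asarray(means, dtype=float)
    return np.flatnonzero(means >= means.max() - eps)

good = all_eps_good([0.3, 0.92, 0.95, 0.88, 0.5], eps=0.05)
print(good)  # arms 1 and 2 are eps-good
```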

Robust Outlier Arm Identification

1 code implementation ICML 2020 Yinglun Zhu, Sumeet Katariya, Robert Nowak

We study the problem of Robust Outlier Arm Identification (ROAI), where the goal is to identify arms whose expected rewards deviate substantially from the majority, by adaptively sampling from their reward distributions.

Outlier Detection

On Regret with Multiple Best Arms

no code implementations NeurIPS 2020 Yinglun Zhu, Robert Nowak

With additional knowledge of the expected reward of the best arm, we propose another adaptive algorithm that is minimax optimal, up to polylog factors, over all hardness levels.

Finding All ε-Good Arms in Stochastic Bandits

1 code implementation 16 Jun 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

Mathematically, the all-{\epsilon}-good arm identification problem presents significant new challenges and surprises that do not arise in the pure-exploration objectives studied in the past.

Multi-Armed Bandits

Should Adversarial Attacks Use Pixel p-Norm?

no code implementations 6 Jun 2019 Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert Nowak

Adversarial attacks aim to confound machine learning systems, while remaining virtually imperceptible to humans.

Adversarial Attack, General Classification +1

MaxGap Bandit: Adaptive Algorithms for Approximate Ranking

1 code implementation NeurIPS 2019 Sumeet Katariya, Ardhendu Tripathy, Robert Nowak

This paper studies the problem of adaptively sampling from K distributions (arms) in order to identify the largest gap between any two adjacent means.

Outlier Detection
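The quantity the MaxGap bandit targets is simple to state: sort the means and take the largest gap between neighbors. The sketch below computes that target from known means (in the bandit setting the means are unknown and must be estimated adaptively):

```python
import numpy as np

def max_adjacent_gap(means):
    """Largest gap between adjacent means after sorting, plus the pair
    of sorted values achieving it (the MaxGap target quantity)."""
    s = np.sort(np.asarray(means, dtype=float))
    gaps = np.diff(s)
    k = int(np.argmax(gaps))
    return gaps[k], (s[k], s[k + 1])

gap, pair = max_adjacent_gap([0.1, 0.8, 0.15, 0.45])
print(gap, pair)  # largest gap 0.35, between 0.45 and 0.8
```

Splitting the arms at this gap yields the coarse two-cluster ranking the paper uses as motivation.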

Learning Nearest Neighbor Graphs from Noisy Distance Samples

1 code implementation NeurIPS 2019 Blake Mason, Ardhendu Tripathy, Robert Nowak

We consider the problem of learning the nearest neighbor graph of a dataset of n items.

Linear Bandits with Feature Feedback

no code implementations 9 Mar 2019 Urvashi Oswal, Aniruddha Bhargava, Robert Nowak

In comparison, the regret of traditional linear bandits is $d\sqrt{T}$, where $d$ is the total number of (relevant and irrelevant) features, so the improvement can be dramatic if $k\ll d$.
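The back-of-the-envelope comparison implied by the abstract: if the $d$-dependence of the $d\sqrt{T}$ baseline is replaced by a dependence on the number $k$ of relevant features, the savings factor is roughly $d/k$. The sizes below are hypothetical, and the $k\sqrt{T}$ line is an illustrative stand-in for "depends on $k$ rather than $d$", not the paper's exact rate:

```python
import math

d, k, T = 10_000, 10, 1_000_000  # hypothetical problem sizes with k << d

baseline = d * math.sqrt(T)        # classical linear-bandit scaling d*sqrt(T)
with_feedback = k * math.sqrt(T)   # if the d-dependence drops to k
ratio = baseline / with_feedback   # improvement factor d/k

print(baseline, with_feedback, ratio)  # 1000x smaller regret scaling
```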

Bilinear Bandits with Low-rank Structure

no code implementations 8 Jan 2019 Kwang-Sung Jun, Rebecca Willett, Stephen Wright, Robert Nowak

We introduce the bilinear bandit problem with low-rank structure in which an action takes the form of a pair of arms from two different entity types, and the reward is a bilinear function of the known feature vectors of the arms.

Scalable Sparse Subspace Clustering via Ordered Weighted $\ell_1$ Regression

no code implementations 10 Jul 2018 Urvashi Oswal, Robert Nowak

The main contribution of the paper is a new approach to subspace clustering that is significantly more computationally efficient and scalable than existing state-of-the-art methods.

Teacher Improves Learning by Selecting a Training Subset

no code implementations 25 Feb 2018 Yuzhe Ma, Robert Nowak, Philippe Rigollet, Xuezhou Zhang, Xiaojin Zhu

We call a learner super-teachable if a teacher can trim down an iid training set while making the learner learn even better.

General Classification

Adaptive Sampling for Coarse Ranking

1 code implementation 20 Feb 2018 Sumeet Katariya, Lalit Jain, Nandana Sengupta, James Evans, Robert Nowak

We consider the problem of active coarse ranking, where the goal is to sort items according to their means into clusters of pre-specified sizes, by adaptively sampling from their reward distributions.

Random Consensus Robust PCA

1 code implementation AISTATS, Electronic Journal of Statistics 2017 Daniel Pimentel-Alarcon, Robert Nowak

This paper presents r2pca, a random consensus method for robust principal component analysis.

Learning Low-Dimensional Metrics

no code implementations NeurIPS 2017 Lalit Jain, Blake Mason, Robert Nowak

This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy of the learned metric relative to the underlying true generative metric.

Metric Learning

Coalescent-based species tree estimation: a stochastic Farris transform

no code implementations 13 Jul 2017 Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch

As a corollary, we also obtain a new identifiability result of independent interest: for any species tree with $n \geq 3$ species, the rooted species tree can be identified from the distribution of its unrooted weighted gene trees even in the absence of a molecular clock.

Scalable Generalized Linear Bandits: Online Computation and Hashing

no code implementations NeurIPS 2017 Kwang-Sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett

Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search.

online learning

Graph-Based Active Learning: A New Look at Expected Error Minimization

no code implementations 3 Sep 2016 Kwang-Sung Jun, Robert Nowak

In graph-based active learning, algorithms based on expected error minimization (EEM) have been popular and yield good empirical performance.

Active Learning

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding

no code implementations NeurIPS 2016 Lalit Jain, Kevin Jamieson, Robert Nowak

First, we derive prediction error bounds for ordinal embedding with noise by exploiting the fact that the rank of a distance matrix of points in $\mathbb{R}^d$ is at most $d+2$.
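The rank fact quoted above is a quick numerical check away. It applies to the matrix of squared Euclidean distances, which decomposes as two rank-1 terms plus a rank-$\le d$ Gram term, so its rank is at most $d+2$ regardless of the number of points:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3
X = rng.normal(size=(n, d))

# Squared distance matrix: D_ij = |x_i|^2 + |x_j|^2 - 2 x_i . x_j,
# i.e. two rank-1 terms plus -2 * (rank <= d Gram matrix).
sq = (X ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2 * X @ X.T

r = np.linalg.matrix_rank(D)
print(r)  # at most d + 2 = 5, even though D is 20 x 20
```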

Active Algorithms For Preference Learning Problems with Multiple Populations

no code implementations 14 Mar 2016 Aniruddha Bhargava, Ravi Ganti, Robert Nowak

In this paper we model the problem of learning preferences of a population as an active learning problem.

Active Learning

On Learning High Dimensional Structured Single Index Models

no code implementations 13 Mar 2016 Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features.

Learning Single Index Models in High Dimensions

no code implementations 30 Jun 2015 Ravi Ganti, Nikhil Rao, Rebecca M. Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for classification and regression.

General Classification

Sparse Dueling Bandits

no code implementations 31 Jan 2015 Kevin Jamieson, Sumeet Katariya, Atul Deshpande, Robert Nowak

We prove that in the absence of structural assumptions, the sample complexity of this problem is proportional to the sum of the inverse squared gaps between the Borda scores of each suboptimal arm and the best arm.
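The Borda score of an arm is its probability of beating an arm chosen uniformly at random from the others; the abstract's unstructured sample-complexity scaling sums the inverse squared gaps to the best Borda score. A sketch with a hypothetical pairwise preference matrix:

```python
import numpy as np

# Hypothetical preference matrix: P[i, j] = Pr(arm i beats arm j).
P = np.array([
    [0.5, 0.8, 0.9],
    [0.2, 0.5, 0.6],
    [0.1, 0.4, 0.5],
])
n = P.shape[0]

# Borda score: probability of beating a uniformly random other arm
# (drop the self-comparison P[i, i] = 0.5 from each row sum).
borda = (P.sum(axis=1) - 0.5) / (n - 1)
gaps = borda.max() - borda

# Unstructured sample-complexity scaling from the abstract:
# sum over suboptimal arms of 1 / gap^2.
complexity = sum(1.0 / g**2 for g in gaps if g > 0)
print(borda, complexity)
```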

Data Requirement for Phylogenetic Inference from Multiple Loci: A New Distance Method

no code implementations 28 Apr 2014 Gautam Dasarathy, Robert Nowak, Sebastien Roch

We consider the problem of estimating the evolutionary history of a set of species (phylogeny or species tree) from several genes.

Classification with Sparse Overlapping Groups

no code implementations 18 Feb 2014 Nikhil Rao, Robert Nowak, Christopher Cox, Timothy Rogers

In this paper, we are interested in a less restrictive form of structured sparse feature selection: we assume that while features can be grouped according to some notion of similarity, not all features in a group need be selected for the task at hand.

Classification, feature selection +2

lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits

no code implementations 27 Dec 2013 Kevin Jamieson, Matthew Malloy, Robert Nowak, Sébastien Bubeck

The paper proposes a novel upper confidence bound (UCB) procedure for identifying the arm with the largest mean in a multi-armed bandit game in the fixed confidence setting using a small number of total samples.

Multi-Armed Bandits
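A toy simulation of the idea: sample the arm with the highest upper confidence bound, where the exploration term grows only like a log-log ("lil", law of the iterated logarithm) factor. The constants below are simplified stand-ins, not the paper's exact bound or stopping rule:

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.9, 0.2, 0.1])   # hypothetical arms; arm 0 is best
delta, sigma = 0.05, 0.25

counts = np.ones(len(means))
sums = np.array([rng.normal(m, sigma) for m in means])  # one pull each

for _ in range(2000):
    # UCB with a log-log exploration term (simplified lil'UCB-style bound;
    # note log(log(e * t)) >= 0 for all t >= 1).
    bonus = sigma * np.sqrt(
        2 * (1 + np.log(np.log(np.e * counts))) * np.log(1 / delta) / counts
    )
    i = int(np.argmax(sums / counts + bonus))
    counts[i] += 1
    sums[i] += rng.normal(means[i], sigma)

best_arm = int(np.argmax(sums / counts))
print(best_arm)  # identifies arm 0 with this seed and gap
```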

Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

no code implementations NeurIPS 2013 Nikhil Rao, Christopher Cox, Robert Nowak, Timothy Rogers

In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks.

On Finding the Largest Mean Among Many

no code implementations 17 Jun 2013 Kevin Jamieson, Matthew Malloy, Robert Nowak, Sebastien Bubeck

Motivated by large-scale applications, we are especially interested in identifying situations where the total number of samples that are necessary and sufficient to find the best arm scale linearly with the number of arms.

Multi-Armed Bandits

Query Complexity of Derivative-Free Optimization

no code implementations NeurIPS 2012 Kevin G. Jamieson, Robert Nowak, Ben Recht

Moreover, if the function evaluations are noisy, then approximating gradients by finite differences is difficult.


Online Identification and Tracking of Subspaces from Highly Incomplete Information

1 code implementation 21 Jun 2010 Laura Balzano, Robert Nowak, Benjamin Recht

GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.

Matrix Completion

Noisy Generalized Binary Search

no code implementations NeurIPS 2009 Robert Nowak

This paper addresses the problem of noisy Generalized Binary Search (GBS).

Active Learning

Unlabeled data: Now it helps, now it doesn't

no code implementations NeurIPS 2008 Aarti Singh, Robert Nowak, Jerry Zhu

We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.
