Search Results for author: Robert Nowak

Found 59 papers, 13 papers with code

Unlabeled data: Now it helps, now it doesn't

no code implementations NeurIPS 2008 Aarti Singh, Robert Nowak, Jerry Zhu

We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.

Online Identification and Tracking of Subspaces from Highly Incomplete Information

1 code implementation 21 Jun 2010 Laura Balzano, Robert Nowak, Benjamin Recht

GROUSE performs exceptionally well in practice both in tracking subspaces and as an online algorithm for matrix completion.

Matrix Completion
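
For intuition about how GROUSE works, here is a minimal numpy sketch of its rank-one subspace update from a partially observed vector; the function name and the fixed step size `eta` are illustrative assumptions, not the paper's exact tuning.

```python
import numpy as np

def grouse_step(U, idx, v_obs, eta=0.1):
    """One GROUSE update of an orthonormal basis U (n x d) from a vector
    observed only on the index set idx. Sketch with a fixed step eta."""
    w, *_ = np.linalg.lstsq(U[idx], v_obs, rcond=None)  # weights from observed rows
    p = U @ w                                           # prediction on all entries
    r = np.zeros(U.shape[0])
    r[idx] = v_obs - p[idx]                             # residual on observed entries
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma == 0:
        return U                                        # nothing to correct
    t = eta * sigma
    step = (np.cos(t) - 1) * p / np.linalg.norm(p) + np.sin(t) * r / np.linalg.norm(r)
    return U + np.outer(step, w / np.linalg.norm(w))    # rank-one geodesic step
```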

Query Complexity of Derivative-Free Optimization

no code implementations NeurIPS 2012 Kevin G. Jamieson, Robert Nowak, Ben Recht

Moreover, if the function evaluations are noisy, then approximating gradients by finite differences is difficult.
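
A quick illustration of that difficulty (a hypothetical one-dimensional example, not from the paper): the central-difference estimate is accurate for moderate spacing h, but the noise contribution grows like sigma/h, so shrinking h eventually makes the estimate worse.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 2                                 # true gradient at x=1 is 2
noisy_f = lambda x: f(x) + rng.normal(scale=1e-3)    # noisy zeroth-order oracle

for h in (1e-1, 1e-3, 1e-5):
    grad_est = (noisy_f(1 + h) - noisy_f(1 - h)) / (2 * h)
    print(f"h={h:g}  estimate={grad_est:.3f}")       # accuracy degrades as h shrinks
```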

On Finding the Largest Mean Among Many

no code implementations 17 Jun 2013 Kevin Jamieson, Matthew Malloy, Robert Nowak, Sebastien Bubeck

Motivated by large-scale applications, we are especially interested in identifying situations where the total number of samples that are necessary and sufficient to find the best arm scales linearly with the number of arms.

Multi-Armed Bandits

Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

no code implementations NeurIPS 2013 Nikhil Rao, Christopher Cox, Robert Nowak, Timothy Rogers

In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks.

lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits

no code implementations 27 Dec 2013 Kevin Jamieson, Matthew Malloy, Robert Nowak, Sébastien Bubeck

The paper proposes a novel upper confidence bound (UCB) procedure for identifying the arm with the largest mean in a multi-armed bandit game in the fixed-confidence setting, using a small number of total samples.

Multi-Armed Bandits
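
A minimal sketch of the idea, with simplified constants; the paper's exact confidence bound and stopping parameters differ.

```python
import numpy as np

def lil_ucb_sketch(means, delta=0.05, max_pulls=100000, seed=0):
    """Best-arm identification with an LIL-style bonus (simplified constants)."""
    rng = np.random.default_rng(seed)
    n = len(means)
    counts = np.ones(n)
    sums = rng.normal(means)                       # one initial pull per arm
    while counts.sum() < max_pulls:
        # Law-of-the-iterated-logarithm flavor: log log dependence on pull counts.
        bonus = np.sqrt(2 * np.log(np.log(np.e * counts) / delta) / counts)
        i = int(np.argmax(sums / counts + bonus))  # pull the most optimistic arm
        sums[i] += rng.normal(means[i])
        counts[i] += 1
        if counts[i] >= 1 + 9 * (counts.sum() - counts[i]):
            break                                  # one arm dominates the sampling
    return int(np.argmax(counts))                  # most-sampled arm is the estimate

print(lil_ucb_sketch([0.8, 0.5, 0.3, 0.2]))        # typically prints 0
```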

Classification with Sparse Overlapping Groups

no code implementations 18 Feb 2014 Nikhil Rao, Robert Nowak, Christopher Cox, Timothy Rogers

In this paper, we are interested in a less restrictive form of structured sparse feature selection: we assume that while features can be grouped according to some notion of similarity, not all features in a group need be selected for the task at hand.

Classification, Feature Selection +2

Data Requirement for Phylogenetic Inference from Multiple Loci: A New Distance Method

no code implementations 28 Apr 2014 Gautam Dasarathy, Robert Nowak, Sebastien Roch

We consider the problem of estimating the evolutionary history of a set of species (phylogeny or species tree) from several genes.

Sparse Dueling Bandits

no code implementations 31 Jan 2015 Kevin Jamieson, Sumeet Katariya, Atul Deshpande, Robert Nowak

We prove that in the absence of structural assumptions, the sample complexity of this problem is proportional to the sum of the inverse squared gaps between the Borda scores of each suboptimal arm and the best arm.
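
Concretely, the Borda score of an arm is its average probability of beating each other arm. A toy computation of the scores and the gap-dependent sample-complexity sum, on a made-up preference matrix:

```python
import numpy as np

# P[i, j] = probability that arm i beats arm j in a duel (made-up numbers).
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
n = len(P)
borda = (P.sum(axis=1) - 0.5) / (n - 1)        # exclude the self-duel P[i, i]
gaps = borda.max() - borda                     # Borda gaps to the best arm
complexity = sum(1 / g ** 2 for g in gaps if g > 0)
print(borda, complexity)                       # sum of inverse squared gaps
```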

Learning Single Index Models in High Dimensions

no code implementations 30 Jun 2015 Ravi Ganti, Nikhil Rao, Rebecca M. Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for classification and regression.

General Classification

On Learning High Dimensional Structured Single Index Models

no code implementations 13 Mar 2016 Nikhil Rao, Ravi Ganti, Laura Balzano, Rebecca Willett, Robert Nowak

Single Index Models (SIMs) are simple yet flexible semi-parametric models for machine learning, where the response variable is modeled as a monotonic function of a linear combination of features.

Active Algorithms For Preference Learning Problems with Multiple Populations

no code implementations 14 Mar 2016 Aniruddha Bhargava, Ravi Ganti, Robert Nowak

In this paper we model the problem of learning preferences of a population as an active learning problem.

Active Learning

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding

no code implementations NeurIPS 2016 Lalit Jain, Kevin Jamieson, Robert Nowak

First, we derive prediction error bounds for ordinal embedding with noise by exploiting the fact that the rank of a distance matrix of points in $\mathbb{R}^d$ is at most $d+2$.
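
That rank fact is easy to check numerically: the matrix of squared distances decomposes as two rank-one terms plus a rank-$d$ Gram term. An illustrative script (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.standard_normal((n, d))
sq = (X ** 2).sum(axis=1)
# D_ij = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>: rank-1 + rank-1 + rank-d.
D = sq[:, None] + sq[None, :] - 2 * X @ X.T
print(np.linalg.matrix_rank(D))                # prints 5 = d + 2
```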

Graph-Based Active Learning: A New Look at Expected Error Minimization

no code implementations 3 Sep 2016 Kwang-Sung Jun, Robert Nowak

In graph-based active learning, algorithms based on expected error minimization (EEM) have been popular and yield good empirical performance.

Active Learning

Scalable Generalized Linear Bandits: Online Computation and Hashing

no code implementations NeurIPS 2017 Kwang-Sung Jun, Aniruddha Bhargava, Robert Nowak, Rebecca Willett

Second, for the case where the number $N$ of arms is very large, we propose new algorithms in which each next arm is selected via an inner product search.

Thompson Sampling

Coalescent-based species tree estimation: a stochastic Farris transform

no code implementations 13 Jul 2017 Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch

As a corollary, we also obtain a new identifiability result of independent interest: for any species tree with $n \geq 3$ species, the rooted species tree can be identified from the distribution of its unrooted weighted gene trees even in the absence of a molecular clock.

Learning Low-Dimensional Metrics

no code implementations NeurIPS 2017 Lalit Jain, Blake Mason, Robert Nowak

This paper investigates the theoretical foundations of metric learning, focused on four key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy of the learned metric relative to the underlying true generative metric.

Metric Learning

Random Consensus Robust PCA

1 code implementation AISTATS, Electronic Journal of Statistics 2017 Daniel Pimentel-Alarcon, Robert Nowak

This paper presents r2pca, a random consensus method for robust principal component analysis.

Adaptive Sampling for Coarse Ranking

1 code implementation 20 Feb 2018 Sumeet Katariya, Lalit Jain, Nandana Sengupta, James Evans, Robert Nowak

We consider the problem of active coarse ranking, where the goal is to sort items according to their means into clusters of pre-specified sizes, by adaptively sampling from their reward distributions.
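
If the means were known, the target output would simply be the sorted arms cut into the pre-specified sizes; the paper's contribution is reaching this partition by adaptive sampling. A sketch of the target computation on hypothetical means:

```python
import numpy as np

means = np.array([0.9, 0.2, 0.6, 0.4, 0.8])    # unknown in practice
sizes = [2, 3]                                 # pre-specified cluster sizes
order = np.argsort(-means)                     # sort arms by mean, descending
clusters, start = [], 0
for s in sizes:
    clusters.append(order[start:start + s].tolist())
    start += s
print(clusters)                                # [[0, 4], [2, 3, 1]]
```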

Teacher Improves Learning by Selecting a Training Subset

no code implementations 25 Feb 2018 Yuzhe Ma, Robert Nowak, Philippe Rigollet, Xuezhou Zhang, Xiaojin Zhu

We call a learner super-teachable if a teacher can trim down an iid training set while making the learner learn even better.

General Classification, Regression

Scalable Sparse Subspace Clustering via Ordered Weighted $\ell_1$ Regression

no code implementations 10 Jul 2018 Urvashi Oswal, Robert Nowak

The main contribution of the paper is a new approach to subspace clustering that is significantly more computationally efficient and scalable than existing state-of-the-art methods.

Clustering, Regression

Bilinear Bandits with Low-rank Structure

no code implementations 8 Jan 2019 Kwang-Sung Jun, Rebecca Willett, Stephen Wright, Robert Nowak

We introduce the bilinear bandit problem with low-rank structure in which an action takes the form of a pair of arms from two different entity types, and the reward is a bilinear function of the known feature vectors of the arms.
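
In symbols, pulling the arm pair $(x, z)$ yields a reward of the form $x^\top \Theta^* z$ plus noise, with $\Theta^*$ low-rank. A toy reward model (dimensions, rank, and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 8, 6, 2
Theta = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))  # rank-2 parameter
x = rng.standard_normal(d1)                     # left-arm feature vector
z = rng.standard_normal(d2)                     # right-arm feature vector
reward = x @ Theta @ z + rng.normal(scale=0.1)  # bilinear payoff plus noise
print(reward)
```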

Linear Bandits with Feature Feedback

no code implementations 9 Mar 2019 Urvashi Oswal, Aniruddha Bhargava, Robert Nowak

In comparison, the regret of traditional linear bandits is $d\sqrt{T}$, where $d$ is the total number of (relevant and irrelevant) features, so the improvement can be dramatic if $k\ll d$.

MaxGap Bandit: Adaptive Algorithms for Approximate Ranking

1 code implementation NeurIPS 2019 Sumeet Katariya, Ardhendu Tripathy, Robert Nowak

This paper studies the problem of adaptively sampling from K distributions (arms) in order to identify the largest gap between any two adjacent means.

Outlier Detection
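
The quantity the algorithms must estimate from samples, computed here directly from hypothetical known means for illustration:

```python
import numpy as np

means = np.array([0.10, 0.20, 0.60, 0.70, 0.75])   # hypothetical arm means
order = np.argsort(means)
gaps = np.diff(means[order])                       # gaps between adjacent sorted means
k = int(np.argmax(gaps))
print(f"max gap {gaps[k]:.2f} splits arms into "
      f"{order[:k + 1].tolist()} vs {order[k + 1:].tolist()}")
```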

Should Adversarial Attacks Use Pixel p-Norm?

no code implementations 6 Jun 2019 Ayon Sen, Xiaojin Zhu, Liam Marshall, Robert Nowak

Adversarial attacks aim to confound machine learning systems, while remaining virtually imperceptible to humans.

Adversarial Attack, BIG-bench Machine Learning +2

Finding All ε-Good Arms in Stochastic Bandits

1 code implementation 16 Jun 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

Mathematically, the all-$\epsilon$-good arm identification problem presents significant new challenges and surprises that do not arise in the pure-exploration objectives studied in the past.

Multi-Armed Bandits

On Regret with Multiple Best Arms

no code implementations NeurIPS 2020 Yinglun Zhu, Robert Nowak

With additional knowledge of the expected reward of the best arm, we propose another adaptive algorithm that is minimax optimal, up to polylog factors, over all hardness levels.

Robust Outlier Arm Identification

1 code implementation ICML 2020 Yinglun Zhu, Sumeet Katariya, Robert Nowak

We study the problem of Robust Outlier Arm Identification (ROAI), where the goal is to identify arms whose expected rewards deviate substantially from the majority, by adaptively sampling from their reward distributions.

Outlier Detection
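
One robust way to make "deviates substantially from the majority" precise is a median/MAD rule, in the spirit of the paper's outlier definition; a toy version on known means, with an assumed threshold multiplier:

```python
import numpy as np

mu = np.array([0.20, 0.25, 0.30, 0.28, 0.90])      # hypothetical arm means
med = np.median(mu)
mad = np.median(np.abs(mu - med))                  # robust scale estimate
outliers = np.where(np.abs(mu - med) > 3 * mad)[0]
print(outliers)                                    # [4]: only the last arm is flagged
```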

Finding All $\epsilon$-Good Arms in Stochastic Bandits

no code implementations NeurIPS 2020 Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak

The pure-exploration problem in stochastic multi-armed bandits aims to find one or more arms with the largest (or near largest) means.

Multi-Armed Bandits

Chernoff Sampling for Active Testing and Extension to Active Regression

no code implementations 15 Dec 2020 Subhojyoti Mukherjee, Ardhendu Tripathy, Robert Nowak

Active learning can reduce the number of samples needed to perform a hypothesis test and to estimate the parameters of a model.

Active Learning, Experimental Design +1

Pareto Optimal Model Selection in Linear Bandits

no code implementations 12 Feb 2021 Yinglun Zhu, Robert Nowak

In this paper, we establish the first lower bound for the model selection problem.

Model Selection

Nearest Neighbor Search Under Uncertainty

no code implementations 8 Mar 2021 Blake Mason, Ardhendu Tripathy, Robert Nowak

Specifically, consider the setting in which an NNS algorithm has access only to a stochastic distance oracle that provides a noisy, unbiased estimate of the distance between any pair of points, rather than the exact distance.

Multi-Armed Bandits, Representation Learning
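
A successive-elimination sketch under that oracle model (the confidence radii and constants are simplified assumptions, not the paper's exact algorithm):

```python
import numpy as np

def nn_under_uncertainty(true_d, max_rounds=5000, seed=0):
    """Find the nearest neighbor using only noisy, unbiased distance queries."""
    rng = np.random.default_rng(seed)
    n = len(true_d)
    active = np.arange(n)
    sums, counts = np.zeros(n), np.zeros(n)
    for t in range(1, max_rounds + 1):
        sums[active] += true_d[active] + rng.normal(size=len(active))  # noisy oracle
        counts[active] += 1
        mean = sums[active] / counts[active]
        rad = np.sqrt(2 * np.log(4 * n * t ** 2) / counts[active])     # confidence radius
        active = active[mean - rad <= (mean + rad).min()]              # eliminate losers
        if len(active) == 1:
            break
    return active

print(nn_under_uncertainty(np.array([3.0, 1.0, 2.0, 5.0])))            # [1]
```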

Pure Exploration in Kernel and Neural Bandits

no code implementations NeurIPS 2021 Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett, Robert Nowak

To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space and carefully deal with the induced model misspecification.

Near Instance Optimal Model Selection for Pure Exploration Linear Bandits

no code implementations 10 Sep 2021 Yinglun Zhu, Julian Katz-Samuels, Robert Nowak

The core of our algorithms is a new optimization problem based on experimental design that leverages the geometry of the action set to identify a near-optimal hypothesis class.

Experimental Design, Model Selection

Nearly Optimal Algorithms for Level Set Estimation

no code implementations 2 Nov 2021 Blake Mason, Romain Camilleri, Subhojyoti Mukherjee, Kevin Jamieson, Robert Nowak, Lalit Jain

The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown.

Experimental Design

GALAXY: Graph-based Active Learning at the Extreme

1 code implementation 3 Feb 2022 Jifan Zhang, Julian Katz-Samuels, Robert Nowak

Active learning is a label-efficient approach to train highly effective models while interactively selecting only small subsets of unlabelled data for labelling and training.

Active Learning

ReVar: Strengthening Policy Evaluation via Reduced Variance Sampling

no code implementations 9 Mar 2022 Subhojyoti Mukherjee, Josiah P. Hanna, Robert Nowak

This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs).

Efficient Active Learning with Abstention

no code implementations 31 Mar 2022 Yinglun Zhu, Robert Nowak

Furthermore, our algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term \emph{proper abstention} that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable ``noise-seeking'' behavior often seen in active learning).

Active Learning
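
The "close to a fair coin" condition suggests a simple Chow-style shape for the rule; a toy version capturing the spirit, where the margin value is an assumption rather than the paper's calibrated threshold:

```python
def predict_or_abstain(p_hat, margin=0.1):
    """Abstain when the estimated probability of class 1 is near 1/2."""
    if abs(p_hat - 0.5) < margin:
        return "abstain"                # hard example: label is nearly a coin flip
    return 1 if p_hat > 0.5 else 0

print(predict_or_abstain(0.52), predict_or_abstain(0.95))   # abstain 1
```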

Fast genomic optical map assembly algorithm using binary representation

no code implementations 13 Oct 2022 Przemysław Stawczyk, Robert Nowak

The algorithm consists of several steps, the most important of which are: (1) conversion of the restriction maps into binary strings, (2) detection of overlaps between restriction maps, (3) determination of the layout of the set of restriction maps, and (4) creation of consensus genomic maps.
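
A guess at what step (1) might look like in code; the bin resolution and encoding details are hypothetical, not taken from the paper:

```python
def map_to_binary(fragments, resolution=100):
    """Encode a restriction map (fragment lengths in bp) as a binary string
    with a 1 in each bin that contains a cut site."""
    length = sum(fragments)
    bits = ["0"] * (length // resolution + 1)
    pos = 0
    for frag in fragments[:-1]:        # cut sites sit between fragments
        pos += frag
        bits[pos // resolution] = "1"
    return "".join(bits)

print(map_to_binary([1200, 800, 1500]))   # 1s at bins 12 and 20
```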

Active Learning with Neural Networks: Insights from Nonparametric Statistics

no code implementations 15 Oct 2022 Yinglun Zhu, Robert Nowak

Deep neural networks have great representation power, but typically require large numbers of training examples.

Active Learning

A Fully First-Order Method for Stochastic Bilevel Optimization

no code implementations 26 Jan 2023 Jeongyeol Kwon, Dohyun Kwon, Stephen Wright, Robert Nowak

Specifically, we show that F2SA converges to an $\epsilon$-stationary solution of the bilevel problem after $\epsilon^{-7/2}, \epsilon^{-5/2}$, and $\epsilon^{-3/2}$ iterations (each iteration using $O(1)$ samples) when stochastic noises are in both level objectives, only in the upper-level objective, and not present (deterministic settings), respectively.

Bilevel Optimization

SPEED: Experimental Design for Policy Evaluation in Linear Heteroscedastic Bandits

no code implementations 29 Jan 2023 Subhojyoti Mukherjee, Qiaomin Xie, Josiah Hanna, Robert Nowak

In this paper, we study the problem of optimal data collection for policy evaluation in linear bandits.

Experimental Design

Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection

no code implementations 15 Jun 2023 Haoyue Bai, Gregory Canal, Xuefeng Du, Jeongyeol Kwon, Robert Nowak, Yixuan Li

Modern machine learning models deployed in the wild can encounter both covariate and semantic shifts, giving rise to the problems of out-of-distribution (OOD) generalization and OOD detection respectively.

Out-of-Distribution Generalization

On Penalty Methods for Nonconvex Bilevel Optimization and First-Order Stochastic Approximation

no code implementations 4 Sep 2023 Jeongyeol Kwon, Dohyun Kwon, Stephen Wright, Robert Nowak

When the perturbed lower-level problem uniformly satisfies the small-error proximal error-bound (EB) condition, we propose a first-order algorithm that converges to an $\epsilon$-stationary point of the penalty function, using in total $O(\epsilon^{-3})$ and $O(\epsilon^{-7})$ accesses to first-order (stochastic) gradient oracles when the oracle is deterministic and oracles are noisy, respectively.

Bilevel Optimization

Looped Transformers are Better at Learning Learning Algorithms

1 code implementation 21 Nov 2023 Liu Yang, Kangwook Lee, Robert Nowak, Dimitris Papailiopoulos

Transformers have demonstrated effectiveness at solving data-fitting problems in-context from various (latent) models, as reported by Garg et al.

DIRECT: Deep Active Learning under Imbalance and Label Noise

no code implementations 14 Dec 2023 Shyam Nuggehalli, Jifan Zhang, Lalit Jain, Robert Nowak

Our results demonstrate that DIRECT can save more than 60% of the annotation budget compared to state-of-the-art active learning algorithms, and more than 80% compared to random sampling.

Active Learning

Learning from the Best: Active Learning for Wireless Communications

no code implementations 23 Jan 2024 Nasim Soltani, Jifan Zhang, Batool Salehi, Debashri Roy, Robert Nowak, Kaushik Chowdhury

We evaluate the performance of different active learning algorithms on a publicly available multi-modal dataset with different modalities including image and LiDAR.

Active Learning

Future Prediction Can be a Strong Evidence of Good History Representation in Partially Observable Environments

no code implementations 11 Feb 2024 Jeongyeol Kwon, Liu Yang, Robert Nowak, Josiah Hanna

Then, our main contributions are two-fold: (a) we demonstrate that the performance of reinforcement learning is strongly correlated with the prediction accuracy of future observations in partially observable environments, and (b) our approach can significantly improve the overall end-to-end approach by preventing the high-variance, noisy signals of reinforcement learning objectives from influencing representation learning.

Future Prediction, Memorization +3
