Search Results for author: Aaditya Ramdas

Found 67 papers, 28 papers with code

Online Control of the False Coverage Rate and False Sign Rate

no code implementations ICML 2020 Asaf Weinstein, Aaditya Ramdas

Here, we consider the general problem of FCR control in the online setting, where there is an infinite sequence of fixed unknown parameters ordered by time.

Prediction Intervals

Faster online calibration without randomization: interval forecasts and the power of two choices

no code implementations 27 Apr 2022 Chirag Gupta, Aaditya Ramdas

We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.

Fully Adaptive Composition in Differential Privacy

no code implementations 10 Mar 2022 Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu

We construct filters that match the tightness of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.

E-detectors: a nonparametric framework for online changepoint detection

no code implementations 7 Mar 2022 Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

Sequential changepoint detection is a classical problem with a variety of applications.

Locally private nonparametric confidence intervals and sequences

no code implementations 17 Feb 2022 Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas

This work derives methods for performing nonparametric, nonasymptotic statistical inference for population parameters under the constraint of local differential privacy (LDP).

Data blurring: sample splitting a single sample

no code implementations 21 Dec 2021 James Leiner, Boyan Duan, Larry Wasserman, Aaditya Ramdas

Rasines and Young (2021) offer an alternative route to accomplishing this task: randomizing $X$ with additive Gaussian noise, which enables post-selection inference in finite samples for Gaussian-distributed data and asymptotically for non-Gaussian additive models.

Additive models, Bayesian Inference

Best Arm Identification under Additive Transfer Bandits

no code implementations 8 Dec 2021 Ojash Neopane, Aaditya Ramdas, Aarti Singh

We consider a variant of the best arm identification (BAI) problem in multi-armed bandits (MAB) in which there are two sets of arms (source and target), and the objective is to determine the best target arm while only pulling source arms.

Multi-Armed Bandits, Transfer Learning

Universal Inference Meets Random Projections: A Scalable Test for Log-concavity

1 code implementation 17 Nov 2021 Robin Dunn, Larry Wasserman, Aaditya Ramdas

Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.

Tracking the risk of a deployed model and detecting harmful distribution shifts

no code implementations ICLR 2022 Aleksandr Podkopaev, Aaditya Ramdas

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain -- but not all -- distribution shifts could result in significant performance degradation.

Comparing Sequential Forecasters

1 code implementation 30 Sep 2021 Yo Joong Choe, Aaditya Ramdas

Consider two or more forecasters, each making a sequence of predictions for different events over time.

A unified framework for bandit multiple testing

1 code implementation NeurIPS 2021 Ziyu Xu, Ruodu Wang, Aaditya Ramdas

In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries) while mistakenly identifying only a few uninteresting ones (false discoveries).

Sequential Estimation of Convex Functionals and Divergences

no code implementations 16 Mar 2021 Tudor Manole, Aaditya Ramdas

We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances.

Time-uniform central limit theory with applications to anytime-valid causal inference

2 code implementations 11 Mar 2021 Ian Waudby-Smith, David Arbour, Ritwik Sinha, Edward H. Kennedy, Aaditya Ramdas

Our methods take the form of confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time.

Causal Inference

Distribution-free uncertainty quantification for classification under label shift

no code implementations 4 Mar 2021 Aleksandr Podkopaev, Aaditya Ramdas

Piggybacking on recent progress in addressing label shift (for better prediction), we examine the right way to achieve UQ by reweighting the aforementioned conformal and calibration procedures whenever some unlabeled data from the target distribution is available.

Classification, General Classification

Large-scale simultaneous inference under dependence

no code implementations 22 Feb 2021 Jinjin Tian, Xu Chen, Eugene Katsevich, Jelle Goeman, Aaditya Ramdas

Simultaneous inference allows for the exploration of data while deciding on criteria for proclaiming discoveries.

Statistics Theory, Methodology

Off-policy Confidence Sequences

no code implementations 18 Feb 2021 Nikos Karampatziakis, Paul Mineiro, Aaditya Ramdas

We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting.

Dimension-agnostic inference

no code implementations 10 Nov 2020 Ilmun Kim, Aaditya Ramdas

This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d_n/n \approx 0.2$?

Two-sample testing

Dynamic Algorithms for Online Multiple Testing

1 code implementation 26 Oct 2020 Ziyu Xu, Aaditya Ramdas

This statistical advance is enabled by the development of new algorithmic ideas: earlier algorithms are more "static" while our new ones allow for the dynamical adjustment of testing levels based on the amount of wealth the algorithm has accumulated.

Estimating means of bounded random variables by betting

2 code implementations 19 Oct 2020 Ian Waudby-Smith, Aaditya Ramdas

This paper derives confidence intervals (CI) and time-uniform confidence sequences (CS) for the classical problem of estimating an unknown mean from bounded observations.
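
The betting construction in this paper is considerably tighter than classical approaches; as a point of reference, the sketch below gives a simple valid time-uniform confidence sequence for $[0,1]$-valued observations by union-bounding fixed-time Hoeffding intervals. The function name and the $6/(\pi^2 t^2)$ alpha-allocation are illustrative choices, not the paper's method.

```python
import numpy as np

def hoeffding_cs(xs, alpha=0.05):
    """Time-uniform confidence sequence for the mean of [0,1]-valued data:
    union-bound fixed-time Hoeffding intervals, spending alpha_t = alpha *
    6/(pi^2 t^2) at time t (a summable series, so total error <= alpha).
    Valid but much looser than the paper's betting-based CS."""
    lowers, uppers = [], []
    for t in range(1, len(xs) + 1):
        mean_t = np.mean(xs[:t])
        alpha_t = alpha * 6.0 / (np.pi ** 2 * t ** 2)
        radius = np.sqrt(np.log(2.0 / alpha_t) / (2.0 * t))
        lowers.append(max(0.0, mean_t - radius))
        uppers.append(min(1.0, mean_t + radius))
    return np.array(lowers), np.array(uppers)

lo, hi = hoeffding_cs(np.random.default_rng(0).beta(2, 5, size=1000))
```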

Distribution-free binary classification: prediction sets, confidence intervals and calibration

1 code implementation NeurIPS 2020 Chirag Gupta, Aleksandr Podkopaev, Aaditya Ramdas

We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting, that is, without making any distributional assumptions on the data.

General Classification

The leave-one-covariate-out conditional randomization test

1 code implementation 15 Jun 2020 Eugene Katsevich, Aaditya Ramdas

Conditional independence testing is an important problem, yet provably hard without assumptions.

Uncertainty quantification using martingales for misspecified Gaussian processes

1 code implementation 12 Jun 2020 Willie Neiswanger, Aaditya Ramdas

There is a necessary cost to achieving robustness: if the prior was correct, posterior GP bands are narrower than our CS.

Gaussian Processes

Confidence sequences for sampling without replacement

3 code implementations NeurIPS 2020 Ian Waudby-Smith, Aaditya Ramdas

We then present Hoeffding- and empirical-Bernstein-type time-uniform CSs and fixed-time confidence intervals for sampling WoR, which improve on previous bounds in the literature and explicitly quantify the benefit of WoR sampling.

Fast and Powerful Conditional Randomization Testing via Distillation

1 code implementation 6 Jun 2020 Molei Liu, Eugene Katsevich, Lucas Janson, Aaditya Ramdas

We propose the distilled CRT, a novel approach to using state-of-the-art machine learning algorithms in the CRT while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the CRT's statistical guarantees without suffering the usual computational expense.
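
For context, the plain (undistilled) CRT that the dCRT accelerates can be sketched in a few lines; `sample_X_given_Z` and `statistic` are hypothetical placeholders for the known conditional law of $X$ given $Z$ and for any measure of association.

```python
import numpy as np

def crt_pvalue(X, Y, Z, sample_X_given_Z, statistic, B=200, rng=None):
    """Plain model-X conditional randomization test: resample X from its
    known conditional distribution given Z, recompute the statistic, and
    report a randomization p-value. The dCRT's speedup comes from
    'distilling' Y's dependence on Z so each resample is cheap."""
    rng = rng or np.random.default_rng()
    t_obs = statistic(X, Y, Z)
    t_null = [statistic(sample_X_given_Z(Z, rng), Y, Z) for _ in range(B)]
    return (1 + sum(t >= t_obs for t in t_null)) / (B + 1)  # valid p-value
```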

Methodology

On the power of conditional independence testing under model-X

1 code implementation 12 May 2020 Eugene Katsevich, Aaditya Ramdas

For testing conditional independence (CI) of a response Y and a predictor X given covariates Z, the recently introduced model-X (MX) framework has been the subject of active methodological research, especially in the context of MX knockoffs and their successful application to genome-wide association studies.

Causal Inference

Familywise Error Rate Control by Interactive Unmasking

1 code implementation ICML 2020 Boyan Duan, Aaditya Ramdas, Larry Wasserman

We propose a method for multiple hypothesis testing with familywise error rate (FWER) control, called the i-FWER test.

Methodology

On conditional versus marginal bias in multi-armed bandits

no code implementations ICML 2020 Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis that has recently received considerable attention in the literature.

Multi-Armed Bandits

Universal Inference

no code implementations 24 Dec 2019 Larry Wasserman, Aaditya Ramdas, Sivaraman Balakrishnan

Constructing tests and confidence sets for such models is notoriously difficult.

The Power of Batching in Multiple Hypothesis Testing

no code implementations 11 Oct 2019 Tijana Zrnic, Daniel L. Jiang, Aaditya Ramdas, Michael I. Jordan

Algorithms for controlling the false discovery rate (FDR) in multiple testing fall into two important classes: offline and online algorithms.

Two-sample testing

Online control of the familywise error rate

1 code implementation 10 Oct 2019 Jinjin Tian, Aaditya Ramdas

Biological research often involves testing a growing number of null hypotheses as new data is accumulated over time.

Path Length Bounds for Gradient Descent and Flow

no code implementations 2 Aug 2019 Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas

We derive bounds on the path length $\zeta$ of gradient descent (GD) and gradient flow (GF) curves for various classes of smooth convex and nonconvex functions.
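
The path length in question is simply the total distance traveled by the iterates; a tiny illustration on a quadratic (the objective, step size, and iteration count are arbitrary choices for demonstration):

```python
import numpy as np

# Path length zeta of gradient descent on f(x) = 0.5 * ||A x||^2.
A = np.diag([3.0, 1.0])
x = np.array([1.0, 1.0])
zeta = 0.0
for _ in range(100):
    x_next = x - 0.1 * (A.T @ A @ x)        # one GD step, fixed step size
    zeta += np.linalg.norm(x_next - x)      # accumulate ||x_{t+1} - x_t||
    x = x_next
```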

Sequential estimation of quantiles with applications to A/B-testing and best-arm identification

4 code implementations 24 Jun 2019 Steven R. Howard, Aaditya Ramdas

We propose confidence sequences -- sequences of confidence intervals which are valid uniformly over time -- for quantiles of any distribution over a complete, fully-ordered set, based on a stream of i.i.d. observations.

Are sample means in multi-armed bandits positively or negatively biased?

no code implementations NeurIPS 2019 Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean.

Multi-Armed Bandits, Selection bias

Predictive inference with the jackknife+

no code implementations 8 May 2019 Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

This paper introduces the jackknife+, which is a novel method for constructing predictive confidence intervals.
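
A compact sketch of the jackknife+ construction, which combines leave-one-out residuals with leave-one-out predictions at the test point and guarantees coverage at least $1 - 2\alpha$; the linear model below is a placeholder for any regression algorithm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def jackknife_plus(X, y, x_test, alpha=0.1):
    """Jackknife+ predictive interval (coverage >= 1 - 2*alpha): for each i,
    fit on all points except i, record the leave-one-out residual R_i and
    the prediction at x_test; the endpoints are order statistics of
    prediction -/+ R_i. Indices are clipped at the boundary for brevity."""
    n = len(y)
    lo_vals, hi_vals = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = LinearRegression().fit(X[mask], y[mask])
        r_i = abs(y[i] - model.predict(X[i:i + 1])[0])
        pred = model.predict(x_test.reshape(1, -1))[0]
        lo_vals[i], hi_vals[i] = pred - r_i, pred + r_i
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return np.sort(lo_vals)[max(n - k, 0)], np.sort(hi_vals)[min(k - 1, n - 1)]
```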

Methodology

Conformal Prediction Under Covariate Shift

1 code implementation NeurIPS 2019 Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

We extend conformal prediction methodology beyond the case of exchangeable data.

Methodology

A Higher-Order Kolmogorov-Smirnov Test

no code implementations 24 Mar 2019 Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani

We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.

The limits of distribution-free conditional predictive inference

no code implementations 12 Mar 2019 Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani

We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.

Statistics Theory

Asynchronous Online Testing of Multiple Hypotheses

2 code implementations 12 Dec 2018 Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan

We consider the problem of asynchronous online testing, aimed at providing control of the false discovery rate (FDR) during a continual stream of data collection and testing, where each test may be a sequential test that can start and stop at arbitrary times.

Time-uniform, nonparametric, nonasymptotic confidence sequences

4 code implementations 18 Oct 2018 Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, Jasjeet Sekhon

A confidence sequence is a sequence of confidence intervals that is uniformly valid over an unbounded time horizon.

Statistics Theory, Probability, Methodology

Towards "simultaneous selective inference": post-hoc bounds on the false discovery proportion

1 code implementation 19 Mar 2018 Eugene Katsevich, Aaditya Ramdas

In this paper, we show that the entire path of rejection sets considered by a variety of existing FDR procedures (like BH, knockoffs, and many others) can be endowed with simultaneous high-probability bounds on FDP.

Statistics Theory

SAFFRON: an adaptive algorithm for online control of the false discovery rate

1 code implementation ICML 2018 Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan

However, unlike older methods, SAFFRON's threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses.

Online control of the false discovery rate with decaying memory

1 code implementation NeurIPS 2017 Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed.
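
The simplest rule satisfying this online constraint is non-adaptive alpha-spending, sketched below; the $6/(\pi^2 t^2)$ schedule is an arbitrary summable choice, and this baseline is far more conservative than the paper's decaying-memory procedure or SAFFRON above.

```python
import numpy as np

def online_alpha_spending(pvalues, alpha=0.05):
    """Online Bonferroni-style alpha-spending: test the t-th hypothesis at
    level alpha * 6/(pi^2 t^2), deciding immediately and never revisiting.
    The levels sum to alpha, so FWER (and hence FDR) is controlled, but
    adaptive wealth-based methods (LORD, SAFFRON) are far more powerful."""
    return [p <= alpha * 6.0 / (np.pi ** 2 * t ** 2)
            for t, p in enumerate(pvalues, start=1)]
```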


DAGGER: A sequential algorithm for FDR control on DAGs

1 code implementation 29 Sep 2017 Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan

We propose a linear-time, single-pass, top-down algorithm for multiple testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and edges specify a partial ordering in which hypotheses must be tested.

Model Selection

A framework for Multi-A(rmed)/B(andit) testing with online FDR control

1 code implementation NeurIPS 2017 Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright

We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time.

A unified treatment of multiple testing with prior knowledge using the p-filter

no code implementations 18 Mar 2017 Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, Michael I. Jordan

There is a significant literature on methods for incorporating knowledge into multiple testing procedures so as to improve their power and precision.

Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy

1 code implementation 14 Nov 2016 Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton

In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples.
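
The statistic underlying both roles is the squared MMD; a minimal unbiased estimator with a Gaussian kernel is sketched below (the bandwidth is fixed here, whereas the paper optimizes kernel parameters for test power).

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of MMD^2 between samples X (m x d) and Y (n x d)
    under a Gaussian kernel; diagonal terms are dropped for unbiasedness."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * bandwidth ** 2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())
```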

Function-Specific Mixing Times and Concentration Away from Equilibrium

no code implementations 6 May 2016 Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright

These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants.

On kernel methods for covariates that are rankings

no code implementations 25 Mar 2016 Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht

This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features.

Asymptotic behavior of $\ell_p$-based Laplacian regularization in semi-supervised learning

no code implementations 2 Mar 2016 Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan

Together, these properties show that $p = d+1$ is an optimal choice, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining maximally sensitive to $P$.

Classification accuracy as a proxy for two sample testing

no code implementations 6 Feb 2016 Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman

We prove two results that hold for all classifiers in any dimension: if a classifier's true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d, n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), and (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.
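
A sketch of the permutation-based version of such a test, using classification accuracy as the statistic; logistic regression, in-sample accuracy, and B = 200 permutations are placeholder choices rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_two_sample_pvalue(X, Y, B=200, rng=None):
    """Two-sample test via a classifier: label the pooled sample, use the
    achieved accuracy as the statistic, and calibrate by permuting labels
    (exchangeable under the null, so the p-value is valid)."""
    rng = rng or np.random.default_rng()
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    def accuracy(lab):
        return LogisticRegression(max_iter=1000).fit(Z, lab).score(Z, lab)
    t_obs = accuracy(labels)
    t_null = [accuracy(rng.permutation(labels)) for _ in range(B)]
    return (1 + sum(t >= t_obs for t in t_null)) / (B + 1)
```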

Classification, General Classification

Minimax Lower Bounds for Linear Independence Testing

no code implementations 23 Jan 2016 Aaditya Ramdas, David Isenberg, Aarti Singh, Larry Wasserman

Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i, Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not.

Two-sample testing

The p-filter: multi-layer FDR control for grouped hypotheses

no code implementations 10 Dec 2015 Rina Foygel Barber, Aaditya Ramdas

In many practical applications of multiple hypothesis testing using the False Discovery Rate (FDR), the given hypotheses can be naturally partitioned into groups. One may then want to control not only the number of false discoveries (wrongly rejected null hypotheses), but also the number of falsely discovered groups of hypotheses (we say a group is falsely discovered if at least one hypothesis within that group is rejected, when in reality the group contains only nulls).

Two-sample testing

On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests

1 code implementation 8 Sep 2015 Aaditya Ramdas, Nicolas Garcia, Marco Cuturi

In this work, our central object is the Wasserstein distance, as we form a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves, to multivariate tests involving energy statistics and kernel-based maximum mean discrepancy.
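
In one dimension the central object is easy to compute: for two samples of equal size, the empirical 1-Wasserstein distance is just the average gap between order statistics, as in this small illustration.

```python
import numpy as np

# Empirical 1-Wasserstein distance between equal-size univariate samples:
# sort both and average the absolute gaps between order statistics.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)
y = rng.normal(0.5, 1.0, size=500)
w1 = np.mean(np.abs(np.sort(x) - np.sort(y)))  # true W1 of these laws is 0.5
```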

Two-sample testing

Adaptivity and Computation-Statistics Tradeoffs for Kernel and Distance based High Dimensional Two Sample Testing

no code implementations 4 Aug 2015 Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.

Two-sample testing

Fast Two-Sample Testing with Analytic Representations of Probability Measures

1 code implementation NeurIPS 2015 Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, Arthur Gretton

The new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the (non-smoothed) empirical characteristic functions, while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests.

Two-sample testing

Sequential Nonparametric Testing with the Law of the Iterated Logarithm

no code implementations 10 Jun 2015 Akshay Balsubramani, Aaditya Ramdas

It is novel in several ways: (a) it takes linear time and constant space to compute on the fly, (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints up to a small factor, and (c) it accesses only as many samples as are required - its stopping time adapts to the unknown difficulty of the problem.

Two-sample testing

Algorithmic Connections Between Active Learning and Stochastic Convex Optimization

no code implementations 15 May 2015 Aaditya Ramdas, Aarti Singh

Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning; it achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.

Active Learning

Margins, Kernels and Non-linear Smoothed Perceptrons

no code implementations 15 May 2015 Aaditya Ramdas, Javier Peña

This allows us to give guarantees for a primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|}, \tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.

An Analysis of Active Learning With Uniform Feature Noise

no code implementations 15 May 2015 Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman

For larger $\sigma$, the unflattening of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise appears to be beneficial.

Active Learning

On the High-dimensional Power of Linear-time Kernel Two-Sample Testing under Mean-difference Alternatives

no code implementations 23 Nov 2014 Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

The current literature is split into two kinds of tests - those which are consistent without any assumptions about how the distributions may differ (general alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (mean-shift alternatives).

Two-sample testing

Rows vs Columns for Linear Systems of Equations - Randomized Kaczmarz or Coordinate Descent?

no code implementations 20 Jun 2014 Aaditya Ramdas

This paper is about randomized iterative algorithms for solving a linear system of equations $X \beta = y$ in different settings.
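
One of the two row/column iterations compared in the paper, randomized Kaczmarz, is a one-line update; uniform row sampling below is a simplification of the norm-weighted sampling used in the classical analysis.

```python
import numpy as np

def randomized_kaczmarz(X, y, iters=2000, rng=None):
    """Solve X beta = y by repeatedly projecting the iterate onto the
    hyperplane defined by a randomly chosen row (equation) of the system."""
    rng = rng or np.random.default_rng()
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        i = rng.integers(X.shape[0])
        xi = X[i]
        beta += (y[i] - xi @ beta) / (xi @ xi) * xi
    return beta
```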

Towards A Deeper Geometric, Analytic and Algorithmic Understanding of Margins

no code implementations 20 Jun 2014 Aaditya Ramdas, Javier Peña

Given a matrix $A$, a linear feasibility problem (of which linear classification is a special case) aims to find a solution to a primal problem $w: A^Tw > \mathbf{0}$ or a certificate for the dual problem, which is a probability distribution $p: Ap = \mathbf{0}$.

Fast and Flexible ADMM Algorithms for Trend Filtering

4 code implementations 9 Jun 2014 Aaditya Ramdas, Ryan J. Tibshirani

This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.
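
For intuition, a vanilla ADMM splitting for piecewise-linear trend filtering is sketched below; the paper's specialized algorithm instead splits so that each subproblem is a fast exact 1d fused-lasso solve, which is what makes it fast and robust.

```python
import numpy as np

def trend_filter_admm(y, lam=10.0, rho=1.0, iters=500):
    """Vanilla ADMM for: minimize 0.5*||y - beta||^2 + lam*||D beta||_1,
    where D takes second differences (piecewise-linear trend filtering).
    Alternates a linear solve, soft-thresholding, and a dual update."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    z = np.zeros(n - 2)
    u = np.zeros(n - 2)
    A = np.eye(n) + rho * D.T @ D         # fixed beta-update system
    for _ in range(iters):
        beta = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Db = D @ beta
        z = np.sign(Db + u) * np.maximum(np.abs(Db + u) - lam / rho, 0.0)
        u += Db - z
    return beta
```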

Nonparametric Independence Testing for Small Sample Sizes

no code implementations 7 Jun 2014 Aaditya Ramdas, Leila Wehbe

This paper deals with the problem of nonparametric independence testing, a fundamental decision-theoretic problem that asks if two arbitrary (possibly multivariate) random variables $X, Y$ are independent or not, a question that comes up in many fields like causality and neuroscience.
