
no code implementations • ICML 2020 • Asaf Weinstein, Aaditya Ramdas

Here, we consider the general problem of FCR control in the online setting, where there is an infinite sequence of fixed unknown parameters ordered by time.

no code implementations • 27 Apr 2022 • Chirag Gupta, Aaditya Ramdas

We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.

no code implementations • 10 Mar 2022 • Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu

We construct filters that match the tightness of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.

no code implementations • 7 Mar 2022 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

Sequential changepoint detection is a classical problem with a variety of applications.

no code implementations • 17 Feb 2022 • Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas

This work derives methods for performing nonparametric, nonasymptotic statistical inference for population parameters under the constraint of local differential privacy (LDP).

no code implementations • 21 Dec 2021 • James Leiner, Boyan Duan, Larry Wasserman, Aaditya Ramdas

Rasines and Young (2021) offer an alternative route to accomplishing this task: randomizing $X$ with additive Gaussian noise, which enables post-selection inference in finite samples for Gaussian-distributed data and asymptotically for non-Gaussian additive models.

no code implementations • 8 Dec 2021 • Ojash Neopane, Aaditya Ramdas, Aarti Singh

We consider a variant of the best arm identification (BAI) problem in multi-armed bandits (MAB) in which there are two sets of arms (source and target), and the objective is to determine the best target arm while only pulling source arms.

1 code implementation • 17 Nov 2021 • Robin Dunn, Larry Wasserman, Aaditya Ramdas

Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.

no code implementations • ICLR 2022 • Aleksandr Podkopaev, Aaditya Ramdas

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain -- but not all -- distribution shifts could result in significant performance degradation.

1 code implementation • 30 Sep 2021 • Yo Joong Choe, Aaditya Ramdas

Consider two or more forecasters, each making a sequence of predictions for different events over time.

1 code implementation • NeurIPS 2021 • Ziyu Xu, Ruodu Wang, Aaditya Ramdas

In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries) while only mistakenly identifying a few uninteresting ones (false discoveries).

no code implementations • 16 Mar 2021 • Tudor Manole, Aaditya Ramdas

We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances.

2 code implementations • 11 Mar 2021 • Ian Waudby-Smith, David Arbour, Ritwik Sinha, Edward H. Kennedy, Aaditya Ramdas

Our methods take the form of confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time.

no code implementations • 4 Mar 2021 • Aleksandr Podkopaev, Aaditya Ramdas

Piggybacking on recent progress in addressing label shift (for better prediction), we examine the right way to achieve UQ by reweighting the aforementioned conformal and calibration procedures whenever some unlabeled data from the target distribution is available.

no code implementations • 22 Feb 2021 • Jinjin Tian, Xu Chen, Eugene Katsevich, Jelle Goeman, Aaditya Ramdas

Simultaneous inference allows for the exploration of data while deciding on criteria for proclaiming discoveries.

Statistics Theory, Methodology

no code implementations • 18 Feb 2021 • Nikos Karampatziakis, Paul Mineiro, Aaditya Ramdas

We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting.

no code implementations • 10 Nov 2020 • Ilmun Kim, Aaditya Ramdas

This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d_n/n \approx 0.2$?

1 code implementation • 26 Oct 2020 • Ziyu Xu, Aaditya Ramdas

This statistical advance is enabled by the development of new algorithmic ideas: earlier algorithms are more "static" while our new ones allow for the dynamical adjustment of testing levels based on the amount of wealth the algorithm has accumulated.

2 code implementations • 19 Oct 2020 • Ian Waudby-Smith, Aaditya Ramdas

This paper derives confidence intervals (CI) and time-uniform confidence sequences (CS) for the classical problem of estimating an unknown mean from bounded observations.
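
For context, the classical fixed-time baseline that these methods tighten is the Hoeffding interval. Below is a minimal sketch assuming observations known to lie in [0, 1]; the paper's betting-based intervals are substantially tighter than this:

```python
import numpy as np

def hoeffding_ci(x, alpha=0.05):
    """Classical fixed-time Hoeffding confidence interval for the mean
    of observations known to lie in [0, 1]: half-width sqrt(log(2/a)/(2n))."""
    n = len(x)
    half_width = np.sqrt(np.log(2 / alpha) / (2 * n))
    m = float(np.mean(x))
    return max(0.0, m - half_width), min(1.0, m + half_width)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=1000)  # true mean 0.5
lo, hi = hoeffding_ci(x)
```

Note that this interval is valid only at the fixed sample size n; the confidence sequences of the paper are instead valid simultaneously at every n.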

1 code implementation • NeurIPS 2020 • Chirag Gupta, Aleksandr Podkopaev, Aaditya Ramdas

We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting, that is without making any distributional assumptions on the data.

1 code implementation • 15 Jun 2020 • Eugene Katsevich, Aaditya Ramdas

Conditional independence testing is an important problem, yet provably hard without assumptions.

1 code implementation • 12 Jun 2020 • Willie Neiswanger, Aaditya Ramdas

There is a necessary cost to achieving robustness: if the prior was correct, posterior GP bands are narrower than our CS.

3 code implementations • NeurIPS 2020 • Ian Waudby-Smith, Aaditya Ramdas

We then present Hoeffding- and empirical-Bernstein-type time-uniform CSs and fixed-time confidence intervals for sampling WoR, which improve on previous bounds in the literature and explicitly quantify the benefit of WoR sampling.

1 code implementation • 6 Jun 2020 • Molei Liu, Eugene Katsevich, Lucas Janson, Aaditya Ramdas

We propose the distilled CRT, a novel approach to using state-of-the-art machine learning algorithms in the CRT while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the CRT's statistical guarantees without suffering the usual computational expense.

Methodology

1 code implementation • 12 May 2020 • Eugene Katsevich, Aaditya Ramdas

For testing conditional independence (CI) of a response Y and a predictor X given covariates Z, the recently introduced model-X (MX) framework has been the subject of active methodological research, especially in the context of MX knockoffs and their successful application to genome-wide association studies.

1 code implementation • ICML 2020 • Boyan Duan, Aaditya Ramdas, Larry Wasserman

We propose a method for multiple hypothesis testing with familywise error rate (FWER) control, called the i-FWER test.

Methodology

no code implementations • ICML 2020 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis that has recently received considerable attention in the literature.

no code implementations • 24 Dec 2019 • Larry Wasserman, Aaditya Ramdas, Sivaraman Balakrishnan

Constructing tests and confidence sets for such models is notoriously difficult.

no code implementations • 11 Oct 2019 • Tijana Zrnic, Daniel L. Jiang, Aaditya Ramdas, Michael I. Jordan

One important partition of algorithms for controlling the false discovery rate (FDR) in multiple testing is into offline and online algorithms.
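
The canonical offline member of this partition is the Benjamini-Hochberg step-up procedure; a minimal sketch with illustrative p-values is below (online algorithms such as LORD instead process p-values one at a time, without seeing the full batch):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest p-values,
    where k is the largest index with p_(k) <= alpha * k / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = np.asarray(pvals)[order]
    below = np.nonzero(sorted_p <= alpha * np.arange(1, m + 1) / m)[0]
    if len(below) == 0:
        return np.zeros(m, dtype=bool)
    k = below[-1] + 1
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# ten illustrative p-values: three strong signals among seven nulls
pvals = [0.001, 0.004, 0.009, 0.31, 0.42, 0.55, 0.6, 0.71, 0.85, 0.93]
rej = benjamini_hochberg(pvals, alpha=0.05)
```

Here BH rejects exactly the three small p-values, since 0.009 <= 0.05 * 3/10 but 0.31 > 0.05 * 4/10.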

1 code implementation • 10 Oct 2019 • Jinjin Tian, Aaditya Ramdas

Biological research often involves testing a growing number of null hypotheses as new data is accumulated over time.

no code implementations • 2 Aug 2019 • Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas

We derive bounds on the path length $\zeta$ of gradient descent (GD) and gradient flow (GF) curves for various classes of smooth convex and nonconvex functions.

4 code implementations • 24 Jun 2019 • Steven R. Howard, Aaditya Ramdas

We propose confidence sequences -- sequences of confidence intervals which are valid uniformly over time -- for quantiles of any distribution over a complete, fully-ordered set, based on a stream of i.i.d. observations.

no code implementations • NeurIPS 2019 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean.

1 code implementation • NeurIPS 2019 • Jinjin Tian, Aaditya Ramdas

Major internet companies routinely perform tens of thousands of A/B tests each year.

no code implementations • 8 May 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

This paper introduces the jackknife+, which is a novel method for constructing predictive confidence intervals.
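
The jackknife+ construction is short enough to sketch directly. The version below uses a deliberately trivial mean predictor as the fitting algorithm (the method works with any regressor; this choice is only for a self-contained illustration):

```python
import numpy as np

def fit_mean(y):
    """Trivially simple regressor: predicts the training mean. Stands in
    for an arbitrary fitting algorithm in the jackknife+ construction."""
    return float(np.mean(y))

def jackknife_plus_interval(y, alpha=0.1):
    """Jackknife+ predictive interval for a new draw from the same
    distribution, built from leave-one-out fits and residuals."""
    n = len(y)
    lo_scores, hi_scores = [], []
    for i in range(n):
        mu_i = fit_mean(np.delete(y, i))   # leave-one-out fit
        r_i = abs(y[i] - mu_i)             # leave-one-out residual
        lo_scores.append(mu_i - r_i)
        hi_scores.append(mu_i + r_i)
    k = int(np.ceil((1 - alpha) * (n + 1)))
    lo = np.sort(lo_scores)[n - k]         # floor(alpha*(n+1))-th smallest
    hi = np.sort(hi_scores)[k - 1]         # ceil((1-alpha)*(n+1))-th smallest
    return lo, hi

rng = np.random.default_rng(2)
y = rng.normal(10.0, 1.0, size=100)
lo, hi = jackknife_plus_interval(y)
```

The resulting interval covers a fresh draw from the same distribution with probability at least 1 - 2*alpha, without distributional assumptions.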

Methodology

1 code implementation • NeurIPS 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

We extend conformal prediction methodology beyond the case of exchangeable data.

Methodology

no code implementations • 24 Mar 2019 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani

We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.
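
As a baseline for the extension described, the classical two-sample KS statistic can be computed directly from the two empirical CDFs; the paper's variants reweight this comparison to gain sensitivity in the tails (reweighting not shown):

```python
import numpy as np

def ks_two_sample_stat(x, y):
    """Classical two-sample Kolmogorov-Smirnov statistic:
    sup over t of |F_x(t) - F_y(t)|, evaluated at all pooled sample points."""
    data = np.concatenate([x, y])
    cdf_x = np.searchsorted(np.sort(x), data, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), data, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

rng = np.random.default_rng(3)
same = ks_two_sample_stat(rng.normal(size=500), rng.normal(size=500))
diff = ks_two_sample_stat(rng.normal(size=500), rng.normal(2.0, 1.0, size=500))
```

Under the null the statistic is small (order n^{-1/2}); under a mean shift it approaches the sup-distance between the two population CDFs.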

no code implementations • 12 Mar 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani

We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.

Statistics Theory

no code implementations • 2 Feb 2019 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

For example, when is it consistent, how large is its bias, and can we bound its mean squared error?

2 code implementations • 12 Dec 2018 • Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan

We consider the problem of asynchronous online testing, aimed at providing control of the false discovery rate (FDR) during a continual stream of data collection and testing, where each test may be a sequential test that can start and stop at arbitrary times.

4 code implementations • 18 Oct 2018 • Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, Jasjeet Sekhon

A confidence sequence is a sequence of confidence intervals that is uniformly valid over an unbounded time horizon.

Statistics Theory, Probability, Methodology

1 code implementation • 19 Mar 2018 • Eugene Katsevich, Aaditya Ramdas

In this paper, we show that the entire path of rejection sets considered by a variety of existing FDR procedures (like BH, knockoffs, and many others) can be endowed with simultaneous high-probability bounds on FDP.

Statistics Theory

1 code implementation • ICML 2018 • Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan

However, unlike older methods, SAFFRON's threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses.

1 code implementation • NeurIPS 2017 • Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed.

1 code implementation • 29 Sep 2017 • Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan

We propose a linear-time, single-pass, top-down algorithm for multiple testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and edges specify a partial ordering in which hypotheses must be tested.

1 code implementation • NeurIPS 2017 • Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright

We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time.

no code implementations • 18 Mar 2017 • Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, Michael I. Jordan

There is a significant literature on methods for incorporating knowledge into multiple testing procedures so as to improve their power and precision.

1 code implementation • 14 Nov 2016 • Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton

In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples.

no code implementations • 6 May 2016 • Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright

These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants.

no code implementations • 25 Mar 2016 • Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht

This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features.

no code implementations • 2 Mar 2016 • Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan

Together, these properties show that $p = d+1$ is an optimal choice, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining maximally sensitive to $P$.

no code implementations • 6 Feb 2016 • Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman

We prove two results that hold for all classifiers in any dimensions: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d, n \to \infty$, then (a) the permutation-based test is consistent (has power approaching to one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.
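
The permutation-based classifier two-sample test referred to here can be sketched as follows, with a nearest-centroid classifier standing in for an arbitrary classifier (any classifier can be substituted without changing the calibration logic):

```python
import numpy as np

def accuracy(train_x, train_y, test_x, test_y):
    """Nearest-centroid classifier accuracy -- a stand-in for an
    arbitrary classifier in a classifier-based two-sample test."""
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    pred = (np.linalg.norm(test_x - c1, axis=1)
            < np.linalg.norm(test_x - c0, axis=1)).astype(int)
    return float(np.mean(pred == test_y))

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """Two-sample test: does a classifier separate samples of P from Q
    better than chance? Calibrated by permuting the sample labels."""
    rng = np.random.default_rng(seed)
    data = np.vstack([x, y])
    labels = np.r_[np.zeros(len(x), int), np.ones(len(y), int)]
    n = len(data)
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2:]
    obs = accuracy(data[tr], labels[tr], data[te], labels[te])
    null = [accuracy(data[tr], p[tr], data[te], p[te])
            for p in (rng.permutation(labels) for _ in range(n_perm))]
    return (1 + sum(a >= obs for a in null)) / (1 + n_perm)

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=(200, 5))
y = rng.normal(1.0, 1.0, size=(200, 5))   # mean-shifted alternative
p = permutation_pvalue(x, y)
```

Under the null the permuted accuracies bracket the observed one, so the p-value is approximately uniform; under this mean-shift alternative the classifier beats chance and the p-value is small.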

no code implementations • 23 Jan 2016 • Aaditya Ramdas, David Isenberg, Aarti Singh, Larry Wasserman

Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i, Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not.

no code implementations • 10 Dec 2015 • Rina Foygel Barber, Aaditya Ramdas

In many practical applications of multiple hypothesis testing using the False Discovery Rate (FDR), the given hypotheses can be naturally partitioned into groups, and one may not only want to control the number of false discoveries (wrongly rejected null hypotheses), but also the number of falsely discovered groups of hypotheses (we say a group is falsely discovered if at least one hypothesis within that group is rejected, when in reality the group contains only nulls).

1 code implementation • 8 Sep 2015 • Aaditya Ramdas, Nicolas Garcia, Marco Cuturi

In this work, our central object is the Wasserstein distance, as we form a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves, to multivariate tests involving energy statistics and kernel based maximum mean discrepancy.
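
One link in this chain is concrete enough to state in a few lines: for univariate samples of equal size, the empirical p-Wasserstein distance reduces to matching order statistics, which is precisely the quantile-function view underlying PP/QQ plots:

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    """Empirical p-Wasserstein distance between two equal-size univariate
    samples: the optimal coupling simply matches order statistics."""
    assert len(x) == len(y)
    return float(np.mean(np.abs(np.sort(x) - np.sort(y)) ** p) ** (1 / p))

rng = np.random.default_rng(5)
a = rng.normal(0.0, 1.0, size=2000)
b = rng.normal(0.0, 1.0, size=2000)
c = rng.normal(3.0, 1.0, size=2000)
near = wasserstein_1d(a, b)   # small: identical distributions
far = wasserstein_1d(a, c)    # approximately the mean shift of 3
```

For a pure location shift, W_1 equals the size of the shift, which the sorted-sample computation recovers up to sampling error.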

no code implementations • 4 Aug 2015 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.

1 code implementation • NeurIPS 2015 • Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, Arthur Gretton

The new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the (non-smoothed) empirical characteristic functions, while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests.

no code implementations • 10 Jun 2015 • Akshay Balsubramani, Aaditya Ramdas

It is novel in several ways: (a) it takes linear time and constant space to compute on the fly, (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints up to a small factor, and (c) it accesses only as many samples as are required -- its stopping time adapts to the unknown difficulty of the problem.

no code implementations • 15 May 2015 • Aaditya Ramdas, Aarti Singh

Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning, achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.

no code implementations • 15 May 2015 • Aaditya Ramdas, Javier Peña

This allows us to give guarantees for a primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|}, \tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.

no code implementations • 15 May 2015 • Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman

For larger $\sigma$, the \textit{unflattening} of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise \textit{appears} to be beneficial.

no code implementations • 23 Nov 2014 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

The current literature is split into two kinds of tests -- those which are consistent without any assumptions about how the distributions may differ (\textit{general} alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (\textit{mean-shift} alternatives).

no code implementations • 20 Jun 2014 • Aaditya Ramdas

This paper is about randomized iterative algorithms for solving a linear system of equations $X \beta = y$ in different settings.
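
Randomized Kaczmarz is one canonical algorithm in this family; the sketch below is illustrative only (the paper analyzes several such row- and column-based methods across settings):

```python
import numpy as np

def randomized_kaczmarz(X, y, n_iters=5000, seed=0):
    """Randomized Kaczmarz for X beta = y: at each step, project the
    current iterate onto the solution hyperplane of one row, chosen
    with probability proportional to its squared norm."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    row_norms = np.sum(X ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    beta = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=probs)
        beta += (y[i] - X[i] @ beta) / row_norms[i] * X[i]
    return beta

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 10))
beta_true = rng.normal(size=10)
y = X @ beta_true                     # consistent overdetermined system
beta_hat = randomized_kaczmarz(X, y)
err = float(np.linalg.norm(beta_hat - beta_true))
```

For a consistent system, the iterates converge linearly in expectation at a rate governed by the scaled condition number of X.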

no code implementations • 20 Jun 2014 • Aaditya Ramdas, Javier Peña

Given a matrix $A$, a linear feasibility problem (of which linear classification is a special case) aims to find a solution to a primal problem $w: A^Tw > \textbf{0}$ or a certificate for the dual problem which is a probability distribution $p: Ap = \textbf{0}$.

no code implementations • 9 Jun 2014 • Sashank J. Reddi, Aaditya Ramdas, Barnabás Póczos, Aarti Singh, Larry Wasserman

This paper is about two related decision theoretic problems, nonparametric two-sample testing and independence testing.

4 code implementations • 9 Jun 2014 • Aaditya Ramdas, Ryan J. Tibshirani

This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.

no code implementations • 7 Jun 2014 • Aaditya Ramdas, Leila Wehbe

This paper deals with the problem of nonparametric independence testing, a fundamental decision-theoretic problem that asks if two arbitrary (possibly multivariate) random variables $X, Y$ are independent or not, a question that comes up in many fields like causality and neuroscience.
