Search Results for author: Aaditya Ramdas

Found 90 papers, 37 papers with code

Online Control of the False Coverage Rate and False Sign Rate

no code implementations ICML 2020 Asaf Weinstein, Aaditya Ramdas

Here, we consider the general problem of FCR control in the online setting, where there is an infinite sequence of fixed unknown parameters ordered by time.

Prediction Intervals

Semiparametric Efficient Inference in Adaptive Experiments

no code implementations 30 Nov 2023 Thomas Cook, Alan Mishler, Aaditya Ramdas

This central limit theorem enables efficient inference at fixed sample sizes.


Time-Uniform Confidence Spheres for Means of Random Vectors

no code implementations 14 Nov 2023 Ben Chugg, Hongjian Wang, Aaditya Ramdas

We derive and study time-uniform confidence spheres - termed confidence sphere sequences (CSSs) - which contain the mean of random vectors with high probability simultaneously across all sample sizes.


Online multiple testing with e-values

no code implementations 10 Nov 2023 Ziyu Xu, Aaditya Ramdas

A scientist tests a continuous stream of hypotheses over time in the course of her investigation -- she does not test a predetermined, fixed number of hypotheses.


Deep anytime-valid hypothesis testing

no code implementations 30 Oct 2023 Teodora Pandeva, Patrick Forré, Aaditya Ramdas, Shubhanshu Shekhar

We propose a general framework for constructing powerful, sequential hypothesis tests for a large class of nonparametric testing problems.

Adversarial Robustness, Two-sample testing, +1

Anytime-valid t-tests and confidence sequences for Gaussian means with unknown variance

no code implementations 5 Oct 2023 Hongjian Wang, Aaditya Ramdas

These are respectively obtained by swapping Lai's flat mixture for a Gaussian mixture, and swapping the right Haar mixture over $\sigma$ with the maximum likelihood estimate under the null, as done in universal inference.


On the near-optimality of betting confidence sets for bounded means

no code implementations 2 Oct 2023 Shubhanshu Shekhar, Aaditya Ramdas

Constructing nonasymptotic confidence intervals (CIs) for the mean of a univariate distribution from independent and identically distributed (i.i.d.)

Reducing sequential change detection to sequential estimation

no code implementations 16 Sep 2023 Shubhanshu Shekhar, Aaditya Ramdas

We consider the problem of sequential change detection, where the goal is to design a scheme for detecting any changes in a parameter or functional $\theta$ of the data stream distribution that has small detection delay, but guarantees control on the frequency of false alarms in the absence of changes.

Change Detection

Differentially Private Conditional Independence Testing

no code implementations 11 Jun 2023 Iden Kalemaj, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

We provide theoretical guarantees on the performance of our tests and validate them empirically.


Auditing Fairness by Betting

1 code implementation NeurIPS 2023 Ben Chugg, Santiago Cortes-Gomez, Bryan Wilder, Aaditya Ramdas

Whereas previous work relies on a fixed-sample size, our methods are sequential and allow for the continuous monitoring of incoming data, making them highly amenable to tracking the fairness of real-world systems.

Fairness, valid

Counterfactually Comparing Abstaining Classifiers

1 code implementation NeurIPS 2023 Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas

When evaluating black-box abstaining classifier(s), however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions.

Causal Inference, counterfactual, +1

Risk-limiting Financial Audits via Weighted Sampling without Replacement

no code implementations 8 May 2023 Shubhanshu Shekhar, Ziyu Xu, Zachary C. Lipton, Pierre J. Liang, Aaditya Ramdas

Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item.

Online Platt Scaling with Calibeating

no code implementations 28 Apr 2023 Chirag Gupta, Aaditya Ramdas

We present an online post-hoc calibration method, called Online Platt Scaling (OPS), which combines the Platt scaling technique with online logistic regression.
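The core recursion behind this kind of online calibrator is tiny. The sketch below is only the plain online-logistic-regression half of the idea (a fixed-learning-rate SGD fit of the Platt map sigma(a*s + b)); the paper's full OPS method additionally brings online-learning guarantees and a "calibeating" correction, neither of which is implemented here.

```python
import math

def platt_online(scores_labels, lr=0.1):
    """Online Platt scaling, minimal version: calibrate raw scores s into
    probabilities sigma(a*s + b), updating (a, b) with one SGD step on the
    logistic loss per observed (score, label) pair.  This is only the plain
    online-logistic-regression recursion, not the paper's OPS method."""
    a, b = 1.0, 0.0
    preds = []
    for s, y in scores_labels:
        p = 1.0 / (1.0 + math.exp(-(a * s + b)))  # calibrated prediction
        preds.append(p)
        g = p - y                                 # d(log-loss)/d(logit)
        a -= lr * g * s
        b -= lr * g
    return preds, (a, b)
```

On a stream where positive scores always carry label 1, the calibrated probabilities drift toward the correct extremes as (a, b) adapt.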

The extended Ville's inequality for nonintegrable nonnegative supermartingales

no code implementations 3 Apr 2023 Hongjian Wang, Aaditya Ramdas

Following initial work by Robbins, we rigorously present an extended theory of nonnegative supermartingales, requiring neither integrability nor finiteness.

Sequential change detection via backward confidence sequences

no code implementations 6 Feb 2023 Shubhanshu Shekhar, Aaditya Ramdas

We present a simple reduction from sequential estimation to sequential changepoint detection (SCD).

Change Detection

Huber-Robust Confidence Sequences

no code implementations 23 Jan 2023 Hongjian Wang, Aaditya Ramdas

Confidence sequences are confidence intervals that can be sequentially tracked, and are valid at arbitrary data-dependent stopping times.


A Permutation-Free Kernel Independence Test

no code implementations 18 Dec 2022 Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas

In nonparametric independence testing, we observe i.i.d. data $\{(X_i, Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$.


Sequential Kernelized Independence Testing

no code implementations 14 Dec 2022 Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas

Independence testing is a classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data.


A Permutation-free Kernel Two-Sample Test

no code implementations 27 Nov 2022 Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas

The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution.

Test, Two-sample testing, +1
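To see why the abstract mentions degeneracy: the classical unbiased kernel-MMD statistic (sketched below for 1-D samples with a Gaussian kernel, an arbitrary illustrative choice of kernel and bandwidth) has an intractable null distribution, which is usually handled by permutation. The paper's cross-statistic variant avoids that step; this snippet is only the classical estimator, not the paper's test.

```python
import math

def mmd2_unbiased(xs, ys, bandwidth=1.0):
    """Unbiased estimate of the squared kernel MMD between two 1-D samples,
    Gaussian kernel.  Within-sample sums exclude the diagonal, so the
    estimate can be slightly negative when the samples come from the
    same distribution."""
    k = lambda u, v: math.exp(-((u - v) ** 2) / (2.0 * bandwidth ** 2))
    m, n = len(xs), len(ys)
    xx = sum(k(a, b) for i, a in enumerate(xs)
             for j, b in enumerate(xs) if i != j) / (m * (m - 1))
    yy = sum(k(a, b) for i, a in enumerate(ys)
             for j, b in enumerate(ys) if i != j) / (n * (n - 1))
    xy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return xx + yy - 2.0 * xy
```

Well-separated samples give a large positive value, while overlapping samples give a value near (possibly just below) zero.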

Anytime-valid off-policy inference for contextual bandits

1 code implementation 19 Oct 2022 Ian Waudby-Smith, Lili Wu, Aaditya Ramdas, Nikos Karampatziakis, Paul Mineiro

Importantly, our methods can be employed while the original experiment is still running (that is, not necessarily post-hoc), when the logging policy may be itself changing (due to learning), and even if the context distributions are a highly dependent time-series (such as if they are drifting over time).

counterfactual, Multi-Armed Bandits, +3

QuTE: decentralized multiple testing on sensor networks with false discovery rate control

no code implementations 9 Oct 2022 Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan

We consider the setting where distinct agents reside on the nodes of an undirected graph, and each agent possesses p-values corresponding to one or more hypotheses local to its node.

Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints

no code implementations 15 Jun 2022 Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas, Ryan Rogers

In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism.

Faster online calibration without randomization: interval forecasts and the power of two choices

no code implementations 27 Apr 2022 Chirag Gupta, Aaditya Ramdas

We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.

Fully Adaptive Composition in Differential Privacy

no code implementations 10 Mar 2022 Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu

However, these results require that the privacy parameters of all algorithms be fixed before interacting with the data.

Nonparametric extensions of randomized response for private confidence sets

1 code implementation 17 Feb 2022 Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas

This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).

Data fission: splitting a single data point

no code implementations 21 Dec 2021 James Leiner, Boyan Duan, Larry Wasserman, Aaditya Ramdas

Rasines and Young (2022) offers an alternative route of accomplishing this task through randomization of $X$ with additive Gaussian noise, which enables post-selection inference in finite samples for Gaussian distributed data and asymptotically for non-Gaussian additive models.

Additive models, Bayesian Inference

Best Arm Identification under Additive Transfer Bandits

no code implementations 8 Dec 2021 Ojash Neopane, Aaditya Ramdas, Aarti Singh

We consider a variant of the best arm identification (BAI) problem in multi-armed bandits (MAB) in which there are two sets of arms (source and target), and the objective is to determine the best target arm while only pulling source arms.

Multi-Armed Bandits, Transfer Learning

Universal Inference Meets Random Projections: A Scalable Test for Log-concavity

2 code implementations 17 Nov 2021 Robin Dunn, Aditya Gangrade, Larry Wasserman, Aaditya Ramdas

Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.

Test, valid

Tracking the risk of a deployed model and detecting harmful distribution shifts

no code implementations ICLR 2022 Aleksandr Podkopaev, Aaditya Ramdas

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain -- but not all -- distribution shifts could result in significant performance degradation.

Comparing Sequential Forecasters

1 code implementation 30 Sep 2021 Yo Joong Choe, Aaditya Ramdas

Consider two forecasters, each making a single prediction for a sequence of events over time.


A unified framework for bandit multiple testing

1 code implementation NeurIPS 2021 Ziyu Xu, Ruodu Wang, Aaditya Ramdas

In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries), while only mistakenly identifying a few uninteresting ones (false discoveries).


Martingale Methods for Sequential Estimation of Convex Functionals and Divergences

1 code implementation 16 Mar 2021 Tudor Manole, Aaditya Ramdas

We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances.


Time-uniform central limit theory and asymptotic confidence sequences

2 code implementations 11 Mar 2021 Ian Waudby-Smith, David Arbour, Ritwik Sinha, Edward H. Kennedy, Aaditya Ramdas

CSs provide valid inference at arbitrary stopping times, incurring no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance.

Causal Inference, valid

Distribution-free uncertainty quantification for classification under label shift

no code implementations 4 Mar 2021 Aleksandr Podkopaev, Aaditya Ramdas

Piggybacking on recent progress in addressing label shift (for better prediction), we examine the right way to achieve UQ by reweighting the aforementioned conformal and calibration procedures whenever some unlabeled data from the target distribution is available.

Classification, Conformal Prediction, +1

Large-scale simultaneous inference under dependence

1 code implementation 22 Feb 2021 Jinjin Tian, Xu Chen, Eugene Katsevich, Jelle Goeman, Aaditya Ramdas

Simultaneous inference allows for the exploration of data while deciding on criteria for proclaiming discoveries.

Statistics Theory, Methodology

Off-policy Confidence Sequences

no code implementations 18 Feb 2021 Nikos Karampatziakis, Paul Mineiro, Aaditya Ramdas

We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting.

Off-policy evaluation, valid

Dimension-agnostic inference using cross U-statistics

no code implementations 10 Nov 2020 Ilmun Kim, Aaditya Ramdas

Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity.

Two-sample testing

Dynamic Algorithms for Online Multiple Testing

1 code implementation 26 Oct 2020 Ziyu Xu, Aaditya Ramdas

This statistical advance is enabled by the development of new algorithmic ideas: earlier algorithms are more "static" while our new ones allow for the dynamical adjustment of testing levels based on the amount of wealth the algorithm has accumulated.

Estimating means of bounded random variables by betting

3 code implementations 19 Oct 2020 Ian Waudby-Smith, Aaditya Ramdas

This paper derives confidence intervals (CI) and time-uniform confidence sequences (CS) for the classical problem of estimating an unknown mean from bounded observations.
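The betting idea is concrete enough to sketch: if m is the true mean, a gambler's wealth from repeated fair bets is a nonnegative martingale, so by Ville's inequality it rarely grows large, and any m whose wealth process explodes can be excluded from the interval. The toy version below uses a fixed betting fraction `lam`; the paper develops adaptive betting strategies that yield much tighter intervals.

```python
def betting_ci(xs, alpha=0.05, lam=0.5, grid=201):
    """Confidence interval for the mean of [0,1]-valued data by betting.
    For each candidate mean m, track the capital processes
        K_t^+ = prod(1 + lam*(x_t - m)),  K_t^- = prod(1 - lam*(x_t - m));
    their average is a nonnegative martingale when m is the true mean, so
    by Ville's inequality it exceeds 1/alpha with probability <= alpha.
    The CI keeps every grid point m that is never rejected."""
    kept = []
    for i in range(grid):
        m = i / (grid - 1)
        k_plus = k_minus = 1.0
        rejected = False
        for x in xs:
            k_plus *= 1.0 + lam * (x - m)   # factors stay >= 0 for |lam| <= 1
            k_minus *= 1.0 - lam * (x - m)
            if 0.5 * (k_plus + k_minus) >= 1.0 / alpha:
                rejected = True
                break
        if not rejected:
            kept.append(m)
    return min(kept), max(kept)
```

Stopping the scan as soon as the wealth crosses 1/alpha is what makes the same construction usable as a confidence sequence at any stopping time.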

Distribution-free binary classification: prediction sets, confidence intervals and calibration

1 code implementation NeurIPS 2020 Chirag Gupta, Aleksandr Podkopaev, Aaditya Ramdas

We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting, that is without making any distributional assumptions on the data.

Binary Classification, Classification, +1

The leave-one-covariate-out conditional randomization test

1 code implementation 15 Jun 2020 Eugene Katsevich, Aaditya Ramdas

Conditional independence testing is an important problem, yet provably hard without assumptions.

Test, valid

Uncertainty quantification using martingales for misspecified Gaussian processes

1 code implementation 12 Jun 2020 Willie Neiswanger, Aaditya Ramdas

There is a necessary cost to achieving robustness: if the prior was correct, posterior GP bands are narrower than our CS.

Bayesian Optimization, Gaussian Processes, +1

Confidence sequences for sampling without replacement

3 code implementations NeurIPS 2020 Ian Waudby-Smith, Aaditya Ramdas

We then present Hoeffding- and empirical-Bernstein-type time-uniform CSs and fixed-time confidence intervals for sampling WoR, which improve on previous bounds in the literature and explicitly quantify the benefit of WoR sampling.

Fast and Powerful Conditional Randomization Testing via Distillation

1 code implementation 6 Jun 2020 Molei Liu, Eugene Katsevich, Lucas Janson, Aaditya Ramdas

We propose the distilled CRT, a novel approach to using state-of-the-art machine learning algorithms in the CRT while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the CRT's statistical guarantees without suffering the usual computational expense.


On the power of conditional independence testing under model-X

1 code implementation 12 May 2020 Eugene Katsevich, Aaditya Ramdas

For testing conditional independence (CI) of a response Y and a predictor X given covariates Z, the recently introduced model-X (MX) framework has been the subject of active methodological research, especially in the context of MX knockoffs and their successful application to genome-wide association studies.

Causal Inference, LEMMA, +1

Familywise Error Rate Control by Interactive Unmasking

1 code implementation ICML 2020 Boyan Duan, Aaditya Ramdas, Larry Wasserman

We propose a method for multiple hypothesis testing with familywise error rate (FWER) control, called the i-FWER test.


On conditional versus marginal bias in multi-armed bandits

no code implementations ICML 2020 Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis that has recently received considerable attention in the literature.

Multi-Armed Bandits

Universal Inference

no code implementations 24 Dec 2019 Larry Wasserman, Aaditya Ramdas, Sivaraman Balakrishnan

Constructing tests and confidence sets for such models is notoriously difficult.

Test, valid

The Power of Batching in Multiple Hypothesis Testing

no code implementations 11 Oct 2019 Tijana Zrnic, Daniel L. Jiang, Aaditya Ramdas, Michael I. Jordan

One important partition of algorithms for controlling the false discovery rate (FDR) in multiple testing is into offline and online algorithms.

Two-sample testing

Online control of the familywise error rate

1 code implementation 10 Oct 2019 Jinjin Tian, Aaditya Ramdas

Biological research often involves testing a growing number of null hypotheses as new data is accumulated over time.

Path Length Bounds for Gradient Descent and Flow

no code implementations 2 Aug 2019 Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas

We derive bounds on the path length $\zeta$ of gradient descent (GD) and gradient flow (GF) curves for various classes of smooth convex and nonconvex functions.

Sequential estimation of quantiles with applications to A/B-testing and best-arm identification

4 code implementations 24 Jun 2019 Steven R. Howard, Aaditya Ramdas

We propose confidence sequences -- sequences of confidence intervals which are valid uniformly over time -- for quantiles of any distribution over a complete, fully-ordered set, based on a stream of i.i.d.

Test, valid

Are sample means in multi-armed bandits positively or negatively biased?

no code implementations NeurIPS 2019 Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo

It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean.

Multi-Armed Bandits, Selection bias

Predictive inference with the jackknife+

no code implementations 8 May 2019 Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

This paper introduces the jackknife+, which is a novel method for constructing predictive confidence intervals.
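The jackknife+ construction itself is short enough to sketch. The snippet below pairs it with a toy 1-nearest-neighbor regressor on 1-D features; the regressor is our illustrative choice, the method applies to any fitting algorithm.

```python
import math

def jackknife_plus(x_train, y_train, x_new, alpha=0.1):
    """Jackknife+ predictive interval: for each i, refit with point i held
    out, record the leave-one-out residual R_i and prediction mu_{-i}(x_new),
    then take order statistics of {mu_{-i}(x_new) -/+ R_i}."""
    n = len(x_train)

    def fit_predict(exclude, x):
        # toy 1-NN prediction using all training points except index `exclude`
        _, j = min((abs(x_train[j] - x), j) for j in range(n) if j != exclude)
        return y_train[j]

    lows, highs = [], []
    for i in range(n):
        r = abs(y_train[i] - fit_predict(i, x_train[i]))  # LOO residual
        mu = fit_predict(i, x_new)                        # LOO prediction
        lows.append(mu - r)
        highs.append(mu + r)
    lows.sort()
    highs.sort()
    k_lo = max(math.floor(alpha * (n + 1)) - 1, 0)
    k_hi = min(math.ceil((1 - alpha) * (n + 1)) - 1, n - 1)
    return lows[k_lo], highs[k_hi]
```

Refitting n times is the price for the method's distribution-free coverage guarantee (which holds at level 1 - 2*alpha in the worst case).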


Conformal Prediction Under Covariate Shift

1 code implementation NeurIPS 2019 Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani

We extend conformal prediction methodology beyond the case of exchangeable data.


A Higher-Order Kolmogorov-Smirnov Test

no code implementations 24 Mar 2019 Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani

We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.


The limits of distribution-free conditional predictive inference

no code implementations 12 Mar 2019 Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani

We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.

Statistics Theory

Asynchronous Online Testing of Multiple Hypotheses

2 code implementations 12 Dec 2018 Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan

We consider the problem of asynchronous online testing, aimed at providing control of the false discovery rate (FDR) during a continual stream of data collection and testing, where each test may be a sequential test that can start and stop at arbitrary times.


Time-uniform, nonparametric, nonasymptotic confidence sequences

4 code implementations 18 Oct 2018 Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, Jasjeet Sekhon

A confidence sequence is a sequence of confidence intervals that is uniformly valid over an unbounded time horizon.

Statistics Theory, Probability, Methodology
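As a baseline illustrating the definition (and emphatically not the paper's boundaries, which are far tighter), one can make Hoeffding intervals uniformly valid over time with a simple union bound over t:

```python
import math

def hoeffding_cs(xs, alpha=0.05):
    """Crude time-uniform confidence sequence for the mean of [0,1]-valued
    observations: run a Hoeffding interval at each time t at level
    alpha_t = alpha / (t*(t+1)), so sum_t alpha_t <= alpha and the
    intervals hold simultaneously for all t by a union bound; return the
    running intersection."""
    lo, hi, s = 0.0, 1.0, 0.0
    out = []
    for t, x in enumerate(xs, start=1):
        s += x
        alpha_t = alpha / (t * (t + 1))
        w = math.sqrt(math.log(2.0 / alpha_t) / (2.0 * t))
        lo = max(lo, s / t - w)   # running intersection keeps the CS nested
        hi = min(hi, s / t + w)
        out.append((lo, hi))
    return out
```

Because every interval in the sequence is valid simultaneously, one may stop at any data-dependent time and still report a valid interval, exactly the "no penalty for peeking" property described in these abstracts.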

Towards "simultaneous selective inference": post-hoc bounds on the false discovery proportion

1 code implementation 19 Mar 2018 Eugene Katsevich, Aaditya Ramdas

In this paper, we show that the entire path of rejection sets considered by a variety of existing FDR procedures (like BH, knockoffs, and many others) can be endowed with simultaneous high-probability bounds on FDP.

Statistics Theory

SAFFRON: an adaptive algorithm for online control of the false discovery rate

1 code implementation ICML 2018 Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan

However, unlike older methods, SAFFRON's threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses.

Online control of the false discovery rate with decaying memory

1 code implementation NeurIPS 2017 Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan

In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed.


DAGGER: A sequential algorithm for FDR control on DAGs

1 code implementation 29 Sep 2017 Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan

We propose a linear-time, single-pass, top-down algorithm for multiple testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and edges specify a partial ordering in which hypotheses must be tested.

Model Selection

A framework for Multi-A(rmed)/B(andit) testing with online FDR control

1 code implementation NeurIPS 2017 Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright

We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time.

Test, valid

A unified treatment of multiple testing with prior knowledge using the p-filter

no code implementations 18 Mar 2017 Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, Michael I. Jordan

There is a significant literature on methods for incorporating knowledge into multiple testing procedures so as to improve their power and precision.

Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy

1 code implementation 14 Nov 2016 Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton

In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples.


Function-Specific Mixing Times and Concentration Away from Equilibrium

no code implementations 6 May 2016 Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright

These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants.

On kernel methods for covariates that are rankings

no code implementations 25 Mar 2016 Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht

This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features.

regression, Test

Asymptotic behavior of $\ell_p$-based Laplacian regularization in semi-supervised learning

no code implementations 2 Mar 2016 Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan

Together, these properties show that $p = d+1$ is an optimal choice, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining maximally sensitive to $P$.

Classification accuracy as a proxy for two sample testing

no code implementations 6 Feb 2016 Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman

We prove two results that hold for all classifiers in any dimensions: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d, n \to \infty$, then (a) the permutation-based test is consistent (has power approaching to one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.

Classification, General Classification, +3

Minimax Lower Bounds for Linear Independence Testing

no code implementations 23 Jan 2016 Aaditya Ramdas, David Isenberg, Aarti Singh, Larry Wasserman

Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i, Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not.

Test, Two-sample testing

The p-filter: multi-layer FDR control for grouped hypotheses

no code implementations 10 Dec 2015 Rina Foygel Barber, Aaditya Ramdas

In many practical applications of multiple hypothesis testing using the False Discovery Rate (FDR), the given hypotheses can be naturally partitioned into groups, and one may not only want to control the number of false discoveries (wrongly rejected null hypotheses), but also the number of falsely discovered groups of hypotheses (we say a group is falsely discovered if at least one hypothesis within that group is rejected, when in reality the group contains only nulls).

Two-sample testing

On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests

1 code implementation 8 Sep 2015 Aaditya Ramdas, Nicolas Garcia, Marco Cuturi

In this work, our central object is the Wasserstein distance, as we form a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves, to multivariate tests involving energy statistics and kernel based maximum mean discrepancy.

Test, Two-sample testing, +1
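In one dimension the empirical Wasserstein-1 distance has a closed form via order statistics, which is what underlies the PP/QQ-plot connections in the abstract; a minimal sketch for equal-size samples:

```python
def wasserstein_1d(xs, ys):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples:
    in one dimension the optimal coupling matches order statistics, so W1
    is simply the average gap between sorted values."""
    assert len(xs) == len(ys), "sketch assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Unequal sample sizes require the quantile-function formulation instead of this direct pairing.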

Adaptivity and Computation-Statistics Tradeoffs for Kernel and Distance based High Dimensional Two Sample Testing

no code implementations 4 Aug 2015 Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.

Test, Two-sample testing

Fast Two-Sample Testing with Analytic Representations of Probability Measures

1 code implementation NeurIPS 2015 Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, Arthur Gretton

The new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the (non-smoothed) empirical characteristic functions, while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests.

Test, Two-sample testing, +1

Sequential Nonparametric Testing with the Law of the Iterated Logarithm

1 code implementation 10 Jun 2015 Akshay Balsubramani, Aaditya Ramdas

It is novel in several ways: (a) it takes linear time and constant space to compute on the fly, (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints up to a small factor, and (c) it accesses only as many samples as are required - its stopping time adapts to the unknown difficulty of the problem.

Test, Two-sample testing

Algorithmic Connections Between Active Learning and Stochastic Convex Optimization

no code implementations 15 May 2015 Aaditya Ramdas, Aarti Singh

Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning, achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.

Active Learning

Margins, Kernels and Non-linear Smoothed Perceptrons

no code implementations 15 May 2015 Aaditya Ramdas, Javier Peña

This allows us to give guarantees for a primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|}, \tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.

An Analysis of Active Learning With Uniform Feature Noise

no code implementations 15 May 2015 Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman

For larger $\sigma$, the \textit{unflattening} of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise \textit{appears} to be beneficial.

Active Learning, Binary Classification, +1

On the High-dimensional Power of Linear-time Kernel Two-Sample Testing under Mean-difference Alternatives

no code implementations 23 Nov 2014 Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman

The current literature is split into two kinds of tests - those which are consistent without any assumptions about how the distributions may differ (\textit{general} alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (\textit{mean-shift} alternatives).

Test, Two-sample testing

Rows vs Columns for Linear Systems of Equations - Randomized Kaczmarz or Coordinate Descent?

no code implementations 20 Jun 2014 Aaditya Ramdas

This paper is about randomized iterative algorithms for solving a linear system of equations $X \beta = y$ in different settings.
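As a concrete instance of the row-action algorithms compared here, randomized Kaczmarz (with the Strohmer-Vershynin row-sampling probabilities) can be sketched as follows; the example system and iteration count are our own illustrative choices.

```python
import random

def randomized_kaczmarz(X, y, iters=3000, seed=0):
    """Randomized Kaczmarz for a consistent system X beta = y: repeatedly
    pick a row i with probability proportional to its squared norm, and
    project the iterate onto the hyperplane {beta : X[i] . beta = y[i]}."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    norms = [sum(v * v for v in row) for row in X]
    beta = [0.0] * d
    for _ in range(iters):
        i = rng.choices(range(n), weights=norms)[0]
        # signed distance to the chosen hyperplane, scaled by the row norm
        resid = (y[i] - sum(X[i][j] * beta[j] for j in range(d))) / norms[i]
        for j in range(d):
            beta[j] += resid * X[i][j]
    return beta
```

Each step touches a single row of X, which is the "rows" side of the rows-vs-columns comparison in the title (coordinate descent is the "columns" side).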

Towards A Deeper Geometric, Analytic and Algorithmic Understanding of Margins

no code implementations 20 Jun 2014 Aaditya Ramdas, Javier Peña

Given a matrix $A$, a linear feasibility problem (of which linear classification is a special case) aims to find a solution to a primal problem $w: A^Tw > \textbf{0}$ or a certificate for the dual problem which is a probability distribution $p: Ap = \textbf{0}$.

Fast and Flexible ADMM Algorithms for Trend Filtering

4 code implementations 9 Jun 2014 Aaditya Ramdas, Ryan J. Tibshirani

This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.

Nonparametric Independence Testing for Small Sample Sizes

no code implementations 7 Jun 2014 Aaditya Ramdas, Leila Wehbe

This paper deals with the problem of nonparametric independence testing, a fundamental decision-theoretic problem that asks if two arbitrary (possibly multivariate) random variables $X, Y$ are independent or not, a question that comes up in many fields like causality and neuroscience.

