no code implementations • ICML 2020 • Asaf Weinstein, Aaditya Ramdas
Here, we consider the general problem of FCR control in the online setting, where there is an infinite sequence of fixed unknown parameters ordered by time.
no code implementations • 21 Nov 2024 • Ojash Neopane, Aaditya Ramdas, Aarti Singh
Estimation of the Average Treatment Effect (ATE) is a core problem in causal inference with strong connections to Off-Policy Evaluation in Reinforcement Learning.
no code implementations • 14 Nov 2024 • Hongjian Wang, Aaditya Ramdas
We present two sharp empirical Bernstein inequalities for symmetric random matrices with bounded eigenvalues.
no code implementations • 11 Oct 2024 • Michelle Zhao, Reid Simmons, Henny Admoni, Aaditya Ramdas, Andrea Bajcsy
From the interactive IL side, we develop ConformalDAgger, a new approach wherein the robot uses prediction intervals calibrated by IQT as a reliable measure of deployment-time uncertainty to actively query for more expert feedback.
no code implementations • 9 Oct 2024 • Putra Manggala, Atalanti Mastakouri, Elke Kirschbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
To use generative question-and-answering (QA) systems for decision-making and in any critical application, these systems need to provide well-calibrated confidence scores that reflect the correctness of their answers.
no code implementations • 26 Sep 2024 • Diego Martinez-Taboada, Aaditya Ramdas
We present a sequential version of the kernelized Stein discrepancy, which allows for conducting goodness-of-fit tests for unnormalized densities that are continuously monitored and adaptively stopped.
no code implementations • 18 Aug 2024 • Abhinandan Dalal, Patrick Blöbaum, Shiva Kasiviswanathan, Aaditya Ramdas
This can be of particular concern in large-scale experimental studies with huge financial costs or human lives at stake, as well as in observational studies where the lengths of confidence intervals do not shrink to zero even with increasing sample size, due to partial identifiability of a structural parameter.
1 code implementation • 22 Mar 2024 • Matteo Gasparin, Aaditya Ramdas
Conformal prediction equips machine learning models with a reasonable notion of uncertainty quantification without making strong distributional assumptions.
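As a concrete point of reference (standard background, not this paper's merging contribution), here is a minimal sketch of split conformal prediction for regression; the linear base model and variable names are illustrative assumptions:

```python
# A minimal sketch of split conformal prediction for regression.
# Assumes exchangeable data; the base model is an illustrative choice.
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_interval(X_train, y_train, X_cal, y_cal, x_test, alpha=0.1):
    model = LinearRegression().fit(X_train, y_train)
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Conformal quantile: the ceil((1 - alpha)(n + 1))-th smallest score.
    k = int(np.ceil((1 - alpha) * (n + 1)))
    q = np.sort(scores)[min(k, n) - 1]
    pred = model.predict(x_test.reshape(1, -1))[0]
    return pred - q, pred + q  # marginal 1 - alpha coverage
```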
1 code implementation • 15 Feb 2024 • Yo Joong Choe, Aaditya Ramdas
We first establish that a class of functions called adjusters allows us to lift e-processes from a coarser filtration into any finer filtration.
no code implementations • 28 Jan 2024 • Hongjian Wang, Aaditya Ramdas
We present new concentration inequalities for either martingale-dependent or exchangeable random symmetric matrices under a variety of tail conditions, ranging from now-standard Chernoff bounds to self-normalized heavy-tailed settings.
no code implementations • 30 Nov 2023 • Thomas Cook, Alan Mishler, Aaditya Ramdas
This central limit theorem enables efficient inference at fixed sample sizes.
no code implementations • 14 Nov 2023 • Ben Chugg, Hongjian Wang, Aaditya Ramdas
Our results include a dimension-free CSS for log-concave random vectors, a dimension-free CSS for sub-Gaussian random vectors, and CSSs for sub-$\psi$ random vectors (which includes sub-gamma, sub-Poisson, and sub-exponential distributions).
no code implementations • 10 Nov 2023 • Ziyu Xu, Aaditya Ramdas
A scientist tests a continuous stream of hypotheses over time in the course of her investigation -- she does not test a predetermined, fixed number of hypotheses.
1 code implementation • 30 Oct 2023 • Teodora Pandeva, Patrick Forré, Aaditya Ramdas, Shubhanshu Shekhar
We propose a general framework for constructing powerful, sequential hypothesis tests for a large class of nonparametric testing problems.
no code implementations • 5 Oct 2023 • Hongjian Wang, Aaditya Ramdas
These are respectively obtained by swapping Lai's flat mixture for a Gaussian mixture, and swapping the right Haar mixture over $\sigma$ for the maximum likelihood estimate under the null, as done in universal inference.
no code implementations • 2 Oct 2023 • Shubhanshu Shekhar, Aaditya Ramdas
Constructing nonasymptotic confidence intervals (CIs) for the mean of a univariate distribution from independent and identically distributed (i.i.d.) observations is a fundamental task in statistics.
no code implementations • 16 Sep 2023 • Shubhanshu Shekhar, Aaditya Ramdas
We consider the problem of sequential change detection, where the goal is to design a scheme for detecting any changes in a parameter or functional $\theta$ of the data stream distribution that has small detection delay, but guarantees control on the frequency of false alarms in the absence of changes.
no code implementations • NeurIPS 2023 • Ryan Rogers, Gennady Samorodnitsky, Zhiwei Steven Wu, Aaditya Ramdas
In many practical applications of differential privacy, practitioners seek to provide the best privacy guarantees subject to a target level of accuracy.
no code implementations • 11 Jun 2023 • Iden Kalemaj, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
We provide theoretical guarantees on the performance of our tests and validate them empirically.
1 code implementation • NeurIPS 2023 • Ben Chugg, Santiago Cortes-Gomez, Bryan Wilder, Aaditya Ramdas
Whereas previous work relies on a fixed-sample size, our methods are sequential and allow for the continuous monitoring of incoming data, making them highly amenable to tracking the fairness of real-world systems.
1 code implementation • NeurIPS 2023 • Yo Joong Choe, Aditya Gangrade, Aaditya Ramdas
When evaluating black-box abstaining classifier(s), however, we lack a principled approach that accounts for what the classifier would have predicted on its abstentions.
no code implementations • 8 May 2023 • Shubhanshu Shekhar, Ziyu Xu, Zachary C. Lipton, Pierre J. Liang, Aaditya Ramdas
Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item.
no code implementations • 28 Apr 2023 • Chirag Gupta, Aaditya Ramdas
We present an online post-hoc calibration method, called Online Platt Scaling (OPS), which combines the Platt scaling technique with online logistic regression.
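A minimal sketch of the idea follows, assuming plain online gradient descent stands in for the paper's online logistic regression subroutine; the names and learning rate are illustrative:

```python
# A minimal sketch of Platt scaling fit by online gradient descent on log
# loss. OPS in the paper uses more refined online learners; this only
# illustrates the recalibration map sigmoid(a * z + b) being learned online.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def online_platt_scaling(scores, labels, lr=0.1):
    a, b = 1.0, 0.0              # near-identity initialization
    calibrated = []
    for z, y in zip(scores, labels):
        p = sigmoid(a * z + b)   # predict before seeing the label
        calibrated.append(p)
        grad = p - y             # gradient of log loss w.r.t. the logit
        a -= lr * grad * z       # then update the two Platt parameters
        b -= lr * grad
    return np.array(calibrated)
```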
no code implementations • 3 Apr 2023 • Hongjian Wang, Aaditya Ramdas
Following the initial work by Robbins, we rigorously present an extended theory of nonnegative supermartingales, requiring neither integrability nor finiteness.
no code implementations • 7 Feb 2023 • Ben Chugg, Hongjian Wang, Aaditya Ramdas
We present a unified framework for deriving PAC-Bayesian generalization bounds.
no code implementations • 6 Feb 2023 • Shubhanshu Shekhar, Aaditya Ramdas
We present a simple reduction from sequential estimation to sequential changepoint detection (SCD).
no code implementations • 23 Jan 2023 • Hongjian Wang, Aaditya Ramdas
Confidence sequences are confidence intervals that can be sequentially tracked, and are valid at arbitrary data-dependent stopping times.
no code implementations • 18 Dec 2022 • Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas
In nonparametric independence testing, we observe i.i.d. data $\{(X_i, Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$.
1 code implementation • 14 Dec 2022 • Aleksandr Podkopaev, Patrick Blöbaum, Shiva Prasad Kasiviswanathan, Aaditya Ramdas
Independence testing is a classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data.
1 code implementation • 27 Nov 2022 • Shubhanshu Shekhar, Ilmun Kim, Aaditya Ramdas
The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution.
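For reference, the degenerate quadratic-time U-statistic referred to here can be computed as below; the Gaussian kernel and its bandwidth are illustrative choices, and the paper's cross-MMD modifies this construction:

```python
# A minimal sketch of the usual quadratic-time unbiased MMD^2 U-statistic
# with a Gaussian kernel (the degenerate statistic the entry refers to).
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_ustat(X, Y, bandwidth=1.0):
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # U-statistics exclude the diagonal self-similarity terms.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()
```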
1 code implementation • 19 Oct 2022 • Ian Waudby-Smith, Lili Wu, Aaditya Ramdas, Nikos Karampatziakis, Paul Mineiro
Importantly, our methods can be employed while the original experiment is still running (that is, not necessarily post-hoc), when the logging policy may be itself changing (due to learning), and even if the context distributions are a highly dependent time-series (such as if they are drifting over time).
no code implementations • 9 Oct 2022 • Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan
We consider the setting where distinct agents reside on the nodes of an undirected graph, and each agent possesses p-values corresponding to one or more hypotheses local to its node.
no code implementations • 15 Jun 2022 • Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas, Ryan Rogers
In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism.
no code implementations • 27 Apr 2022 • Chirag Gupta, Aaditya Ramdas
We study the problem of making calibrated probabilistic forecasts for a binary sequence generated by an adversarial nature.
no code implementations • 10 Mar 2022 • Justin Whitehouse, Aaditya Ramdas, Ryan Rogers, Zhiwei Steven Wu
However, these results require that the privacy parameters of all algorithms be fixed before interacting with the data.
1 code implementation • 7 Mar 2022 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo
Sequential change detection is a classical problem with a variety of applications.
1 code implementation • 17 Feb 2022 • Ian Waudby-Smith, Zhiwei Steven Wu, Aaditya Ramdas
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
no code implementations • 21 Dec 2021 • James Leiner, Boyan Duan, Larry Wasserman, Aaditya Ramdas
Rasines and Young (2022) offers an alternative approach that uses additive Gaussian noise -- this enables post-selection inference in finite samples for Gaussian distributed data and asymptotically when errors are non-Gaussian.
no code implementations • 8 Dec 2021 • Ojash Neopane, Aaditya Ramdas, Aarti Singh
We consider a variant of the best arm identification (BAI) problem in multi-armed bandits (MAB) in which there are two sets of arms (source and target), and the objective is to determine the best target arm while only pulling source arms.
2 code implementations • 17 Nov 2021 • Robin Dunn, Aditya Gangrade, Larry Wasserman, Aaditya Ramdas
Shape constraints yield flexible middle grounds between fully nonparametric and fully parametric approaches to modeling distributions of data.
no code implementations • ICLR 2022 • Aleksandr Podkopaev, Aaditya Ramdas
When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain -- but not all -- distribution shifts could result in significant performance degradation.
1 code implementation • 30 Sep 2021 • Yo Joong Choe, Aaditya Ramdas
Consider two forecasters, each making a single prediction for a sequence of events over time.
1 code implementation • ICLR 2022 • Chirag Gupta, Aaditya Ramdas
We propose top-label calibration as a rectification of confidence calibration.
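A minimal sketch of per-predicted-label histogram binning, in the spirit of top-label calibration; the bin count and function names are illustrative, not the paper's exact algorithm:

```python
# A minimal sketch: recalibrate the top-label confidence separately for each
# predicted class by histogram binning. Names and bin count are assumptions.
import numpy as np

def toplabel_binning(confs, preds, labels, n_bins=10):
    """Return a lookup: (predicted class, bin index) -> recalibrated confidence."""
    edges = np.linspace(0, 1, n_bins + 1)
    table = {}
    for c in np.unique(preds):
        mask = preds == c
        bins = np.clip(np.digitize(confs[mask], edges) - 1, 0, n_bins - 1)
        for b in range(n_bins):
            hit = bins == b
            if hit.any():
                # Empirical accuracy of class-c predictions in this bin.
                table[(c, b)] = (labels[mask][hit] == c).mean()
    return edges, table
```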
1 code implementation • NeurIPS 2021 • Ziyu Xu, Ruodu Wang, Aaditya Ramdas
In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries), while only mistakenly identifying a few uninteresting ones (false discoveries).
1 code implementation • 16 Mar 2021 • Tudor Manole, Aaditya Ramdas
We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances.
2 code implementations • 11 Mar 2021 • Ian Waudby-Smith, David Arbour, Ritwik Sinha, Edward H. Kennedy, Aaditya Ramdas
This paper introduces time-uniform analogues of such asymptotic confidence intervals, adding to the literature on confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time -- which provide valid inference at arbitrary stopping times and incur no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance.
no code implementations • 4 Mar 2021 • Aleksandr Podkopaev, Aaditya Ramdas
Piggybacking on recent progress in addressing label shift (for better prediction), we examine the right way to achieve UQ by reweighting the aforementioned conformal and calibration procedures whenever some unlabeled data from the target distribution is available.
1 code implementation • 22 Feb 2021 • Jinjin Tian, Xu Chen, Eugene Katsevich, Jelle Goeman, Aaditya Ramdas
Simultaneous inference allows for the exploration of data while deciding on criteria for proclaiming discoveries.
Statistics Theory Methodology
no code implementations • 18 Feb 2021 • Nikos Karampatziakis, Paul Mineiro, Aaditya Ramdas
We develop confidence bounds that hold uniformly over time for off-policy evaluation in the contextual bandit setting.
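For orientation, the importance-weighted (IPS) running estimate that such bounds are built around looks like the sketch below; the paper's time-uniform confidence bounds require substantially more machinery than this point estimator:

```python
# A minimal sketch of the importance-weighted (IPS) running estimate of a
# target policy's value from logged contextual-bandit data. Variable names
# are illustrative.
import numpy as np

def ips_running_estimates(rewards, target_probs, logging_probs):
    # w_t = pi(a_t | x_t) / mu(a_t | x_t): importance weight per logged action.
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    values = w * np.asarray(rewards)
    # Running mean after each observation: estimate of the target policy value.
    return np.cumsum(values) / np.arange(1, len(values) + 1)
```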
no code implementations • 10 Nov 2020 • Ilmun Kim, Aaditya Ramdas
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity.
1 code implementation • 26 Oct 2020 • Ziyu Xu, Aaditya Ramdas
This statistical advance is enabled by the development of new algorithmic ideas: earlier algorithms are more "static" while our new ones allow for the dynamic adjustment of testing levels based on the amount of wealth the algorithm has accumulated.
3 code implementations • 19 Oct 2020 • Ian Waudby-Smith, Aaditya Ramdas
This paper derives confidence intervals (CI) and time-uniform confidence sequences (CS) for the classical problem of estimating an unknown mean from bounded observations.
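A minimal sketch of one classical construction in this space, a fixed-$\lambda$ Hoeffding-style confidence sequence via Ville's inequality, is below; the paper's betting-based CSs are tighter, so treat this as an illustrative baseline:

```python
# A minimal sketch of a fixed-lambda Hoeffding-style confidence sequence for
# the mean of [0, 1]-valued observations, via Ville's inequality.
import numpy as np

def hoeffding_cs(x, alpha=0.05, lam=0.5):
    x = np.asarray(x, dtype=float)        # observations in [0, 1]
    t = np.arange(1, len(x) + 1)
    mean = np.cumsum(x) / t
    # exp(lam * sum(X_i - mu) - t * lam^2 / 8) is a nonnegative supermartingale,
    # so a union bound over both tails gives a time-uniform margin:
    margin = np.log(2 / alpha) / (lam * t) + lam / 8
    return np.clip(mean - margin, 0, 1), np.clip(mean + margin, 0, 1)
```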
1 code implementation • NeurIPS 2020 • Chirag Gupta, Aleksandr Podkopaev, Aaditya Ramdas
We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting, that is, without making any distributional assumptions on the data.
1 code implementation • 15 Jun 2020 • Eugene Katsevich, Aaditya Ramdas
Conditional independence testing is an important problem, yet provably hard without assumptions.
1 code implementation • 12 Jun 2020 • Willie Neiswanger, Aaditya Ramdas
There is a necessary cost to achieving robustness: if the prior was correct, posterior GP bands are narrower than our CS.
3 code implementations • NeurIPS 2020 • Ian Waudby-Smith, Aaditya Ramdas
We then present Hoeffding- and empirical-Bernstein-type time-uniform CSs and fixed-time confidence intervals for sampling WoR, which improve on previous bounds in the literature and explicitly quantify the benefit of WoR sampling.
1 code implementation • 6 Jun 2020 • Molei Liu, Eugene Katsevich, Lucas Janson, Aaditya Ramdas
We propose the distilled CRT, a novel approach to using state-of-the-art machine learning algorithms in the CRT while drastically reducing the number of times those algorithms need to be run, thereby taking advantage of their power and the CRT's statistical guarantees without suffering the usual computational expense.
Methodology
1 code implementation • 12 May 2020 • Eugene Katsevich, Aaditya Ramdas
For testing conditional independence (CI) of a response Y and a predictor X given covariates Z, the recently introduced model-X (MX) framework has been the subject of active methodological research, especially in the context of MX knockoffs and their successful application to genome-wide association studies.
1 code implementation • ICML 2020 • Boyan Duan, Aaditya Ramdas, Larry Wasserman
We propose a method for multiple hypothesis testing with familywise error rate (FWER) control, called the i-FWER test.
Methodology
no code implementations • ICML 2020 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo
The bias of the sample means of the arms in multi-armed bandits is an important issue in adaptive data analysis that has recently received considerable attention in the literature.
no code implementations • 24 Dec 2019 • Larry Wasserman, Aaditya Ramdas, Sivaraman Balakrishnan
Constructing tests and confidence sets for such models is notoriously difficult.
no code implementations • 11 Oct 2019 • Tijana Zrnic, Daniel L. Jiang, Aaditya Ramdas, Michael I. Jordan
One important partition of algorithms for controlling the false discovery rate (FDR) in multiple testing is into offline and online algorithms.
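For contrast with the online algorithms studied here, the canonical offline FDR procedure, Benjamini-Hochberg, can be sketched as follows:

```python
# A minimal sketch of the Benjamini-Hochberg (offline) FDR procedure:
# sort the p-values and reject up to the largest k with p_(k) <= k*alpha/m.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = np.nonzero(p[order] <= np.arange(1, m + 1) * alpha / m)[0]
    rejected = np.zeros(m, dtype=bool)
    if below.size:
        rejected[order[: below[-1] + 1]] = True  # reject the k smallest p-values
    return rejected
```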
1 code implementation • 10 Oct 2019 • Jinjin Tian, Aaditya Ramdas
Biological research often involves testing a growing number of null hypotheses as new data is accumulated over time.
no code implementations • 2 Aug 2019 • Chirag Gupta, Sivaraman Balakrishnan, Aaditya Ramdas
We derive bounds on the path length $\zeta$ of gradient descent (GD) and gradient flow (GF) curves for various classes of smooth convex and nonconvex functions.
4 code implementations • 24 Jun 2019 • Steven R. Howard, Aaditya Ramdas
We propose confidence sequences -- sequences of confidence intervals which are valid uniformly over time -- for quantiles of any distribution over a complete, fully-ordered set, based on a stream of i.i.d. observations.
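A minimal fixed-time (not time-uniform) cousin of these quantile bounds can be built from the DKW inequality, as sketched below; the index arithmetic is deliberately conservative:

```python
# A minimal sketch of a fixed-time quantile confidence interval from the DKW
# inequality; the paper's confidence sequences are the time-uniform analogue.
import numpy as np

def dkw_quantile_ci(x, q=0.5, alpha=0.05):
    x = np.sort(np.asarray(x))
    n = len(x)
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))   # DKW band on the empirical CDF
    lo_idx = int(np.floor((q - eps) * n))        # order statistics bracketing
    hi_idx = int(np.ceil((q + eps) * n)) - 1     # the q-quantile w.p. >= 1-alpha
    lo = x[lo_idx] if lo_idx >= 0 else -np.inf   # band may hit 0 or 1: no info
    hi = x[hi_idx] if hi_idx < n else np.inf
    return lo, hi
```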
no code implementations • NeurIPS 2019 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo
It is well known that in stochastic multi-armed bandits (MAB), the sample mean of an arm is typically not an unbiased estimator of its true mean.
1 code implementation • NeurIPS 2019 • Jinjin Tian, Aaditya Ramdas
Major internet companies routinely perform tens of thousands of A/B tests each year.
no code implementations • 8 May 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani
This paper introduces the jackknife+, which is a novel method for constructing predictive confidence intervals.
Methodology
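A minimal sketch of the jackknife+ interval, following the construction in this paper; the linear base model is an illustrative choice, and the interval guarantees at least $1 - 2\alpha$ coverage:

```python
# A minimal sketch of the jackknife+ prediction interval for regression.
import numpy as np
from sklearn.linear_model import LinearRegression

def jackknife_plus(X, y, x_test, alpha=0.1):
    n = len(y)
    lo_scores, hi_scores = [], []
    for i in range(n):
        keep = np.arange(n) != i
        model = LinearRegression().fit(X[keep], y[keep])
        resid = abs(y[i] - model.predict(X[i][None, :])[0])  # leave-one-out residual
        pred = model.predict(x_test[None, :])[0]
        lo_scores.append(pred - resid)
        hi_scores.append(pred + resid)
    # floor(alpha(n+1))-th smallest lower score, ceil((1-alpha)(n+1))-th
    # smallest upper score; coverage is at least 1 - 2*alpha.
    k_lo = int(np.floor(alpha * (n + 1)))
    k_hi = int(np.ceil((1 - alpha) * (n + 1)))
    lo = np.sort(lo_scores)[max(k_lo, 1) - 1]
    hi = np.sort(hi_scores)[min(k_hi, n) - 1]
    return lo, hi
```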
1 code implementation • NeurIPS 2019 • Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, Ryan J. Tibshirani
We extend conformal prediction methodology beyond the case of exchangeable data.
Methodology
no code implementations • 24 Mar 2019 • Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani
We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.
no code implementations • 12 Mar 2019 • Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, Ryan J. Tibshirani
We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally.
Statistics Theory
no code implementations • 2 Feb 2019 • Jaehyeok Shin, Aaditya Ramdas, Alessandro Rinaldo
For example, when is it consistent, how large is its bias, and can we bound its mean squared error?
2 code implementations • 12 Dec 2018 • Tijana Zrnic, Aaditya Ramdas, Michael I. Jordan
We consider the problem of asynchronous online testing, aimed at providing control of the false discovery rate (FDR) during a continual stream of data collection and testing, where each test may be a sequential test that can start and stop at arbitrary times.
4 code implementations • 18 Oct 2018 • Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, Jasjeet Sekhon
A confidence sequence is a sequence of confidence intervals that is uniformly valid over an unbounded time horizon.
Statistics Theory Probability Methodology
1 code implementation • 19 Mar 2018 • Eugene Katsevich, Aaditya Ramdas
In this paper, we show that the entire path of rejection sets considered by a variety of existing FDR procedures (like BH, knockoffs, and many others) can be endowed with simultaneous high-probability bounds on FDP.
Statistics Theory
1 code implementation • ICML 2018 • Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan
However, unlike older methods, SAFFRON's threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses.
1 code implementation • NeurIPS 2017 • Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan
In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed.
1 code implementation • 29 Sep 2017 • Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan
We propose a linear-time, single-pass, top-down algorithm for multiple testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and edges specify a partial ordering in which hypotheses must be tested.
1 code implementation • NeurIPS 2017 • Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright
We propose an alternative framework to existing setups for controlling false alarms when multiple A/B tests are run over time.
no code implementations • 18 Mar 2017 • Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, Michael I. Jordan
There is a significant literature on methods for incorporating knowledge into multiple testing procedures so as to improve their power and precision.
1 code implementation • 14 Nov 2016 • Danica J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, Arthur Gretton
In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples.
no code implementations • 6 May 2016 • Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright
These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants.
no code implementations • 25 Mar 2016 • Horia Mania, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan, Benjamin Recht
This paper studies the use of reproducing kernel Hilbert space methods for learning from permutation-valued features.
no code implementations • 2 Mar 2016 • Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright, Michael I. Jordan
Together, these properties show that $p = d+1$ is an optimal choice, yielding a function estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining maximally sensitive to $P$.
no code implementations • 6 Feb 2016 • Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman
We prove two results that hold for all classifiers in any dimension: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d, n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.
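A minimal sketch of the permutation-based classifier two-sample test analyzed here; the logistic-regression classifier, split sizes, and names are illustrative assumptions:

```python
# A minimal sketch of a classifier two-sample test with a permutation null.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classifier_two_sample_pvalue(X, Y, n_perms=200, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]

    def test_accuracy(lab):
        Ztr, Zte, ltr, lte = train_test_split(Z, lab, test_size=0.5, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(Ztr, ltr)
        return clf.score(Zte, lte)

    obs = test_accuracy(labels)
    # Permutation null: shuffle labels, retrain, compare test accuracies.
    null = [test_accuracy(rng.permutation(labels)) for _ in range(n_perms)]
    return (1 + sum(a >= obs for a in null)) / (n_perms + 1)
```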
no code implementations • 23 Jan 2016 • Aaditya Ramdas, David Isenberg, Aarti Singh, Larry Wasserman
Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i, Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not.
no code implementations • 10 Dec 2015 • Rina Foygel Barber, Aaditya Ramdas
In many practical applications of multiple hypothesis testing using the False Discovery Rate (FDR), the given hypotheses can be naturally partitioned into groups, and one may not only want to control the number of false discoveries (wrongly rejected null hypotheses), but also the number of falsely discovered groups of hypotheses (we say a group is falsely discovered if at least one hypothesis within that group is rejected, when in reality the group contains only nulls).
1 code implementation • 8 Sep 2015 • Aaditya Ramdas, Nicolas Garcia, Marco Cuturi
In this work, our central object is the Wasserstein distance, as we form a chain of connections from univariate methods like the Kolmogorov-Smirnov test, PP/QQ plots and ROC/ODC curves, to multivariate tests involving energy statistics and kernel based maximum mean discrepancy.
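At the univariate end of that chain, the empirical 1-Wasserstein distance reduces to a comparison of sorted samples (i.e., quantile functions), e.g.:

```python
# A minimal sketch of the univariate empirical 1-Wasserstein distance;
# equal sample sizes are assumed for simplicity.
import numpy as np

def wasserstein1_equal_n(x, y):
    # For equal-size samples, W1 between the empirical distributions is the
    # average absolute difference of order statistics.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))
```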
no code implementations • 4 Aug 2015 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.
1 code implementation • NeurIPS 2015 • Kacper Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, Arthur Gretton
The new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the (non-smoothed) empirical characteristic functions, while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests.
1 code implementation • 10 Jun 2015 • Akshay Balsubramani, Aaditya Ramdas
It is novel in several ways: (a) it takes linear time and constant space to compute on the fly, (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints up to a small factor, and (c) it accesses only as many samples as are required - its stopping time adapts to the unknown difficulty of the problem.
no code implementations • 15 May 2015 • Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman
For larger $\sigma$, the unflattening of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise appears to be beneficial.
no code implementations • 15 May 2015 • Aaditya Ramdas, Javier Peña
This allows us to give guarantees for a primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|}, \tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.
no code implementations • 15 May 2015 • Aaditya Ramdas, Aarti Singh
Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning; it achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.
no code implementations • 23 Nov 2014 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
The current literature is split into two kinds of tests -- those which are consistent without any assumptions about how the distributions may differ (general alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (mean-shift alternatives).
no code implementations • 20 Jun 2014 • Aaditya Ramdas
This paper is about randomized iterative algorithms for solving a linear system of equations $X \beta = y$ in different settings.
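A minimal sketch of one such randomized iterative solver, the randomized Kaczmarz method with Strohmer-Vershynin row sampling; treat it as an illustrative member of the family rather than the paper's exact algorithm:

```python
# A minimal sketch of randomized Kaczmarz for X beta = y: repeatedly project
# the iterate onto a randomly chosen row's hyperplane.
import numpy as np

def randomized_kaczmarz(X, y, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    probs = (X ** 2).sum(axis=1) / (X ** 2).sum()  # rows ~ squared norms
    beta = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=probs)
        # Project onto the hyperplane {b : x_i . b = y_i}.
        beta += (y[i] - X[i] @ beta) / (X[i] @ X[i]) * X[i]
    return beta
```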
no code implementations • 20 Jun 2014 • Aaditya Ramdas, Javier Peña
Given a matrix $A$, a linear feasibility problem (of which linear classification is a special case) aims to find a solution to a primal problem $w: A^Tw > \textbf{0}$ or a certificate for the dual problem, which is a probability distribution $p: Ap = \textbf{0}$.
4 code implementations • 9 Jun 2014 • Aaditya Ramdas, Ryan J. Tibshirani
This paper presents a fast and robust algorithm for trend filtering, a recently developed nonparametric regression tool.
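For orientation, the trend filtering problem itself (not the paper's fast specialized ADMM) can be stated and handed to a generic convex solver; cvxpy and the penalty level below are illustrative:

```python
# A minimal sketch of the l1 trend filtering objective, solved generically.
import numpy as np
import cvxpy as cp

def trend_filter(y, lam=5.0, order=1):
    n = len(y)
    # (k+1)-th discrete difference operator for order-k trend filtering.
    D = np.diff(np.eye(n), n=order + 1, axis=0)
    beta = cp.Variable(n)
    obj = 0.5 * cp.sum_squares(y - beta) + lam * cp.norm1(D @ beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value
```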
no code implementations • 9 Jun 2014 • Sashank J. Reddi, Aaditya Ramdas, Barnabás Póczos, Aarti Singh, Larry Wasserman
This paper is about two related decision theoretic problems, nonparametric two-sample testing and independence testing.
no code implementations • 7 Jun 2014 • Aaditya Ramdas, Leila Wehbe
This paper deals with the problem of nonparametric independence testing, a fundamental decision-theoretic problem that asks if two arbitrary (possibly multivariate) random variables $X, Y$ are independent or not, a question that comes up in many fields like causality and neuroscience.