no code implementations • NeurIPS 2021 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas, Alistair Stewart
We study the problem of list-decodable linear regression, where an adversary can corrupt a majority of the examples.
no code implementations • 3 Feb 2021 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart, Yuxin Sun
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
3 code implementations • NeurIPS 2019 • Ilias Diakonikolas, Sushrut Karmalkar, Daniel Kane, Eric Price, Alistair Stewart
Specifically, we focus on the fundamental problems of robust sparse mean estimation and robust sparse PCA.
no code implementations • NeurIPS 2018 • Alistair Stewart, Ilias Diakonikolas, Clement Canonne
We study the general problem of testing whether an unknown discrete distribution belongs to a specified family of distributions.
no code implementations • 31 May 2018 • Ilias Diakonikolas, Weihao Kong, Alistair Stewart
An error of $\Omega (\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size.
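To see why an $\Omega(\epsilon \sigma)$ error floor is unavoidable, note that an adversary can replace an $\epsilon$-fraction of a Gaussian sample with perfectly plausible values a couple of standard deviations away, shifting the empirical mean by $\Theta(\epsilon \sigma)$. A minimal simulation (all constants here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, sigma = 100_000, 0.05, 1.0

# Inliers: N(0, sigma^2). The adversary replaces an eps-fraction with
# values at 2*sigma -- individually indistinguishable from inliers.
x = rng.normal(0.0, sigma, size=n)
k = int(eps * n)
x[:k] = 2.0 * sigma

# The empirical mean moves by roughly eps * 2 * sigma = 0.1.
shift = abs(x.mean())
print(shift)
```

Because each corrupted point is a typical Gaussian value, no per-sample check can detect the corruption, which is what makes the $\Omega(\epsilon \sigma)$ loss information-theoretic rather than algorithmic.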
1 code implementation • 7 Mar 2018 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart
In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers.
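The brittleness is easy to reproduce: outliers whose individual coordinates all look typical can still shift the empirical mean by about $\varepsilon \sqrt{d}$ in $\ell_2$. A small sketch (dimensions and corruption pattern are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 20_000, 400, 0.05

# Inliers: standard Gaussian in d dimensions (true mean = 0).
x = rng.standard_normal((n, d))
k = int(eps * n)
# Structured outliers: every coordinate is a perfectly typical value (1.0),
# so no single coordinate flags them, yet they all push in one direction.
x[:k] = 1.0

# l2 error of the empirical mean ~ eps * sqrt(d) = 1.0, versus the
# O(sqrt(d/n)) ~ 0.14 error it would have on clean data.
shift = np.linalg.norm(x.mean(axis=0))
print(shift)
```

Coordinate-wise robust estimators (e.g., the per-coordinate median) suffer the same $\varepsilon \sqrt{d}$ loss here, which is why dimension-independent guarantees require genuinely multivariate techniques.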
no code implementations • 28 Feb 2018 • Timothy Carpenter, Ilias Diakonikolas, Anastasios Sidiropoulos, Alistair Stewart
Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions.
no code implementations • 20 Nov 2017 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work.
no code implementations • NeurIPS 2018 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We study the problem of generalized uniformity testing [BC17] of a discrete probability distribution: given samples from a probability distribution $p$ over an unknown discrete domain $\mathbf{\Omega}$, we want to distinguish, with probability at least $2/3$, between the case that $p$ is uniform on some subset of $\mathbf{\Omega}$ and the case that $p$ is $\epsilon$-far, in total variation distance, from every such uniform distribution.
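One identity behind a natural tester: if $p$ is uniform on some subset, then $\|p\|_3^3 = (\|p\|_2^2)^2$, while distributions far from every subset-uniform distribution violate it. A toy sketch (the unbiased collision estimators are standard; the sample sizes, the far instance, and using the raw gap as a test statistic are illustrative assumptions, with no soundness guarantee as written):

```python
import numpy as np

rng = np.random.default_rng(4)

def collision_stats(samples):
    # Unbiased estimates of ||p||_2^2 and ||p||_3^3 from pair/triple collisions.
    _, counts = np.unique(samples, return_counts=True)
    m = counts.sum()
    c2 = (counts * (counts - 1)).sum() / (m * (m - 1))
    c3 = (counts * (counts - 1) * (counts - 2)).sum() / (m * (m - 1) * (m - 2))
    return c2, c3

# Uniform over an (unknown to the tester) 100-element subset:
# ||p||_3^3 = 1/100^2 equals (||p||_2^2)^2 = (1/100)^2.
u = rng.integers(0, 100, size=50_000)
c2, c3 = collision_stats(u)
print(abs(c3 - c2**2))   # close to 0

# A distribution with one heavy element violates the identity.
q = rng.choice(200, size=50_000, p=np.array([0.5 / 199] * 199 + [0.5]))
c2q, c3q = collision_stats(q)
print(abs(c3q - c2q**2))  # bounded away from 0
```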
no code implementations • 5 Jul 2017 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution.
no code implementations • 12 Apr 2017 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.
2 code implementations • ICML 2017 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
Robust estimation is much more challenging in high dimensions than in one dimension: most techniques either lead to intractable optimization problems or to estimators that can tolerate only a tiny fraction of errors.
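A simplified caricature of the spectral filtering idea from this line of work, for robust mean estimation with identity covariance: while the empirical covariance has an abnormally large eigenvalue, outliers must be responsible, so trim the points most extreme along that direction. The stopping threshold and the $\epsilon/2$ trimming rate below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def filtered_mean(x, eps, sigma2=1.0, iters=20):
    """Sketch of spectral filtering: while the top eigenvalue of the
    empirical covariance is abnormally large, remove the points most
    extreme along the corresponding eigenvector."""
    x = x.copy()
    for _ in range(iters):
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
        if vals[-1] < 1.5 * sigma2:               # spectrum looks clean: stop
            break
        scores = np.abs((x - mu) @ vecs[:, -1])   # projection on top direction
        x = x[scores <= np.quantile(scores, 1 - eps / 2)]  # trim the tail
    return x.mean(axis=0)

rng = np.random.default_rng(5)
n, d, eps = 10_000, 100, 0.05
x = rng.standard_normal((n, d))                   # inliers: N(0, I)
x[: int(eps * n)] = 1.0 + 0.01 * rng.standard_normal((int(eps * n), d))

naive = np.linalg.norm(x.mean(axis=0))            # ~ eps * sqrt(d): large
robust = np.linalg.norm(filtered_mean(x, eps))    # near the clean-data error
print(naive, robust)
```

The key point the abstract alludes to: this procedure is a sequence of eigendecompositions and quantile cuts, i.e., computationally tractable, yet it tolerates a constant fraction of corruption rather than the tiny fraction classical techniques allow.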
no code implementations • 9 Dec 2016 • Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
This work initiates a systematic investigation of testing high-dimensional structured distributions by focusing on testing Bayesian networks -- the prototypical family of directed graphical models.
no code implementations • 10 Nov 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
For each of these problems, we show a super-polynomial gap between the (information-theoretic) sample complexity and the computational complexity of any Statistical Query algorithm for the problem.
1 code implementation • NeurIPS 2018 • Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.
no code implementations • 9 Jun 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We study the robust proper learning of univariate log-concave distributions (over continuous and discrete domains).
no code implementations • 26 May 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d>3$.
2 code implementations • 21 Apr 2016 • Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart
We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.
no code implementations • 12 Nov 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{p}$, our algorithm runs in time $(1/\epsilon)^{O(\log \log (1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{p}$ in total variation distance.
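For concreteness, a Poisson binomial distribution (PBD) is the number of successes among $n$ independent coin flips with possibly different biases $p_1, \ldots, p_n$. A quick sketch of sampling one (the particular biases are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

# A PBD: the count of successes among n independent, non-identical coin flips.
p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
samples = (rng.random((100_000, p.size)) < p).sum(axis=1)

# Mean is sum(p_i) = 2.5 and variance is sum(p_i * (1 - p_i)) = 0.85.
print(samples.mean(), samples.var())
```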
no code implementations • 11 Nov 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random vectors supported on the set of standard basis vectors in $\mathbb{R}^k$. In this paper, we obtain a refined structural understanding of PMDs by analyzing their Fourier transform.
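The definition can be made concrete in a few lines: draw $n$ independent one-hot vectors, each from its own categorical distribution over $k$ outcomes, and sum them. (The per-vector distributions below are an illustrative assumption.)

```python
import numpy as np

rng = np.random.default_rng(3)

# An (n, k)-PMD: X = sum of n independent one-hot vectors in R^k,
# each X_i drawn from its own categorical distribution over k outcomes.
n, k = 50, 3
probs = rng.dirichlet(np.ones(k), size=n)       # one categorical law per X_i
cats = np.array([rng.choice(k, p=pi) for pi in probs])
x = np.bincount(cats, minlength=k)              # the sum of the one-hot vectors

# x is a single PMD draw: a nonnegative integer vector of counts summing to n.
print(x, x.sum())
```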
no code implementations • 4 May 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
As one of our main structural contributions, we give an efficient algorithm to construct a sparse proper $\epsilon$-cover for ${\cal S}_{n, k}$ in total variation distance.