no code implementations • 16 Jan 2025 • Ilias Diakonikolas, Nikos Zarifis
We study the problem of PAC learning $\gamma$-margin halfspaces in the presence of Massart noise.
no code implementations • 9 Jan 2025 • Ilias Diakonikolas, Daniel M. Kane, Sihan Liu, Thanasis Pittas
Specifically, given $N$ independent random points $x_1,\ldots, x_N$ in $\mathbb{R}^D$ and a parameter $\alpha \in (0, 1)$ such that each $x_i$ is drawn from a Gaussian with mean $\mu$ and unknown covariance, and an unknown $\alpha$-fraction of the points have identity-bounded covariances, the goal is to estimate the common mean $\mu$.
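A minimal simulation of this setup, with all numbers and the baseline estimators chosen purely for illustration (they are not the paper's algorithm): every point shares the mean $\mu$, an $\alpha$-fraction has covariance bounded by the identity, and the rest have much larger covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, alpha = 5, 1000, 0.2
mu = np.ones(D)                        # common mean (ground truth here)

n_good = int(alpha * N)
# alpha-fraction with covariance bounded by the identity ...
good = rng.multivariate_normal(mu, 0.5 * np.eye(D), size=n_good)
# ... and the rest Gaussian with the same mean but large unknown covariance.
bad = rng.multivariate_normal(mu, 400.0 * np.eye(D), size=N - n_good)
X = np.vstack([good, bad])

# Naive baselines (not the paper's estimator): the overall sample mean is
# unbiased but its error is dominated by the high-variance points, while an
# oracle that knew the identity of the good points would do much better.
print("sample mean error:   ", np.linalg.norm(X.mean(axis=0) - mu))
print("oracle (good points):", np.linalg.norm(good.mean(axis=0) - mu))
```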
no code implementations • 31 Dec 2024 • Ilias Diakonikolas, Daniel M. Kane, Mingchen Ma
Specifically, to beat the passive label complexity of $\tilde{O}(d/\epsilon)$, an active learner requires a pool of $2^{\mathrm{poly}(d)}$ unlabeled samples.
no code implementations • 30 Dec 2024 • Ilias Diakonikolas, Samuel B. Hopkins, Ankit Pensia, Stefan Tiegel
As applications of our certification algorithm, we obtain new efficient algorithms for a wide range of well-studied algorithmic tasks.
no code implementations • 23 Nov 2024 • Ilias Diakonikolas, Daniel M. Kane
* Learning Mixtures of Spherical Gaussians.
no code implementations • 18 Nov 2024 • Ilias Diakonikolas, Lisheng Ren, Nikos Zarifis
We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012).
no code implementations • 11 Nov 2024 • Shuyao Li, Sushrut Karmalkar, Ilias Diakonikolas, Jelena Diakonikolas
More precisely, given training samples from a reference distribution $p_0$, the goal is to approximate the vector $\mathbf{w}^*$ which minimizes the squared loss with respect to the worst-case distribution that is close in $\chi^2$-divergence to $p_0$.
no code implementations • 8 Nov 2024 • Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas
Prior algorithmic work in this setting had focused on learning in the realizable case or in the presence of semi-random noise.
no code implementations • 28 Oct 2024 • Ilias Diakonikolas, Samuel B. Hopkins, Ankit Pensia, Stefan Tiegel
We prove that there is a universal constant $C>0$ so that for every $d \in \mathbb N$, every centered subgaussian distribution $\mathcal D$ on $\mathbb R^d$, and every even $p \in \mathbb N$, the $d$-variate polynomial $(Cp)^{p/2} \cdot \|v\|_{2}^p - \mathbb E_{X \sim \mathcal D} \langle v, X\rangle^p$ is a sum of square polynomials.
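As a quick numerical sanity check of the moment inequality behind this statement (my own illustration; it only compares empirical moments and does not certify a sum-of-squares decomposition), one can sample from a centered subgaussian distribution, here the standard Gaussian, and compare $\mathbb E\langle v, X\rangle^p$ against $(Cp)^{p/2}\|v\|_2^p$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p, C = 4, 4, 1.0                    # even p; C = 1 already suffices for N(0, I)
X = rng.standard_normal((200_000, d))  # samples from a centered subgaussian D

for _ in range(3):
    v = rng.standard_normal(d)
    emp_moment = np.mean((X @ v) ** p)              # empirical E <v, X>^p
    bound = (C * p) ** (p / 2) * np.linalg.norm(v) ** p
    print(f"E<v,X>^p ~ {emp_moment:9.2f}   <=   (Cp)^(p/2)||v||^p = {bound:9.2f}")
```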
no code implementations • 28 Oct 2024 • Ilias Diakonikolas, Sushrut Karmalkar, Shuo Pang, Aaron Potechin
Specifically, given i.i.d. samples from a distribution $P^A_{v}$ on $\mathbb{R}^n$ that behaves like a known distribution $A$ in a hidden direction $v$ and like a standard Gaussian in the orthogonal complement, the goal is to approximate the hidden direction.
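A sketch of this hidden-direction model, with an illustrative choice of the one-dimensional distribution $A$ (a symmetric two-point distribution that matches the Gaussian mean and variance but not higher moments):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_samples = 10, 5000

v = rng.standard_normal(n)             # hidden direction
v /= np.linalg.norm(v)

# Known non-Gaussian distribution A along v: +/-1 with equal probability.
a = rng.choice([-1.0, 1.0], size=n_samples)

# Standard Gaussian in the orthogonal complement of v.
g = rng.standard_normal((n_samples, n))
g -= np.outer(g @ v, v)                # project out the component along v

X = np.outer(a, v) + g                 # samples from P^A_v
proj = X @ v
print("mean/variance along v:", proj.mean(), proj.var())   # ~0 and ~1, like a Gaussian
```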
no code implementations • 30 Aug 2024 • Ilias Diakonikolas, Daniel M. Kane, Sihan Liu, Nikos Zarifis
We study the task of testable learning of general -- not necessarily homogeneous -- halfspaces with adversarial label noise with respect to the Gaussian distribution.
no code implementations • 21 May 2024 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
Instead of assuming that the online adversary chooses an arbitrary sequence of labels, we assume that the context $\mathbf{x}$ is selected adversarially but the label $y$ presented to the learner disagrees with the ground-truth label of $\mathbf{x}$ with unknown probability at most $\eta$.
no code implementations • 31 Mar 2024 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, Nikos Zarifis
We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions.
no code implementations • 15 Mar 2024 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
Concretely, for Gaussian robust $k$-sparse mean estimation on $\mathbb{R}^d$ with corruption rate $\epsilon>0$, our algorithm has sample complexity $(k^2/\epsilon^2)\mathrm{polylog}(d/\epsilon)$, runs in sample-polynomial time, and approximates the target mean within $\ell_2$-error $O(\epsilon)$.
no code implementations • NeurIPS 2023 • Shuyao Li, Yu Cheng, Ilias Diakonikolas, Jelena Diakonikolas, Rong Ge, Stephen J. Wright
We introduce a general framework for efficiently finding an approximate SOSP with \emph{dimension-independent} accuracy guarantees, using $\widetilde{O}({D^2}/{\epsilon})$ samples where $D$ is the ambient dimension and $\epsilon$ is the fraction of corrupted datapoints.
no code implementations • NeurIPS 2023 • Ilias Diakonikolas, Daniel Kane, Lisheng Ren, Yuxin Sun
In particular, we prove near-optimal SQ lower bounds for NGCA under the moment-matching condition only.
no code implementations • 4 Mar 2024 • Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis
We study the problem of estimating the mean of an identity covariance Gaussian in the truncated setting, in the regime when the truncation set comes from a low-complexity family $\mathcal{C}$ of sets.
no code implementations • 27 Feb 2024 • Nikos Zarifis, Puqian Wang, Ilias Diakonikolas, Jelena Diakonikolas
We give an efficient learning algorithm, achieving a constant factor approximation to the optimal loss, that succeeds under a range of distributions (including log-concave distributions) and a broad class of monotone and Lipschitz link functions.
1 code implementation • 5 Feb 2024 • Xuefeng Du, Zhen Fang, Ilias Diakonikolas, Yixuan Li
Harnessing the power of unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and OOD data.
no code implementations • 27 Dec 2023 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
In contrast, algorithms that rely only on random examples inherently require $d^{\mathrm{poly}(1/\epsilon)}$ samples and runtime, even for the basic problem of agnostically learning a single ReLU or a halfspace.
no code implementations • 19 Dec 2023 • Ilias Diakonikolas, Daniel M. Kane, Jasper C. H. Lee, Thanasis Pittas
Furthermore, under a variant of the ``no large sub-cluster'' condition from prior work [BKK22], we show that our algorithm outputs an accurate clustering, not just a refinement, even for general-weight mixtures.
no code implementations • NeurIPS 2023 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas
We study the fundamental problems of Gaussian mean estimation and linear regression with Gaussian covariates in the presence of Huber contamination.
no code implementations • 22 Nov 2023 • Ilias Diakonikolas, Daniel M. Kane, Sihan Liu
Our main result is the first closeness tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.
no code implementations • 24 Oct 2023 • Daniel M. Kane, Ilias Diakonikolas, Hanshen Xiao, Sihan Liu
We note that if the algorithm is allowed to wait until time $T$ to report its estimate, this reduces to the well-studied problem of robust mean estimation.
no code implementations • 20 Sep 2023 • Ilias Diakonikolas, Sushrut Karmalkar, Jongho Park, Christos Tzamos
Our goal is to accurately recover a parameter vector $w$ such that the function $g(w \cdot x)$ has arbitrarily small error when compared to the true values $g(w^* \cdot x)$, rather than the noisy measurements $y$.
no code implementations • 6 Aug 2023 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
In contrast, under a worst- or random-ordering, the number of mistakes must be at least $\Omega(d \log n)$, even when the points are drawn uniformly from the unit sphere and the learner only needs to predict the labels for $1\%$ of them.
no code implementations • 24 Jul 2023 • Ilias Diakonikolas, Daniel M. Kane
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
no code implementations • 28 Jun 2023 • Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, Nikos Zarifis
Our main result is a lower bound for Statistical Query (SQ) algorithms and low-degree polynomial tests suggesting that the quadratic dependence on $1/\epsilon$ in the sample complexity is inherent for computationally efficient algorithms.
no code implementations • 22 Jun 2023 • Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis
In the special case where the separation is on the order of $k^{1/2}$, we additionally obtain fine-grained SQ lower bounds with the correct exponent.
no code implementations • 13 Jun 2023 • Puqian Wang, Nikos Zarifis, Ilias Diakonikolas, Jelena Diakonikolas
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise.
no code implementations • 4 May 2023 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas
Our main contribution is to develop a nearly-linear time algorithm for robust PCA with near-optimal error guarantees.
no code implementations • 13 Feb 2023 • Ilias Diakonikolas, Daniel M. Kane, Lisheng Ren
We study the task of agnostically learning halfspaces under the Gaussian distribution.
no code implementations • 21 Dec 2022 • Daniel M. Kane, Ilias Diakonikolas
We prove that, for a sufficiently small universal constant $c>0$, a random set of $c\, d^2/\log^4(d)$ independent Gaussian random points in $\mathbb{R}^d$ lies on a common ellipsoid with high probability.
no code implementations • 6 Dec 2022 • Ilias Diakonikolas, Christos Tzamos, Daniel M. Kane
By leveraging our strongly polynomial Forster algorithm, we obtain the first strongly polynomial time algorithm for {\em distribution-free} PAC learning of halfspaces.
no code implementations • 29 Nov 2022 • Ilias Diakonikolas, Daniel M. Kane, Jasper C. H. Lee, Ankit Pensia
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity.
no code implementations • 25 Oct 2022 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia
Here we give an extremely simple algorithm for Gaussian mean testing with a one-page analysis.
no code implementations • 18 Oct 2022 • Ilias Diakonikolas, Daniel M. Kane, Lisheng Ren, Yuxin Sun
We study the problem of PAC learning a single neuron in the presence of Massart noise.
no code implementations • 28 Jul 2022 • Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi, Lisheng Ren
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
no code implementations • 14 Jul 2022 • Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, Sihan Liu
We investigate the problem of testing whether a discrete probability distribution over an ordered domain is a histogram on a specified number of bins.
no code implementations • 17 Jun 2022 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
For the ReLU activation, we give an efficient algorithm with sample complexity $\tilde{O}(d\, \mathrm{polylog}(1/\epsilon))$.
no code implementations • 10 Jun 2022 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
We study the problem of list-decodable sparse mean estimation.
no code implementations • 9 Jun 2022 • Ilias Diakonikolas, Daniel M. Kane, Yuxin Sun
We establish optimal Statistical Query (SQ) lower bounds for robustly learning certain families of discrete high-dimensional distributions.
no code implementations • 7 Jun 2022 • Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance.
no code implementations • 26 Apr 2022 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas
In this work, we develop the first efficient streaming algorithms for high-dimensional robust statistics with near-optimal memory requirements (up to logarithmic factors).
no code implementations • 16 Dec 2021 • Ilias Diakonikolas, Daniel M. Kane
Non-Gaussian Component Analysis (NGCA) is the following distribution learning problem: Given i.i.d. samples from a distribution on $\mathbb{R}^d$ that is non-Gaussian in a hidden direction $v$ and an independent standard Gaussian in the orthogonal directions, the goal is to approximate the hidden direction $v$.
1 code implementation • 23 Sep 2021 • Yu Cheng, Ilias Diakonikolas, Rong Ge, Shivam Gupta, Daniel M. Kane, Mahdi Soltanolkotabi
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA.
no code implementations • NeurIPS 2021 • Ilias Diakonikolas, Jongho Park, Christos Tzamos
This supervised learning task is efficiently solvable in the realizable setting, but is known to be computationally hard with adversarial label noise.
no code implementations • 19 Aug 2021 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
We study the general problem and establish the following: For $\eta <1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\mathrm{poly}(1/\epsilon)$, where $\gamma =\max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} \}$ is the bias of the target halfspace $f$.
no code implementations • NeurIPS 2021 • Ilias Diakonikolas, Daniel M. Kane, Christos Tzamos
A Forster transform is an operation that turns a distribution into one with good anti-concentration properties.
no code implementations • NeurIPS 2021 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas, Alistair Stewart
We study the problem of list-decodable linear regression, where an adversary can corrupt a majority of the examples.
no code implementations • 16 Jun 2021 • Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian
We leverage this result, together with additional techniques, to obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly-matching the statistical guarantees of spectral methods.
no code implementations • 14 Jun 2021 • Ilias Diakonikolas, Russell Impagliazzo, Daniel Kane, Rex Lei, Jessica Sorrell, Christos Tzamos
Our upper and lower bounds characterize the complexity of boosting in the distribution-independent PAC model with Massart noise.
no code implementations • 10 Feb 2021 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
We study the problem of agnostically learning halfspaces under the Gaussian distribution.
no code implementations • 8 Feb 2021 • Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis
We study the problem of agnostic learning under the Gaussian distribution.
no code implementations • 3 Feb 2021 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart, Yuxin Sun
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
no code implementations • 31 Dec 2020 • Ilias Diakonikolas, Daniel M. Kane
This lower bound is best possible, as $O(d^2)$ samples suffice to even robustly {\em learn} the covariance.
no code implementations • 17 Dec 2020 • Ilias Diakonikolas, Daniel M. Kane
The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta+\epsilon$, which can be far from the optimal bound of $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x} [\eta(x)]$.
no code implementations • 14 Dec 2020 • Ilias Diakonikolas, Daniel M. Kane
Our result is constructive yielding an algorithm to compute such an $\epsilon$-cover that runs in time $\mathrm{poly}(M)$.
no code implementations • 3 Dec 2020 • Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala
We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.
no code implementations • NeurIPS 2021 • Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian
Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.
no code implementations • 4 Oct 2020 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
{\em We give the first polynomial-time algorithm for this fundamental learning problem.}
no code implementations • 14 Sep 2020 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price
To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.
no code implementations • NeurIPS 2020 • Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations.
no code implementations • NeurIPS 2020 • Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia
We study the problem of outlier robust high-dimensional mean estimation under a finite covariance assumption, and more broadly under finite low-degree moment assumptions.
no code implementations • NeurIPS 2020 • Ilias Diakonikolas, Daniel M. Kane, Nikos Zarifis
We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals.
no code implementations • 22 Jun 2020 • Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis
For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$.
no code implementations • NeurIPS 2020 • Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard
We study the problem of {\em list-decodable mean estimation} for bounded covariance distributions.
no code implementations • NeurIPS 2020 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model.
no code implementations • 11 Jun 2020 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary.
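A minimal illustration of this kind of label noise for a halfspace (my own sketch; the flip probabilities below are a hypothetical choice, whereas in the model they are set adversarially). With $\eta(\mathbf{x}) \le \eta_{\max} < 1/2$ for every point this is the Massart special case; Tsybakov noise additionally allows $\eta(\mathbf{x})$ to approach $1/2$ on a controlled fraction of points.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, eta_max = 5, 10_000, 0.4

w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)

X = rng.standard_normal((n, d))
clean = np.sign(X @ w_star)            # ground-truth halfspace labels

# Point-dependent flip probabilities eta(x) in [0, eta_max]; here simply
# larger near the decision boundary, purely for illustration.
margin = np.abs(X @ w_star)
eta = eta_max * np.exp(-margin)

flips = rng.random(n) < eta
y = np.where(flips, -clean, clean)
print("fraction of flipped labels:", flips.mean())
```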
no code implementations • 26 May 2020 • Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi
We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution.
no code implementations • ICML 2020 • Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro
We study the problem of learning adversarially robust halfspaces in the distribution-independent setting.
no code implementations • 13 May 2020 • Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar
The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.
no code implementations • ICML 2020 • Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi
We study the problem of high-dimensional robust mean estimation in the presence of a constant fraction of adversarial outliers.
1 code implementation • 24 Mar 2020 • Ilias Diakonikolas, Jerry Li, Anastasia Voloshinov
We study the fundamental problem of fixed design {\em multidimensional segmented regression}: Given noisy samples from a function $f$, promised to be piecewise linear on an unknown set of $k$ rectangles, we want to recover $f$ up to a desired accuracy in mean-squared error.
no code implementations • 13 Feb 2020 • Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis
We study the problem of learning halfspaces with Massart noise in the distribution-specific PAC model.
no code implementations • NeurIPS 2019 • Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, Ronitt Rubinfeld
In this paper, we use the framework of property testing to design algorithms to test the properties of the distribution that the data is drawn from with respect to differential privacy.
3 code implementations • NeurIPS 2019 • Ilias Diakonikolas, Sushrut Karmalkar, Daniel Kane, Eric Price, Alistair Stewart
Specifically, we focus on the fundamental problems of robust sparse mean estimation and robust sparse PCA.
no code implementations • 14 Nov 2019 • Ilias Diakonikolas, Daniel M. Kane
Learning in the presence of outliers is a fundamental problem in statistics.
no code implementations • NeurIPS 2019 • Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi
We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model.
no code implementations • NeurIPS 2019 • Ilias Diakonikolas, Themis Gouleakis, Christos Tzamos
The goal is to find a hypothesis $h$ that minimizes the misclassification error $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}} \left[ h(\mathbf{x}) \neq y \right]$.
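For concreteness, the empirical analogue of this error on a finite labeled sample is just the fraction of points a hypothesis gets wrong; a tiny helper (names are illustrative):

```python
import numpy as np

def misclassification_error(h, X, y):
    """Empirical analogue of Pr_{(x, y) ~ D}[h(x) != y] on a finite sample."""
    return float(np.mean(h(X) != y))

# Example: a fixed halfspace hypothesis evaluated on randomly labeled data.
rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 3))
y = rng.choice([-1, 1], size=1000)
h = lambda Z: np.sign(Z @ np.array([1.0, -1.0, 0.5]))
print(misclassification_error(h, X, y))    # close to 0.5 for random labels
```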
no code implementations • 11 Jun 2019 • Yu Cheng, Ilias Diakonikolas, Rong Ge, David Woodruff
We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted.
no code implementations • 11 Jun 2019 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao
We study distribution testing with communication and memory constraints in the following computational models: (1) The {\em one-pass streaming model} where the goal is to minimize the sample complexity of the protocol subject to a memory constraint, and (2) A {\em distributed model} where the data samples reside at multiple machines and the goal is to minimize the communication cost of the protocol.
no code implementations • NeurIPS 2019 • Kai Zheng, Haipeng Luo, Ilias Diakonikolas, Li-Wei Wang
We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth, 2002, by reducing the problem to achieving typical switching regret.
no code implementations • 31 Dec 2018 • Ilias Diakonikolas, Chrystalla Pavlou
In this work, we study the computational complexity of the inverse problem when the power index belongs to the class of semivalues.
no code implementations • NeurIPS 2018 • Alistair Stewart, Ilias Diakonikolas, Clement Canonne
We study the general problem of testing whether an unknown discrete distribution belongs to a specified family of distributions.
no code implementations • 23 Nov 2018 • Yu Cheng, Ilias Diakonikolas, Rong Ge
We study the fundamental problem of high-dimensional mean estimation in a robust model where a constant fraction of the samples are adversarially corrupted.
no code implementations • 7 Nov 2018 • Ilias Diakonikolas, Daniel M. Kane
Our robust identifiability result gives the following algorithmic applications: First, we show that Boolean degree-$d$ PTFs can be efficiently approximately reconstructed from approximations to their degree-$d$ Chow parameters.
no code implementations • ICML 2018 • Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld
Our theoretical results significantly improve over the best known algorithms for identity testing, and are the first results for private equivalence testing.
no code implementations • 31 May 2018 • Ilias Diakonikolas, Weihao Kong, Alistair Stewart
An error of $\Omega (\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size.
no code implementations • 10 Apr 2018 • Ilias Diakonikolas, Daniel M. Kane, John Peebles
We give the first identity tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.
1 code implementation • 7 Mar 2018 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart
In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers.
no code implementations • 28 Feb 2018 • Timothy Carpenter, Ilias Diakonikolas, Anastasios Sidiropoulos, Alistair Stewart
Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions.
no code implementations • 23 Feb 2018 • Ilias Diakonikolas, Jerry Li, Ludwig Schmidt
We give an algorithm for this learning problem that uses $n = \tilde{O}_d(k/\epsilon^2)$ samples and runs in time $\tilde{O}_d(n)$.
no code implementations • NeurIPS 2017 • Ilias Diakonikolas, Elena Grigorescu, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt
For the case of structured distributions, such as k-histograms and monotone distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain tight upper and lower bounds in several regimes.
no code implementations • 20 Nov 2017 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work.
no code implementations • NeurIPS 2018 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We study the problem of generalized uniformity testing \cite{BC17} of a discrete probability distribution: Given samples from a probability distribution $p$ over an {\em unknown} discrete domain $\mathbf{\Omega}$, we want to distinguish, with probability at least $2/3$, between the case that $p$ is uniform on some {\em subset} of $\mathbf{\Omega}$ versus $\epsilon$-far, in total variation distance, from any such uniform distribution.
no code implementations • 9 Aug 2017 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n$, $\epsilon$, and $\delta$.
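To make the bound concrete, here is a throwaway evaluation of $\frac{1}{\epsilon^2}\big(\sqrt{n \log(1/\delta)} + \log(1/\delta)\big)$ for a few parameter settings; constants are suppressed, so the numbers are only order-of-magnitude:

```python
import math

def identity_testing_samples(n, eps, delta):
    # Order of the optimal sample complexity, up to constant factors.
    return (math.sqrt(n * math.log(1 / delta)) + math.log(1 / delta)) / eps**2

for n, eps, delta in [(10**4, 0.1, 1/3), (10**6, 0.1, 1/3), (10**6, 0.1, 1e-6)]:
    samples = identity_testing_samples(n, eps, delta)
    print(f"n={n:>8}, eps={eps}, delta={delta:g}: ~{samples:,.0f} samples")
```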
no code implementations • 18 Jul 2017 • Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld
We investigate the problems of identity and closeness testing over a discrete population from random samples.
no code implementations • 5 Jul 2017 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution.
no code implementations • 12 Apr 2017 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.
no code implementations • 6 Mar 2017 • Ilias Diakonikolas, Daniel M. Kane, Vladimir Nikishkin
Given a set of samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to distinguish (with high probability) between the cases that $p = q$ and $\|p-q\|_1 \geq \epsilon$.
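As a quick reference for the objects in this problem, the sketch below builds two $k$-histogram distributions over $[n]$ (piecewise constant on the same $k$ intervals, chosen arbitrarily here) and computes their $\ell_1$ distance; this is only the distance being tested, not the tester itself.

```python
import numpy as np

def k_histogram(n, breakpoints, rng):
    """Random distribution over [n] that is constant on each given interval."""
    p = np.empty(n)
    edges = [0] + list(breakpoints) + [n]
    for lo, hi in zip(edges[:-1], edges[1:]):
        p[lo:hi] = rng.random()        # one value per interval
    return p / p.sum()

rng = np.random.default_rng(5)
n, breaks = 1000, [200, 500, 800]      # k = 4 intervals
p = k_histogram(n, breaks, rng)
q = k_histogram(n, breaks, rng)
print("||p - q||_1 =", np.abs(p - q).sum())   # compare against epsilon
```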
2 code implementations • ICML 2017 • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors.
no code implementations • 9 Dec 2016 • Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
This work initiates a systematic investigation of testing high-dimensional structured distributions by focusing on testing Bayesian networks -- the prototypical family of directed graphical models.
no code implementations • 11 Nov 2016 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.
no code implementations • 10 Nov 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
For each of these problems, we show a {\em super-polynomial gap} between the (information-theoretic) sample complexity and the computational complexity of {\em any} Statistical Query algorithm for the problem.
no code implementations • 14 Jul 2016 • Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt
We study the fixed design segmented regression problem: Given noisy samples from a piecewise linear function $f$, we want to recover $f$ up to a desired accuracy in mean-squared error.
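A toy version of this setup, simplified in two ways that the actual problem does not allow: the design is one-dimensional and the fitter is handed the true breakpoints (finding them is the algorithmic difficulty). It only shows the data model and the per-segment least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
x = np.linspace(0.0, 3.0, n)

# Piecewise linear f with breakpoints at 1 and 2 (three segments).
def f(t):
    return np.where(t < 1, 2 * t,
           np.where(t < 2, 2 - (t - 1), 1 + 3 * (t - 2)))

y = f(x) + 0.1 * rng.standard_normal(n)

mse = 0.0
for lo, hi in [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]:
    m = (x >= lo) & (x < hi) if hi < 3 else (x >= lo)
    A = np.column_stack([x[m], np.ones(m.sum())])     # fit a line on this segment
    coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
    mse += np.sum((A @ coef - y[m]) ** 2)
print("mean-squared error:", mse / n)
```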
1 code implementation • NeurIPS 2018 • Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.
no code implementations • 9 Jun 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains).
no code implementations • 26 May 2016 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d>3$.
2 code implementations • 21 Apr 2016 • Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart
We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.
no code implementations • NeurIPS 2015 • Ilias Diakonikolas, Moritz Hardt, Ludwig Schmidt
We investigate the problem of learning an unknown probability distribution over a discrete population from random samples.
no code implementations • 12 Nov 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{p}$, our algorithm runs in time $(1/\epsilon)^{O(\log \log (1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{p}$ in total variation distance.
no code implementations • 11 Nov 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random vectors supported on the set of standard basis vectors in $\mathbb{R}^k$. In this paper, we obtain a refined structural understanding of PMDs by analyzing their Fourier transform.
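A direct rendering of this definition (purely illustrative): each $X_i$ independently picks one of the $k$ standard basis vectors according to its own distribution, and the PMD is their sum.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 50, 4

# Row i holds the distribution of X_i over the k standard basis vectors.
probs = rng.dirichlet(np.ones(k), size=n)
choices = np.array([rng.choice(k, p=probs[i]) for i in range(n)])

X = np.zeros(k, dtype=int)
np.add.at(X, choices, 1)               # X = sum_i X_i, a vector in Z^k
print("one PMD sample:", X, " (coordinates sum to n =", int(X.sum()), ")")
```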
no code implementations • 1 Jun 2015 • Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt
Let $f$ be the density function of an arbitrary univariate distribution, and suppose that $f$ is $\mathrm{OPT}$-close in $L_1$-distance to an unknown piecewise polynomial function with $t$ interval pieces and degree $d$.
no code implementations • 4 May 2015 • Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
As one of our main structural contributions, we give an efficient algorithm to construct a sparse {\em proper} $\epsilon$-cover for $\mathcal{S}_{n, k}$, in total variation distance.
no code implementations • NeurIPS 2014 • Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
The "approximation factor" $C$ in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of hypothesis distribution it uses.
no code implementations • 19 Aug 2013 • Siu-On Chan, Ilias Diakonikolas, Gregory Valiant, Paul Valiant
We study the question of closeness testing for two discrete distributions.
no code implementations • 14 May 2013 • Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\epsilon)$-close (in total variation distance) to $p$.
no code implementations • 7 Nov 2012 • Anindya De, Ilias Diakonikolas, Rocco A. Servedio
In such an inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function $f$ belonging to a class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability distribution $D$ which is $\epsilon$-close, in total variation distance, to the uniform distribution over $f^{-1}(1)$.
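A tiny concrete instance of the objects in this inverse problem (not the paper's algorithm): for a small known Boolean function, draw uniform satisfying assignments, form the empirical distribution, and measure its total variation distance from the uniform distribution over $f^{-1}(1)$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)

def f(x):                              # example function: a 3-variable OR
    return int(any(x))

sat = [x for x in product([0, 1], repeat=3) if f(x)]      # f^{-1}(1)
uniform = {x: 1 / len(sat) for x in sat}

# Simulate the algorithm's input: i.i.d. uniform satisfying assignments of f.
samples = [sat[i] for i in rng.integers(len(sat), size=5000)]
emp = {x: samples.count(x) / len(samples) for x in sat}

tv = 0.5 * sum(abs(emp[x] - uniform[x]) for x in sat)
print("TV(empirical, uniform over f^{-1}(1)) =", tv)
```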
no code implementations • 13 Jul 2011 • Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio
The learning algorithm is given access to independent samples drawn from an unknown $k$-modal distribution $p$, and it must output a hypothesis distribution $\widehat{p}$ such that with high probability the total variation distance between $p$ and $\widehat{p}$ is at most $\epsilon$. Our main goal is to obtain \emph{computationally efficient} algorithms for this problem that use (close to) an information-theoretically optimal number of samples.
no code implementations • 13 Jul 2011 • Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio
Our second main result is a {\em proper} learning algorithm that learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^2)$ samples, and runs in time $(1/\epsilon)^{\mathrm{poly}(\log (1/\epsilon))} \cdot \log n$.