Search Results for author: Daniel M. Kane

Found 76 papers, 3 papers with code

Efficient Testable Learning of General Halfspaces with Adversarial Label Noise

no code implementations30 Aug 2024 Ilias Diakonikolas, Daniel M. Kane, Sihan Liu, Nikos Zarifis

We study the task of testable learning of general -- not necessarily homogeneous -- halfspaces with adversarial label noise with respect to the Gaussian distribution.

Super Non-singular Decompositions of Polynomials and their Application to Robustly Learning Low-degree PTFs

no code implementations31 Mar 2024 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, Nikos Zarifis

We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions.

PAC learning

Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination

no code implementations15 Mar 2024 Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas

Concretely, for Gaussian robust $k$-sparse mean estimation on $\mathbb{R}^d$ with corruption rate $\epsilon>0$, our algorithm has sample complexity $(k^2/\epsilon^2)\mathrm{polylog}(d/\epsilon)$, runs in time polynomial in its sample size, and approximates the target mean within $\ell_2$-error $O(\epsilon)$.
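
As a toy illustration of this contamination model (not the paper's algorithm, which needs more careful filtering to reach the optimal $O(\epsilon)$ error), the sketch below draws $\epsilon$-corrupted samples with a $k$-sparse mean and compares the naive empirical mean against a coordinate-wise median, both truncated to their top-$k$ coordinates; all names and parameter choices here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n, eps = 200, 5, 2000, 0.1

    mu = np.zeros(d); mu[:k] = 1.0                       # k-sparse target mean
    n_bad = int(eps * n)                                 # Huber contamination:
    clean = rng.normal(mu, 1.0, size=(n - n_bad, d))     # (1 - eps) inliers,
    bad = rng.normal(10.0, 1.0, size=(n_bad, d))         # eps from an arbitrary Q
    X = np.vstack([clean, bad])

    def topk(v, k):                                      # keep k largest-magnitude coords
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    print("naive error: ", np.linalg.norm(topk(X.mean(axis=0), k) - mu))
    print("median error:", np.linalg.norm(topk(np.median(X, axis=0), k) - mu))

The coordinate-wise median already caps the per-coordinate bias at $O(\epsilon)$, but its overall $\ell_2$ error grows with $\sqrt{k}$; getting the dimension-free $O(\epsilon)$ guarantee of the abstract is exactly what requires the paper's machinery.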

Statistical Query Lower Bounds for Learning Truncated Gaussians

no code implementations4 Mar 2024 Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis

We study the problem of estimating the mean of an identity covariance Gaussian in the truncated setting, in the regime when the truncation set comes from a low-complexity family $\mathcal{C}$ of sets.
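
For intuition about the data model (our own sketch, with a hypothetical box truncation set standing in for a member of $\mathcal{C}$), truncated samples can be generated by rejection:

    import numpy as np

    rng = np.random.default_rng(1)

    def truncated_gaussian(mu, in_set, n):
        """Rejection-sample N(mu, I) conditioned on the truncation set."""
        out = []
        while len(out) < n:
            x = rng.normal(mu, 1.0)
            if in_set(x):
                out.append(x)
        return np.array(out)

    mu = np.full(3, 0.5)
    box = lambda x: bool(np.all(np.abs(x) <= 2.0))   # hypothetical truncation set
    X = truncated_gaussian(mu, box, 2000)
    print("empirical mean (biased by truncation):", X.mean(axis=0).round(3))

The naive empirical mean is biased by the truncation; the paper's lower bounds concern how expensive it is, in the SQ model, to undo that bias when the truncation set is only known to belong to the family $\mathcal{C}$.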

Agnostically Learning Multi-index Models with Queries

no code implementations27 Dec 2023 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

In contrast, algorithms that rely only on random examples inherently require $d^{\mathrm{poly}(1/\epsilon)}$ samples and runtime, even for the basic problem of agnostically learning a single ReLU or a halfspace.

Dimensionality Reduction

Clustering Mixtures of Bounded Covariance Distributions Under Optimal Separation

no code implementations19 Dec 2023 Ilias Diakonikolas, Daniel M. Kane, Jasper C. H. Lee, Thanasis Pittas

Furthermore, under a variant of the "no large sub-cluster" condition from prior work [BKK22], we show that our algorithm outputs an accurate clustering, not just a refinement, even for general-weight mixtures.

Clustering

Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression

no code implementations NeurIPS 2023 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas

We study the fundamental problems of Gaussian mean estimation and linear regression with Gaussian covariates in the presence of Huber contamination.

regression

Testing Closeness of Multivariate Distributions via Ramsey Theory

no code implementations22 Nov 2023 Ilias Diakonikolas, Daniel M. Kane, Sihan Liu

Our main result is the first closeness tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.

Online Robust Mean Estimation

no code implementations24 Oct 2023 Daniel M. Kane, Ilias Diakonikolas, Hanshen Xiao, Sihan Liu

We note that if the algorithm is allowed to wait until time $T$ to report its estimate, this reduces to the well-studied problem of robust mean estimation.

New Lower Bounds for Testing Monotonicity and Log Concavity of Distributions

no code implementations31 Jul 2023 Yuqian Cheng, Daniel M. Kane, Zhicheng Zheng

We develop a new technique for proving distribution testing lower bounds for properties defined by inequalities involving the bin probabilities of the distribution in question.

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials

no code implementations24 Jul 2023 Ilias Diakonikolas, Daniel M. Kane

Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.

PAC learning Tensor Decomposition

Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise

no code implementations28 Jun 2023 Ilias Diakonikolas, Jelena Diakonikolas, Daniel M. Kane, Puqian Wang, Nikos Zarifis

Our main result is a lower bound for Statistical Query (SQ) algorithms and low-degree polynomial tests suggesting that the quadratic dependence on $1/\epsilon$ in the sample complexity is inherent for computationally efficient algorithms.

PAC learning

SQ Lower Bounds for Learning Bounded Covariance GMMs

no code implementations22 Jun 2023 Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis

In the special case where the separation is on the order of $k^{1/2}$, we additionally obtain fine-grained SQ lower bounds with the correct exponent.

Nearly-Linear Time and Streaming Algorithms for Outlier-Robust PCA

no code implementations4 May 2023 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas

Our main contribution is to develop a nearly-linear time algorithm for robust PCA with near-optimal error guarantees.

Do PAC-Learners Learn the Marginal Distribution?

no code implementations13 Feb 2023 Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

We study a foundational variant of the Probably Approximately Correct (PAC) learning model of Valiant and of Vapnik and Chervonenkis, in which the adversary is restricted to a known family of marginal distributions $\mathscr{P}$.

PAC learning

A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points

no code implementations21 Dec 2022 Daniel M. Kane, Ilias Diakonikolas

We prove that, for a sufficiently small universal constant $c>0$, a random set of $c d^2/\log^4(d)$ independent Gaussian random points in $\mathbb{R}^d$ lies on a common ellipsoid with high probability.
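
Numerically, ellipsoid fitting asks for a PSD matrix $M$ with $x_i^\top M x_i = 1$ for every point; a quick probe (our sketch, inspecting the standard least-norm candidate rather than reproducing the proof) solves these linear constraints in the entries of $M$ and checks the spectrum:

    import numpy as np

    rng = np.random.default_rng(2)
    d, n = 20, 60                    # n well below the ~d^2/polylog(d) threshold

    X = rng.normal(size=(n, d))
    # Each constraint x^T M x = 1 is linear in the d*d entries of M.
    A = np.stack([np.outer(x, x).ravel() for x in X])
    m, *_ = np.linalg.lstsq(A, np.ones(n), rcond=None)   # least-norm solution
    M = m.reshape(d, d); M = (M + M.T) / 2

    print("max constraint residual:", np.abs(A @ M.ravel() - 1).max())
    print("min eigenvalue of M:", np.linalg.eigvalsh(M).min())   # PSD iff >= 0

In this regime the least-norm solution stays close to $I/d$ and its smallest eigenvalue is typically positive; the theorem guarantees that some PSD fit exists with high probability, which this candidate only illustrates.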

A Strongly Polynomial Algorithm for Approximate Forster Transforms and its Application to Halfspace Learning

no code implementations6 Dec 2022 Ilias Diakonikolas, Christos Tzamos, Daniel M. Kane

By leveraging our strongly polynomial Forster algorithm, we obtain the first strongly polynomial time algorithm for {\em distribution-free} PAC learning of halfspaces.

PAC learning

Outlier-Robust Sparse Mean Estimation for Heavy-Tailed Distributions

no code implementations29 Nov 2022 Ilias Diakonikolas, Daniel M. Kane, Jasper C. H. Lee, Ankit Pensia

We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity.

Gaussian Mean Testing Made Simple

no code implementations25 Oct 2022 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia

Here we give an extremely simple algorithm for Gaussian mean testing with a one-page analysis.
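
The tester really is this short; the sketch below is our paraphrase (thresholds chosen loosely, not taken from the paper): compare the squared norm of the empirical mean to its null expectation $d/n$, using $n$ on the order of $\sqrt{d}/\alpha^2$ samples to detect a mean of norm $\alpha$.

    import numpy as np

    rng = np.random.default_rng(3)

    def stat(X):
        """||empirical mean||^2 minus its expectation d/n under N(0, I)."""
        n, d = X.shape
        xbar = X.mean(axis=0)
        return xbar @ xbar - d / n

    d, alpha = 100, 0.5
    n = int(8 * np.sqrt(d) / alpha**2)       # sqrt(d)/alpha^2 scaling
    thresh = alpha**2 / 2

    null = rng.normal(size=(n, d))
    mu = np.zeros(d); mu[0] = alpha
    alt = rng.normal(size=(n, d)) + mu
    print("reject null case:", stat(null) > thresh)
    print("reject alt case: ", stat(alt) > thresh)

Under the null the statistic fluctuates at scale $\sqrt{2d}/n$, which this choice of $n$ pushes below $\alpha^2/2$; under the alternative it concentrates near $\|\mu\|^2 \geq \alpha^2$.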

SQ Lower Bounds for Learning Single Neurons with Massart Noise

no code implementations18 Oct 2022 Ilias Diakonikolas, Daniel M. Kane, Lisheng Ren, Yuxin Sun

We study the problem of PAC learning a single neuron in the presence of Massart noise.

PAC learning

Near-Optimal Bounds for Testing Histogram Distributions

no code implementations14 Jul 2022 Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, Sihan Liu

We investigate the problem of testing whether a discrete probability distribution over an ordered domain is a histogram on a specified number of bins.

Optimal SQ Lower Bounds for Robustly Learning Discrete Product Distributions and Ising Models

no code implementations9 Jun 2022 Ilias Diakonikolas, Daniel M. Kane, Yuxin Sun

We establish optimal Statistical Query (SQ) lower bounds for robustly learning certain families of discrete high-dimensional distributions.

Robust Sparse Mean Estimation via Sum of Squares

no code implementations7 Jun 2022 Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas

In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance.

Streaming Algorithms for High-Dimensional Robust Statistics

no code implementations26 Apr 2022 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas

In this work, we develop the first efficient streaming algorithms for high-dimensional robust statistics with near-optimal memory requirements (up to logarithmic factors).

Stochastic Optimization

Non-Gaussian Component Analysis via Lattice Basis Reduction

no code implementations16 Dec 2021 Ilias Diakonikolas, Daniel M. Kane

Non-Gaussian Component Analysis (NGCA) is the following distribution learning problem: Given i.i.d.

Realizable Learning is All You Need

no code implementations8 Nov 2021 Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.

Learning Theory PAC learning

Outlier-Robust Sparse Estimation via Non-Convex Optimization

1 code implementation23 Sep 2021 Yu Cheng, Ilias Diakonikolas, Rong Ge, Shivam Gupta, Daniel M. Kane, Mahdi Soltanolkotabi

We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA.

Learning General Halfspaces with General Massart Noise under the Gaussian Distribution

no code implementations19 Aug 2021 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the general problem and establish the following: For $\eta <1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\mathrm{poly}(1/\epsilon)$, where $\gamma =\max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} \}$ is the bias of the target halfspace $f$.

PAC learning

Forster Decomposition and Learning Halfspaces with Noise

no code implementations NeurIPS 2021 Ilias Diakonikolas, Daniel M. Kane, Christos Tzamos

A Forster transform is an operation that turns a distribution into one with good anti-concentration properties.

PAC learning
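
A minimal sketch of what a Forster transform does (our own fixed-point iteration for illustration; the paper's contribution is the decomposition and learning algorithm built around it, and a companion paper below gives a strongly polynomial algorithm): find $A$ so that the normalized points $Ax/\|Ax\|$ have second moment $I/d$, i.e. are well anti-concentrated in every direction.

    import numpy as np

    rng = np.random.default_rng(4)

    def forster(X, iters=500):
        """Iterate A <- M^{-1/2} A, where M is the second moment of the
        normalized points A x / ||A x||; at a fixed point, M = I/d."""
        n, d = X.shape
        A = np.eye(d)
        for _ in range(iters):
            Y = X @ A.T
            Y /= np.linalg.norm(Y, axis=1, keepdims=True)
            M = Y.T @ Y / n
            w, V = np.linalg.eigh(M)
            A = V @ np.diag(w ** -0.5) @ V.T @ A
            A /= np.linalg.norm(A)       # rescale; scale does not affect Ax/||Ax||
        return A

    X = rng.normal(size=(100, 5)) @ np.diag([5.0, 1, 1, 1, 0.2])  # skewed cloud
    Y = X @ forster(X).T
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    print(np.round(Y.T @ Y / len(Y), 3))     # approximately identity / d

This plain iteration can converge slowly or stall on degenerate point sets, which is precisely why computing approximate Forster transforms with provable guarantees is a nontrivial algorithmic problem.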

Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation

no code implementations16 Jun 2021 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian

We leverage this result, together with additional techniques, to obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly-matching the statistical guarantees of spectral methods.

Clustering

Outlier-Robust Learning of Ising Models Under Dobrushin's Condition

no code implementations3 Feb 2021 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart, Yuxin Sun

We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.

The Sample Complexity of Robust Covariance Testing

no code implementations31 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

This lower bound is best possible, as $O(d^2)$ samples suffice to even robustly {\em learn} the covariance.

Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

no code implementations17 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta+\epsilon$, which can be far from the optimal bound of $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x} [\eta(x)]$.

Learning Theory PAC learning

Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models

no code implementations14 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

Our result is constructive, yielding an algorithm to compute such an $\epsilon$-cover that runs in time $\mathrm{poly}(M)$.

PAC learning

Robustly Learning Mixtures of $k$ Arbitrary Gaussians

no code implementations3 Dec 2020 Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala

We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.

Clustering Tensor Decomposition

List-Decodable Mean Estimation in Nearly-PCA Time

no code implementations NeurIPS 2021 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian

Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.

Clustering

Optimal Testing of Discrete Distributions with High Probability

no code implementations14 Sep 2020 Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price

To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.

The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi

We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations.

Outlier Robust Mean Estimation with Subgaussian Rates via Stability

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia

We study the problem of outlier robust high-dimensional mean estimation under a finite covariance assumption, and more broadly under finite low-degree moment assumptions.

Robust Learning of Mixtures of Gaussians

no code implementations12 Jul 2020 Daniel M. Kane

We resolve one of the major outstanding problems in robust statistics.

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

no code implementations22 Jun 2020 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis

For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$.

PAC learning

List-Decodable Mean Estimation via Iterative Multi-Filtering

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard

We study the problem of {\em list-decodable mean estimation} for bounded covariance distributions.

Point Location and Active Learning: Learning Halfspaces Almost Optimally

no code implementations23 Apr 2020 Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

Given a finite set $X \subset \mathbb{R}^d$ and a binary linear classifier $c: \mathbb{R}^d \to \{0, 1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$?

Active Learning

The Power of Comparisons for Actively Learning Linear Classifiers

no code implementations NeurIPS 2020 Max Hopkins, Daniel M. Kane, Shachar Lovett

While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples.

Active Learning PAC learning

Communication and Memory Efficient Testing of Discrete Distributions

no code implementations11 Jun 2019 Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao

We study distribution testing with communication and memory constraints in the following computational models: (1) The {\em one-pass streaming model} where the goal is to minimize the sample complexity of the protocol subject to a memory constraint, and (2) A {\em distributed model} where the data samples reside at multiple machines and the goal is to minimize the communication cost of the protocol.

Two-sample testing

Learning Ising Models with Independent Failures

no code implementations13 Feb 2019 Surbhi Goel, Daniel M. Kane, Adam R. Klivans

We give the first efficient algorithm for learning the structure of an Ising model that tolerates independent failures; that is, each entry of the observed sample is missing independently with some unknown probability $p$. Our algorithm matches the essentially optimal runtime and sample complexity bounds of recent work for learning Ising models due to Klivans and Meka (2017).
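
To see why independent failures are benign, note that if each entry of a $\pm 1$ sample is observed independently with probability $1-p$, then $\mathbf{E}[x_i x_j \cdot \mathbf{1}\{\text{both observed}\}] = (1-p)^2\,\mathbf{E}[x_i x_j]$, so pairwise statistics can be de-biased once $p$ is estimated. A toy check (our illustration, using correlated random signs rather than a true Ising sampler):

    import numpy as np

    rng = np.random.default_rng(5)
    n, d, p = 20000, 4, 0.3

    X = np.sign(rng.normal(size=(n, d)))                   # stand-in spins
    X[:, 1] = np.sign(X[:, 0] + 0.5 * rng.normal(size=n))  # correlate spins 0, 1

    mask = rng.random((n, d)) >= p            # entry observed with prob 1 - p
    obs = np.where(mask, X, 0.0)              # treat missing entries as 0

    q = mask.mean()                           # estimate of 1 - p
    corr = (obs.T @ obs) / n / q**2           # de-biased pairwise products
    print("true corr(0,1):    ", (X[:, 0] * X[:, 1]).mean().round(3))
    print("corrected estimate:", corr[0, 1].round(3))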

Degree-$d$ Chow Parameters Robustly Determine Degree-$d$ PTFs (and Algorithmic Applications)

no code implementations7 Nov 2018 Ilias Diakonikolas, Daniel M. Kane

Our robust identifiability result gives the following algorithmic applications: First, we show that Boolean degree-$d$ PTFs can be efficiently approximately reconstructed from approximations to their degree-$d$ Chow parameters.

Testing Identity of Multidimensional Histograms

no code implementations10 Apr 2018 Ilias Diakonikolas, Daniel M. Kane, John Peebles

We give the first identity tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.

Two-sample testing

Sever: A Robust Meta-Algorithm for Stochastic Optimization

1 code implementation7 Mar 2018 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart

In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers.

Stochastic Optimization
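
In outline, Sever repeatedly runs a base learner, computes per-point gradients at the learned parameters, and filters points whose gradients have an outsized projection on the top singular direction of the centered gradient matrix. A compressed sketch for least-squares regression, with our own simplifications (a fixed trimming quantile in place of the paper's randomized filter):

    import numpy as np

    rng = np.random.default_rng(6)
    n, d, eps = 500, 10, 0.1
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    y[: int(eps * n)] += 50.0                            # corrupted labels

    active = np.ones(n, dtype=bool)
    for _ in range(10):
        Xa, ya = X[active], y[active]
        w, *_ = np.linalg.lstsq(Xa, ya, rcond=None)      # base learner
        G = Xa * (Xa @ w - ya)[:, None]                  # per-point gradients
        G -= G.mean(axis=0)
        top = np.linalg.svd(G, full_matrices=False)[2][0]
        scores = (G @ top) ** 2                          # outlier scores
        keep = scores < np.quantile(scores, 0.98)        # trim the extremes
        idx = np.flatnonzero(active)
        active[idx[~keep]] = False

    w_naive, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("filtered error:", np.linalg.norm(w - w_true))
    print("naive error:   ", np.linalg.norm(w_naive - w_true))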

List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians

no code implementations20 Nov 2017 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work.

On Communication Complexity of Classification Problems

no code implementations16 Nov 2017 Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To naturally fit into the framework of learning theory, the players can send each other examples (as well as bits) where each example/bit costs one unit of communication.

Classification

Sharp Bounds for Generalized Uniformity Testing

no code implementations NeurIPS 2018 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We study the problem of generalized uniformity testing [BC17] of a discrete probability distribution: Given samples from a probability distribution $p$ over an {\em unknown} discrete domain $\mathbf{\Omega}$, we want to distinguish, with probability at least $2/3$, between the case that $p$ is uniform on some {\em subset} of $\mathbf{\Omega}$ versus $\epsilon$-far, in total variation distance, from any such uniform distribution.

Learning Geometric Concepts with Nasty Noise

no code implementations5 Jul 2017 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution.

Outlier Detection

Near-optimal linear decision trees for k-SUM and related problems

no code implementations4 May 2017 Daniel M. Kane, Shachar Lovett, Shay Moran

We construct near optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry.

Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

no code implementations12 Apr 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.

Active classification with comparison queries

no code implementations11 Apr 2017 Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang

We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples).

Active Learning Classification
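
To make comparison queries concrete: for a halfspace $\mathbf{w} \cdot x \geq t$, a comparison query reveals which of two points has the larger inner product with $\mathbf{w}$. Sorting by comparisons and then binary-searching with label queries recovers every label using only $O(\log n)$ labels, which is the basic phenomenon the inference dimension generalizes. A toy version (all names ours):

    import numpy as np
    from functools import cmp_to_key

    rng = np.random.default_rng(7)
    d, n = 5, 1000
    w, t = rng.normal(size=d), 0.1            # hidden halfspace w.x >= t
    pts = list(rng.normal(size=(n, d)))
    counts = {"label": 0, "cmp": 0}

    def label(x):                             # label query
        counts["label"] += 1
        return w @ x >= t

    def cmp(x, z):                            # comparison query
        counts["cmp"] += 1
        return -1 if w @ x < w @ z else 1

    pts.sort(key=cmp_to_key(cmp))             # sort by margin via comparisons
    lo, hi = 0, n                             # find where the labels flip
    while lo < hi:
        mid = (lo + hi) // 2
        if label(pts[mid]): hi = mid
        else: lo = mid + 1

    assert all((i >= lo) == (w @ pts[i] >= t) for i in range(n))
    print(counts)                             # ~log2(n) label queries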

Near-Optimal Closeness Testing of Discrete Histogram Distributions

no code implementations6 Mar 2017 Ilias Diakonikolas, Daniel M. Kane, Vladimir Nikishkin

Given a set of samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to distinguish (with high probability) between the cases that $p = q$ and $\|p-q\|_1 \geq \epsilon$.

Being Robust (in High Dimensions) Can Be Practical

2 code implementations ICML 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors.

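The filtering idea behind this line of work fits in a few lines: if the empirical covariance has an abnormally large eigenvalue, the corresponding eigenvector points at the outliers, so trim the points with the largest projections onto it and repeat. Our simplified rendition (a fixed trimming fraction and a crude stopping threshold in place of the paper's data-driven certificates):

    import numpy as np

    rng = np.random.default_rng(8)
    n, d, eps = 2000, 50, 0.1
    mu = np.ones(d)
    X = rng.normal(mu, 1.0, size=(n, d))
    n_bad = int(eps * n)
    X[:n_bad] = rng.normal(mu + 1.0, 0.1, size=(n_bad, d))   # subtle outliers

    def filtered_mean(X, iters=20):
        X = X.copy()
        for _ in range(iters):
            w, V = np.linalg.eigh(np.cov(X.T))
            if w[-1] < 1.3:                        # covariance near identity: stop
                break
            proj = np.abs((X - X.mean(axis=0)) @ V[:, -1])
            X = X[proj < np.quantile(proj, 0.95)]  # drop largest projections
        return X.mean(axis=0)

    print("naive error:   ", np.linalg.norm(X.mean(axis=0) - mu))
    print("filtered error:", np.linalg.norm(filtered_mean(X) - mu))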

Statistical Query Lower Bounds for Robust Estimation of High-dimensional Gaussians and Gaussian Mixtures

no code implementations10 Nov 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

For each of these problems, we show a {\em super-polynomial gap} between the (information-theoretic) sample complexity and the computational complexity of {\em any} Statistical Query algorithm for the problem.

Efficient Robust Proper Learning of Log-concave Distributions

no code implementations9 Jun 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains).

Learning Multivariate Log-concave Distributions

no code implementations26 May 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d>3$.

Super-Linear Gate and Super-Quadratic Wire Lower Bounds for Depth-Two and Depth-Three Threshold Circuits

no code implementations24 Nov 2015 Daniel M. Kane, Ryan Williams

$\bullet$ We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits.

Properly Learning Poisson Binomial Distributions in Almost Polynomial Time

no code implementations12 Nov 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{p}$, our algorithm runs in time $(1/\epsilon)^{O(\log \log (1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{p}$ in total variation distance.

The Fourier Transform of Poisson Multinomial Distributions and its Algorithmic Applications

no code implementations11 Nov 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random vectors supported on the set of standard basis vectors in $\mathbb{R}^k.$ In this paper, we obtain a refined structural understanding of PMDs by analyzing their Fourier transform.

Learning Theory
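
Unpacking the definition: an $(n, k)$-PMD is the sum of $n$ independent one-hot vectors in $\mathbb{R}^k$, each with its own category distribution, so a draw is a vector of $k$ counts summing to $n$. A small sampler for intuition (our own code; the paper studies the Fourier transform of this object, not sampling):

    import numpy as np

    rng = np.random.default_rng(9)

    def sample_pmd(P, size):
        """P: (n, k) row-stochastic; row i is the law of the one-hot X_i.
        Returns `size` draws of X = sum_i X_i as length-k count vectors."""
        n, k = P.shape
        cats = np.stack([rng.choice(k, size=size, p=P[i]) for i in range(n)])
        out = np.zeros((size, k), dtype=int)
        for s in range(size):
            np.add.at(out[s], cats[:, s], 1)
        return out

    n, k = 30, 4
    P = rng.dirichlet(np.ones(k), size=n)     # n independent categorical laws
    print(sample_pmd(P, 3))                   # each row sums to n
    print(sample_pmd(P, 5000).mean(axis=0).round(2), P.sum(axis=0).round(2))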

Optimal Learning via the Fourier Transform for Sums of Independent Integer Random Variables

no code implementations4 May 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

As one of our main structural contributions, we give an efficient algorithm to construct a sparse {\em proper} $\epsilon$-cover for ${\cal S}_{n, k},$ in total variation distance.

Fast Moment Estimation in Data Streams in Optimal Space

no code implementations23 Jul 2010 Daniel M. Kane, Jelani Nelson, Ely Porat, David P. Woodruff

We give a space-optimal algorithm with update time $O(\log^2(1/\epsilon)\log\log(1/\epsilon))$ for $(1+\epsilon)$-approximating the $p$-th frequency moment, $0 < p < 2$, of a length-$n$ vector updated in a data stream.

Data Structures and Algorithms
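
For context, the classical route to small-space moment estimation is Indyk's $p$-stable sketch, whose update time this paper dramatically improves. A toy for $p = 1$ using Cauchy (1-stable) projections; it stores the full projection matrix and so ignores the space and update-time optimality that are the paper's point:

    import numpy as np

    rng = np.random.default_rng(10)
    n, m = 1000, 400                          # vector length, counters

    S = rng.standard_cauchy(size=(m, n))      # 1-stable projections
    z = np.zeros(m)
    f = np.zeros(n)                           # ground truth, for checking

    for _ in range(5000):                     # stream of point updates
        i, delta = rng.integers(n), rng.integers(-3, 4)
        z += S[:, i] * delta
        f[i] += delta

    # 1-stability: each z_j ~ ||f||_1 * Cauchy, and median(|Cauchy|) = 1,
    # so the median of |z_j| estimates the first frequency moment ||f||_1.
    print("estimate:", np.median(np.abs(z)).round(1))
    print("truth:   ", np.abs(f).sum())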
