Search Results for author: Ilias Diakonikolas

Found 78 papers, 6 papers with code

Streaming Algorithms for High-Dimensional Robust Statistics

no code implementations 26 Apr 2022 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas

In this work, we develop the first efficient streaming algorithms for high-dimensional robust statistics with near-optimal memory requirements (up to logarithmic factors).

Stochastic Optimization

Non-Gaussian Component Analysis via Lattice Basis Reduction

no code implementations 16 Dec 2021 Ilias Diakonikolas, Daniel M. Kane

Non-Gaussian Component Analysis (NGCA) is the following distribution learning problem: Given i.i.d. samples from a distribution on $\mathbb{R}^d$ that is non-Gaussian in a hidden direction $v$ and an independent standard Gaussian in the orthogonal directions, the goal is to approximate the hidden direction $v$.

Outlier-Robust Sparse Estimation via Non-Convex Optimization

no code implementations 23 Sep 2021 Yu Cheng, Ilias Diakonikolas, Daniel M. Kane, Rong Ge, Shivam Gupta, Mahdi Soltanolkotabi

We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA.

ReLU Regression with Massart Noise

no code implementations NeurIPS 2021 Ilias Diakonikolas, Jongho Park, Christos Tzamos

This supervised learning task is efficiently solvable in the realizable setting, but is known to be computationally hard with adversarial label noise.

Learning General Halfspaces with General Massart Noise under the Gaussian Distribution

no code implementations 19 Aug 2021 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the general problem and establish the following: For $\eta <1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\mathrm{poly}(1/\epsilon)$, where $\gamma =\max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} \}$ is the bias of the target halfspace $f$.
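
One concrete reading of this bound (my own instantiation, not a claim from the excerpt): if the target halfspace is not too biased, so that $\min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} = \Omega(1)$, then $\gamma = \Omega(1)$ and the complexity collapses to a fixed polynomial in $d$:
\[
d^{O_{\eta}(\log(1/\gamma))}\,\mathrm{poly}(1/\epsilon) \;=\; d^{O_{\eta}(1)}\,\mathrm{poly}(1/\epsilon).
\]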

Forster Decomposition and Learning Halfspaces with Noise

no code implementations NeurIPS 2021 Ilias Diakonikolas, Daniel M. Kane, Christos Tzamos

A Forster transform is an operation that turns a distribution into one with good anti-concentration properties.
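
For reference, the standard definition (from the literature, not quoted from this excerpt) is that an invertible linear map $A$ is a Forster transform of a distribution $D$ on $\mathbb{R}^d$ if it places the radially normalized points in isotropic position:
\[
\mathop{\mathbf{E}}_{\mathbf{x} \sim D}\left[\frac{(A\mathbf{x})(A\mathbf{x})^{\top}}{\|A\mathbf{x}\|_2^2}\right] \;=\; \frac{1}{d}\, I_d .
\]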

Statistical Query Lower Bounds for List-Decodable Linear Regression

no code implementations NeurIPS 2021 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia, Thanasis Pittas, Alistair Stewart

We study the problem of list-decodable linear regression, where an adversary can corrupt a majority of the examples.

Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation

no code implementations 16 Jun 2021 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian

We leverage this result, together with additional techniques, to obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly-matching the statistical guarantees of spectral methods.

Boosting in the Presence of Massart Noise

no code implementations 14 Jun 2021 Ilias Diakonikolas, Russell Impagliazzo, Daniel Kane, Rex Lei, Jessica Sorrell, Christos Tzamos

Our upper and lower bounds characterize the complexity of boosting in the distribution-independent PAC model with Massart noise.

Agnostic Proper Learning of Halfspaces under Gaussian Marginals

no code implementations 10 Feb 2021 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the problem of agnostically learning halfspaces under the Gaussian distribution.

Outlier-Robust Learning of Ising Models Under Dobrushin's Condition

no code implementations 3 Feb 2021 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart, Yuxin Sun

We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.

The Sample Complexity of Robust Covariance Testing

no code implementations 31 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

This lower bound is best possible, as $O(d^2)$ samples suffice to even robustly {\em learn} the covariance.

Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise

no code implementations 17 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

The best known $\mathrm{poly}(d, 1/\epsilon)$-time algorithms for this problem achieve error of $\eta+\epsilon$, which can be far from the optimal bound of $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT} = \mathbf{E}_{x \sim D_x} [\eta(x)]$.

Learning Theory

Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models

no code implementations 14 Dec 2020 Ilias Diakonikolas, Daniel M. Kane

Our result is constructive, yielding an algorithm to compute such an $\epsilon$-cover that runs in time $\mathrm{poly}(M)$.

Robustly Learning Mixtures of $k$ Arbitrary Gaussians

no code implementations 3 Dec 2020 Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala

We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.

Tensor Decomposition

List-Decodable Mean Estimation in Nearly-PCA Time

no code implementations NeurIPS 2021 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, Jerry Li, Kevin Tian

Our algorithm runs in time $\widetilde{O}(ndk)$ for all $k = O(\sqrt{d}) \cup \Omega(d)$, where $n$ is the size of the dataset.

Optimal Testing of Discrete Distributions with High Probability

no code implementations 14 Sep 2020 Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price

To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.

Outlier Robust Mean Estimation with Subgaussian Rates via Stability

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia

We study the problem of outlier robust high-dimensional mean estimation under a finite covariance assumption, and more broadly under finite low-degree moment assumptions.

The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi

We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations.

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

no code implementations 22 Jun 2020 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis

For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$.

List-Decodable Mean Estimation via Iterative Multi-Filtering

no code implementations NeurIPS 2020 Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard

We study the problem of {\em list-decodable mean estimation} for bounded covariance distributions.

Learning Halfspaces with Tsybakov Noise

no code implementations 11 Jun 2020 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary.
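
To make the noise model concrete, here is a small simulation sketch; the flip-probability function `eta` and all names are illustrative choices of mine, not from the paper. It draws Gaussian examples, labels them by a halfspace, and flips each label independently with an instance-dependent probability below $1/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 10_000
w = rng.normal(size=d)
w /= np.linalg.norm(w)              # hidden halfspace normal

X = rng.normal(size=(n, d))         # Gaussian marginal
clean = np.sign(X @ w)              # noiseless labels f(x) = sign(<w, x>)

# The adversary picks a flip probability eta(x) < 1/2 for every point.
# One illustrative choice: more noise close to the decision boundary.
margin = np.abs(X @ w)
eta = 0.4 * np.exp(-margin)         # eta(x) in (0, 0.4], largest near the boundary

flips = rng.random(n) < eta
y = np.where(flips, -clean, clean)  # observed noisy labels
print("fraction of flipped labels:", flips.mean())
```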

Non-Convex SGD Learns Halfspaces with Adversarial Label Noise

no code implementations NeurIPS 2020 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model.

Approximation Schemes for ReLU Regression

no code implementations 26 May 2020 Ilias Diakonikolas, Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans, Mahdi Soltanolkotabi

We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution.

Efficiently Learning Adversarially Robust Halfspaces with Noise

no code implementations ICML 2020 Omar Montasser, Surbhi Goel, Ilias Diakonikolas, Nathan Srebro

We study the problem of learning adversarially robust halfspaces in the distribution-independent setting.

Robustly Learning any Clusterable Mixture of Gaussians

no code implementations 13 May 2020 Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar

The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.

High-Dimensional Robust Mean Estimation via Gradient Descent

no code implementations ICML 2020 Yu Cheng, Ilias Diakonikolas, Rong Ge, Mahdi Soltanolkotabi

We study the problem of high-dimensional robust mean estimation in the presence of a constant fraction of adversarial outliers.

Efficient Algorithms for Multidimensional Segmented Regression

1 code implementation 24 Mar 2020 Ilias Diakonikolas, Jerry Li, Anastasia Voloshinov

We study the fundamental problem of fixed design {\em multidimensional segmented regression}: Given noisy samples from a function $f$, promised to be piecewise linear on an unknown set of $k$ rectangles, we want to recover $f$ up to a desired accuracy in mean-squared error.

Learning Halfspaces with Massart Noise Under Structured Distributions

no code implementations 13 Feb 2020 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the problem of learning halfspaces with Massart noise in the distribution-specific PAC model.

Private Testing of Distributions via Sample Permutations

no code implementations NeurIPS 2019 Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, Ronitt Rubinfeld

In this paper, we use the framework of property testing to design algorithms to test the properties of the distribution that the data is drawn from with respect to differential privacy.

Recent Advances in Algorithmic High-Dimensional Robust Statistics

no code implementations 14 Nov 2019 Ilias Diakonikolas, Daniel M. Kane

Learning in the presence of outliers is a fundamental problem in statistics.

Distribution-Independent PAC Learning of Halfspaces with Massart Noise

no code implementations NeurIPS 2019 Ilias Diakonikolas, Themis Gouleakis, Christos Tzamos

The goal is to find a hypothesis $h$ that minimizes the misclassification error $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}} \left[ h(\mathbf{x}) \neq y \right]$.

Communication and Memory Efficient Testing of Discrete Distributions

no code implementations 11 Jun 2019 Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao

We study distribution testing with communication and memory constraints in the following computational models: (1) The {\em one-pass streaming model} where the goal is to minimize the sample complexity of the protocol subject to a memory constraint, and (2) A {\em distributed model} where the data samples reside at multiple machines and the goal is to minimize the communication cost of the protocol.

Two-sample testing

Faster Algorithms for High-Dimensional Robust Covariance Estimation

no code implementations 11 Jun 2019 Yu Cheng, Ilias Diakonikolas, Rong Ge, David Woodruff

We study the problem of estimating the covariance matrix of a high-dimensional distribution when a small constant fraction of the samples can be arbitrarily corrupted.

Equipping Experts/Bandits with Long-term Memory

no code implementations NeurIPS 2019 Kai Zheng, Haipeng Luo, Ilias Diakonikolas, Li-Wei Wang

We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth, 2002, by reducing the problem to achieving typical switching regret.

Multi-Armed Bandits, online learning

On the Complexity of the Inverse Semivalue Problem for Weighted Voting Games

no code implementations 31 Dec 2018 Ilias Diakonikolas, Chrystalla Pavlou

In this work, we study the computational complexity of the inverse problem when the power index belongs to the class of semivalues.

Testing for Families of Distributions via the Fourier Transform

no code implementations NeurIPS 2018 Alistair Stewart, Ilias Diakonikolas, Clement Canonne

We study the general problem of testing whether an unknown discrete distribution belongs to a specified family of distributions.

Two-sample testing

High-Dimensional Robust Mean Estimation in Nearly-Linear Time

no code implementations 23 Nov 2018 Yu Cheng, Ilias Diakonikolas, Rong Ge

We study the fundamental problem of high-dimensional mean estimation in a robust model where a constant fraction of the samples are adversarially corrupted.

Degree-$d$ Chow Parameters Robustly Determine Degree-$d$ PTFs (and Algorithmic Applications)

no code implementations 7 Nov 2018 Ilias Diakonikolas, Daniel M. Kane

Our robust identifiability result gives the following algorithmic applications: First, we show that Boolean degree-$d$ PTFs can be efficiently approximately reconstructed from approximations to their degree-$d$ Chow parameters.

Differentially Private Identity and Equivalence Testing of Discrete Distributions

no code implementations ICML 2018 Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld

Our theoretical results significantly improve over the best known algorithms for identity testing, and are the first results for private equivalence testing.

Efficient Algorithms and Lower Bounds for Robust Linear Regression

no code implementations 31 May 2018 Ilias Diakonikolas, Weihao Kong, Alistair Stewart

An error of $\Omega (\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size.

Testing Identity of Multidimensional Histograms

no code implementations 10 Apr 2018 Ilias Diakonikolas, Daniel M. Kane, John Peebles

We give the first identity tester for this problem with {\em sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.

Two-sample testing

Sever: A Robust Meta-Algorithm for Stochastic Optimization

1 code implementation 7 Mar 2018 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart

In high dimensions, most machine learning methods are brittle to even a small fraction of structured outliers.
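
The "1 code implementation" note above refers to the authors' release; the sketch below is not that code, only a minimal rendering of the filtering idea usually described for Sever. It fits the model, inspects per-sample gradients at the fitted parameters, discards samples whose gradients have an unusually large component along the top singular direction, and refits. The callables `fit` and `grad` are placeholders supplied by the user.

```python
import numpy as np

def sever_style_filter(X, y, fit, grad, rounds=4, drop_frac=0.02):
    """Illustrative gradient-filtering loop in the spirit of Sever (not the released code).

    fit(X, y)          -> parameter vector theta
    grad(theta, x, yi) -> per-sample gradient of the loss at theta
    """
    keep = np.arange(len(y))
    for _ in range(rounds):
        theta = fit(X[keep], y[keep])
        G = np.stack([grad(theta, X[i], y[i]) for i in keep])
        G = G - G.mean(axis=0)                  # center the per-sample gradients
        top = np.linalg.svd(G, full_matrices=False)[2][0]
        scores = (G @ top) ** 2                 # squared projection on top direction
        cutoff = np.quantile(scores, 1.0 - drop_frac)
        keep = keep[scores <= cutoff]           # drop the most suspicious samples
    return fit(X[keep], y[keep])
```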

Stochastic Optimization

Fast and Sample Near-Optimal Algorithms for Learning Multidimensional Histograms

no code implementations 23 Feb 2018 Ilias Diakonikolas, Jerry Li, Ludwig Schmidt

We give an algorithm for this learning problem that uses $n = \tilde{O}_d(k/\epsilon^2)$ samples and runs in time $\tilde{O}_d(n)$.

Communication-Efficient Distributed Learning of Discrete Distributions

no code implementations NeurIPS 2017 Ilias Diakonikolas, Elena Grigorescu, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt

For the case of structured distributions, such as k-histograms and monotone distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain tight upper and lower bounds in several regimes.

Density Estimation

List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians

no code implementations 20 Nov 2017 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work.

Sharp Bounds for Generalized Uniformity Testing

no code implementations NeurIPS 2018 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We study the problem of generalized uniformity testing \cite{BC17} of a discrete probability distribution: Given samples from a probability distribution $p$ over an {\em unknown} discrete domain $\mathbf{\Omega}$, we want to distinguish, with probability at least $2/3$, between the case that $p$ is uniform on some {\em subset} of $\mathbf{\Omega}$ versus $\epsilon$-far, in total variation distance, from any such uniform distribution.

Optimal Identity Testing with High Probability

no code implementations 9 Aug 2017 Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price

Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n, \varepsilon$, and $\delta$.

Differentially Private Identity and Closeness Testing of Discrete Distributions

no code implementations 18 Jul 2017 Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld

We investigate the problems of identity and closeness testing over a discrete population from random samples.

Learning Geometric Concepts with Nasty Noise

no code implementations 5 Jul 2017 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution.

Outlier Detection

Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

no code implementations 12 Apr 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.

Near-Optimal Closeness Testing of Discrete Histogram Distributions

no code implementations 6 Mar 2017 Ilias Diakonikolas, Daniel M. Kane, Vladimir Nikishkin

Given a set of samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to distinguish (with high probability) between the cases that $p = q$ and $\|p-q\|_1 \geq \epsilon$.
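
For reference, a toy computation (not the paper's tester, which only sees samples) of the distinguishing quantity $\|p-q\|_1$:

```python
import numpy as np

# Two explicit distributions over [4]; the tester must decide p = q vs ||p - q||_1 >= eps.
p = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([0.40, 0.10, 0.25, 0.25])
print(np.abs(p - q).sum())   # 0.3, so p and q are 0.3-far in L1 distance
```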

Being Robust (in High Dimensions) Can Be Practical

2 code implementations ICML 2017 Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors.
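
The "2 code implementations" note above points to the authors' releases; the snippet below is not those, just a compact sketch of the spectral filtering recipe popularized by this line of work, with illustrative constants and the assumption that inliers have roughly identity covariance: repeatedly find the direction of largest empirical variance and remove the points most extreme along it.

```python
import numpy as np

def filtered_mean(X, eps, rounds=10):
    """Sketch of spectral filtering for robust mean estimation (illustrative constants).

    X   : (n, d) data; up to an eps-fraction of rows may be arbitrary outliers.
    eps : assumed corruption fraction, used to cap how much we remove per round.
    """
    X = X.copy()
    for _ in range(rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        if evals[-1] <= 1.0 + 10.0 * eps:      # variance looks consistent with inliers
            break
        v = evecs[:, -1]                       # direction of largest empirical variance
        scores = ((X - mu) @ v) ** 2
        cutoff = np.quantile(scores, 1.0 - eps)
        X = X[scores <= cutoff]                # remove the most extreme points
    return X.mean(axis=0)
```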

Testing Bayesian Networks

no code implementations 9 Dec 2016 Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart

This work initiates a systematic investigation of testing high-dimensional structured distributions by focusing on testing Bayesian networks -- the prototypical family of directed graphical models.

Collision-based Testers are Optimal for Uniformity and Closeness

no code implementations 11 Nov 2016 Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price

We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.
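
For intuition, a collision-based uniformity tester can be sketched as follows; the threshold below is an illustrative choice of mine, not a constant from the paper. It uses the fact that the expected number of pairwise collisions among $m$ samples is $\binom{m}{2}\|p\|_2^2$, which the uniform distribution minimizes, so an elevated collision count signals distance from uniform.

```python
import numpy as np
from collections import Counter

def collision_uniformity_test(samples, domain_size, eps):
    """Illustrative collision tester: accept 'uniform' iff the collision count is small.

    E[#collisions] = C(m, 2) * ||p||_2^2, and any p that is eps-far from uniform
    in L1 distance satisfies ||p||_2^2 >= (1 + eps**2) / domain_size.
    """
    m = len(samples)
    counts = Counter(samples)
    collisions = sum(c * (c - 1) // 2 for c in counts.values())
    pairs = m * (m - 1) // 2
    threshold = pairs * (1.0 + eps**2 / 2.0) / domain_size   # illustrative midpoint
    return collisions <= threshold

# toy usage: samples that really are uniform over a domain of size 1000
rng = np.random.default_rng(1)
print(collision_uniformity_test(rng.integers(0, 1000, size=5000), 1000, eps=0.25))
```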

Statistical Query Lower Bounds for Robust Estimation of High-dimensional Gaussians and Gaussian Mixtures

no code implementations 10 Nov 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

For each of these problems, we show a {\em super-polynomial gap} between the (information-theoretic) sample complexity and the computational complexity of {\em any} Statistical Query algorithm for the problem.

Fast Algorithms for Segmented Regression

no code implementations 14 Jul 2016 Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt

We study the fixed design segmented regression problem: Given noisy samples from a piecewise linear function $f$, we want to recover $f$ up to a desired accuracy in mean-squared error.
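
As a point of reference (this is the classical slow baseline, not the paper's faster algorithm), fixed-design segmented regression with $k$ pieces can be solved exactly by dynamic programming over breakpoints, fitting a least-squares line inside each candidate segment.

```python
import numpy as np

def segmented_regression_dp(x, y, k):
    """Exact k-piece segmented least squares via a simple (cubic-time) DP baseline."""
    n = len(x)
    seg_err = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(i, n):                     # cost of fitting one line on points i..j
            xs, ys = x[i:j + 1], y[i:j + 1]
            A = np.vstack([xs, np.ones_like(xs)]).T
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            seg_err[i, j] = np.sum((A @ coef - ys) ** 2)

    dp = np.full((k + 1, n + 1), np.inf)
    dp[0, 0] = 0.0
    for pieces in range(1, k + 1):
        for end in range(1, n + 1):
            for start in range(end):
                cand = dp[pieces - 1, start] + seg_err[start, end - 1]
                dp[pieces, end] = min(dp[pieces, end], cand)
    return dp[k, n]                               # optimal total squared error

# toy usage: a noisy 2-piece linear function
x = np.linspace(0, 1, 60)
y = np.where(x < 0.5, 2 * x, 3 - 4 * x) + 0.05 * np.random.default_rng(2).normal(size=60)
print(segmented_regression_dp(x, y, k=2))
```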

Robust Learning of Fixed-Structure Bayesian Networks

1 code implementation NeurIPS 2018 Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart

We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.

Efficient Robust Proper Learning of Log-concave Distributions

no code implementations 9 Jun 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains).

Learning Multivariate Log-concave Distributions

no code implementations 26 May 2016 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d>3$.

Robust Estimators in High Dimensions without the Computational Intractability

2 code implementations 21 Apr 2016 Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.

Differentially Private Learning of Structured Discrete Distributions

no code implementations NeurIPS 2015 Ilias Diakonikolas, Moritz Hardt, Ludwig Schmidt

We investigate the problem of learning an unknown probability distribution over a discrete population from random samples.

Properly Learning Poisson Binomial Distributions in Almost Polynomial Time

no code implementations 12 Nov 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

Given $\widetilde{O}(1/\epsilon^2)$ samples from an unknown PBD $\mathbf{p}$, our algorithm runs in time $(1/\epsilon)^{O(\log \log (1/\epsilon))}$, and outputs a hypothesis PBD that is $\epsilon$-close to $\mathbf{p}$ in total variation distance.

The Fourier Transform of Poisson Multinomial Distributions and its Algorithmic Applications

no code implementations 11 Nov 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random vectors supported on the set of standard basis vectors in $\mathbb{R}^k.$ In this paper, we obtain a refined structural understanding of PMDs by analyzing their Fourier transform.
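
Concretely, an $(n, k)$-PMD is the sum of $n$ independent one-hot vectors in $\mathbb{R}^k$, each drawn from its own categorical distribution. The sampler below is an illustration with arbitrarily chosen parameters, not anything from the paper.

```python
import numpy as np

def sample_pmd(prob_rows, rng):
    """Draw one sample of an (n, k)-Poisson Multinomial Distribution.

    prob_rows : (n, k) array whose i-th row is the categorical distribution of X_i
                over the k standard basis vectors; the PMD is X = sum_i X_i.
    """
    n, k = prob_rows.shape
    x = np.zeros(k, dtype=int)
    for row in prob_rows:
        x[rng.choice(k, p=row)] += 1   # add a one-hot vector e_j with probability row[j]
    return x

rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(4), size=100)   # 100 independent categorical components, k = 4
print(sample_pmd(P, rng))                 # entries sum to n = 100
```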

Learning Theory

Sample-Optimal Density Estimation in Nearly-Linear Time

no code implementations 1 Jun 2015 Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt

Let $f$ be the density function of an arbitrary univariate distribution, and suppose that $f$ is $\mathrm{OPT}$-close in $L_1$-distance to an unknown piecewise polynomial function with $t$ interval pieces and degree $d$.

Density Estimation

Optimal Learning via the Fourier Transform for Sums of Independent Integer Random Variables

no code implementations 4 May 2015 Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart

As one of our main structural contributions, we give an efficient algorithm to construct a sparse {\em proper} $\epsilon$-cover for ${\cal S}_{n, k},$ in total variation distance.

Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width Histograms

no code implementations NeurIPS 2014 Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun

The "approximation factor" $C$ in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of $k$ and $\epsilon$ can achieve $C<2$ regardless of what kind of hypothesis distribution it uses.

Density Estimation

Efficient Density Estimation via Piecewise Polynomial Approximation

no code implementations 14 May 2013 Siu-On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun

We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau)+\epsilon)$-close (in total variation distance) to $p$.

Density Estimation

Inverse problems in approximate uniform generation

no code implementations 7 Nov 2012 Anindya De, Ilias Diakonikolas, Rocco A. Servedio

In such an inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function $f$ belonging to a class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability distribution $D$ which is $\epsilon$-close, in total variation distance, to the uniform distribution over $f^{-1}(1)$.

Learning Poisson Binomial Distributions

no code implementations 13 Jul 2011 Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio

Our second main result is a {\em proper} learning algorithm that learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^2)$ samples, and runs in time $(1/\epsilon)^{\mathrm{poly}(\log (1/\epsilon))} \cdot \log n$.

Learning $k$-Modal Distributions via Testing

no code implementations 13 Jul 2011 Constantinos Daskalakis, Ilias Diakonikolas, Rocco A. Servedio

The learning algorithm is given access to independent samples drawn from an unknown $k$-modal distribution $p$, and it must output a hypothesis distribution $\widehat{p}$ such that with high probability the total variation distance between $p$ and $\widehat{p}$ is at most $\epsilon.$ Our main goal is to obtain \emph{computationally efficient} algorithms for this problem that use (close to) an information-theoretically optimal number of samples.

Density Estimation
