no code implementations • NeurIPS 2021 • Anindya De, Sanjeev Khanna, Huan Li, MohammadHesam NikpeySalekde
We study the problem of minimizing a convex function given by a zeroth order oracle that is possibly corrupted by outlier noise.
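The access model can be sketched as follows; the objective `f`, the outlier probability, and the corruption values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """A simple convex objective (illustrative choice only)."""
    return float(np.dot(x, x))

def noisy_oracle(x, outlier_prob=0.1):
    """Zeroth-order oracle: returns f(x), except that with probability
    `outlier_prob` the answer is an arbitrary corrupted value (here a
    large random number stands in for adversarial outlier noise)."""
    if rng.random() < outlier_prob:
        return float(rng.uniform(-1e6, 1e6))  # corrupted query answer
    return f(x)

# The optimizer sees only oracle values, never gradients:
vals = [noisy_oracle(np.ones(3)) for _ in range(5)]
```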
no code implementations • 22 Dec 2020 • Anindya De, Shivam Nadimpalli, Rocco A. Servedio
Most correlation inequalities for high-dimensional functions in the literature, such as the Fortuin-Kasteleyn-Ginibre (FKG) inequality and the celebrated Gaussian Correlation Inequality of Royen, are qualitative statements which establish that any two functions of a certain type have non-negative correlation.
Probability • Computational Complexity • Combinatorics
no code implementations • 6 Oct 2020 • Aidao Chen, Anindya De, Aravindan Vijayaraghavan
We study the problem of learning a mixture of two subspaces over $\mathbb{F}_2^n$.
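The sampling model can be illustrated with a small sketch: samples are drawn from one of two subspaces of $\mathbb{F}_2^n$, chosen at random. The particular bases and mixing weight below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Two hypothetical subspaces of F_2^n, each given by a basis (rows).
basis_A = np.array([[1, 0, 0, 0, 0, 0],
                    [0, 1, 0, 0, 0, 0]], dtype=np.int64)
basis_B = np.array([[0, 0, 1, 1, 0, 0],
                    [0, 0, 0, 0, 1, 1],
                    [1, 1, 1, 1, 1, 1]], dtype=np.int64)

def sample_subspace(basis):
    """Uniform sample from the span of `basis`, arithmetic mod 2."""
    coeffs = rng.integers(0, 2, size=basis.shape[0])
    return coeffs @ basis % 2

def sample_mixture(p=0.5):
    """Draw from subspace A with probability p, else from B."""
    return sample_subspace(basis_A if rng.random() < p else basis_B)

samples = [sample_mixture() for _ in range(10)]
```

The learner receives only such vectors and must recover the pair of subspaces.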
Data Structures and Algorithms
no code implementations • 24 Apr 2020 • Anindya De, Elchanan Mossel, Joe Neeman
Using our techniques, we also obtain a fully noise tolerant tester with the same query complexity for any class $\mathcal{C}$ of linear $k$-juntas with surface area bounded by $s$.
no code implementations • 2 Jul 2019 • Clément L. Canonne, Anindya De, Rocco A. Servedio
We give a range of efficient algorithms and hardness results for this problem, focusing on the case when $f$ is a low-degree polynomial threshold function (PTF).
no code implementations • 9 Nov 2018 • Anindya De, Philip M. Long, Rocco A. Servedio
This implies that, for constant $d$, multivariate log-concave distributions can be learned in $\tilde{O}_d(1/\epsilon^{2d+2})$ time using $\tilde{O}_d(1/\epsilon^{d+2})$ samples, answering a question of [Diakonikolas, Kane and Stewart, 2016]. All of our results extend to a model of noise-tolerant density estimation using Huber's contamination model, in which the target distribution to be learned is a $(1-\epsilon,\epsilon)$ mixture of some unknown distribution in the class with some other arbitrary and unknown distribution, and the learning algorithm must output a hypothesis distribution with total variation distance error $O(\epsilon)$ from the target distribution.
no code implementations • 3 Nov 2018 • Anindya De, Ryan O'Donnell, Rocco Servedio
We study the problem of learning an unknown mixture of $k$ rankings over $n$ elements, given access to noisy samples drawn from the unknown mixture.
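A sample from such a mixture can be sketched as below. The center rankings, mixing weights, and the particular noise model (random adjacent transpositions) are illustrative assumptions standing in for the paper's actual noise model.

```python
import random

random.seed(0)
n, k = 5, 2

# Hypothetical mixture: k "center" rankings of n elements, with weights.
centers = [list(range(n)), list(range(n))[::-1]]
weights = [0.6, 0.4]

def noisy_sample(noise=0.2):
    """Pick a center ranking according to the mixing weights, then
    apply random adjacent transpositions as a simple stand-in noise."""
    base = random.choices(centers, weights=weights)[0][:]
    for i in range(n - 1):
        if random.random() < noise:
            base[i], base[i + 1] = base[i + 1], base[i]
    return base

samples = [noisy_sample() for _ in range(20)]
```

Each observed sample is still a full ranking; the learner must recover the $k$ centers and weights from many such draws.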
no code implementations • 18 Jul 2018 • Anindya De, Philip M. Long, Rocco A. Servedio
For the case $| \mathcal{A} | = 3$, we give an algorithm for learning $\mathcal{A}$-sums to accuracy $\epsilon$ that uses $\mathsf{poly}(1/\epsilon)$ samples and runs in time $\mathsf{poly}(1/\epsilon)$, independent of $N$ and of the elements of $\mathcal{A}$.
no code implementations • 4 Mar 2017 • Anindya De, Ryan O'Donnell, Rocco Servedio
The population recovery problem is a basic problem in noisy unsupervised learning that has attracted significant research attention in recent years [WY12, DRWY12, MS13, BIMP13, LZ15, DST16].
no code implementations • 9 Dec 2016 • Anindya De, Ryan O'Donnell, Rocco Servedio
For any constant deletion rate $0 < \delta < 1$, we give a mean-based algorithm that uses $\exp(O(n^{1/3}))$ time and traces; we also prove that any mean-based algorithm must use at least $\exp(\Omega(n^{1/3}))$ traces.
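The trace-reconstruction setting and the "mean-based" statistic can be sketched directly; the string length, deletion rate, and number of traces below are arbitrary illustrative choices.

```python
import random

random.seed(0)
n, delta = 20, 0.3

# Unknown source string the algorithm must reconstruct.
x = [random.randint(0, 1) for _ in range(n)]

def trace(x, delta):
    """Pass x through a deletion channel: each bit is independently
    deleted with probability delta; survivors keep their order."""
    return [b for b in x if random.random() >= delta]

def mean_trace(traces, n):
    """Mean-based statistic: the coordinate-wise average of the traces,
    each padded with zeros to length n. Mean-based algorithms use only
    these averages, not the individual traces."""
    means = [0.0] * n
    for t in traces:
        for i in range(n):
            means[i] += t[i] if i < len(t) else 0
    return [m / len(traces) for m in means]

traces = [trace(x, delta) for _ in range(1000)]
avg = mean_trace(traces, n)
```

The upper and lower bounds in the paper both concern how many such traces any algorithm restricted to `avg`-type statistics needs.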
no code implementations • 24 Feb 2016 • Anindya De, Michael Saks, Sijian Tang
We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k}, n, 1/\varepsilon)$ due to Lovett and Zhang.
no code implementations • 11 Nov 2015 • Constantinos Daskalakis, Anindya De, Gautam Kamath, Christos Tzamos
Finally, leveraging the structural properties of the Fourier spectrum of PMDs we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$-time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
no code implementations • 7 Nov 2012 • Anindya De, Ilias Diakonikolas, Rocco A. Servedio
In such an inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function $f$ belonging to a class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability distribution $D$ which is $\epsilon$-close, in total variation distance, to the uniform distribution over $f^{-1}(1)$.
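The input model can be sketched as follows. The specific unknown function (a majority, i.e. a halfspace) and the rejection-sampling generator are illustrative assumptions; only the resulting samples are what the algorithm actually sees.

```python
import random

random.seed(0)
n = 8

def f(x):
    """A hypothetical unknown function from the class: here, a
    majority (halfspace) over n Boolean inputs."""
    return sum(x) > n // 2

def uniform_satisfying_assignment():
    """What the algorithm receives: uniform samples from f^{-1}(1),
    generated here by rejection sampling (for illustration only)."""
    while True:
        x = [random.randint(0, 1) for _ in range(n)]
        if f(x):
            return x

samples = [uniform_satisfying_assignment() for _ in range(50)]
```

From such samples alone, the algorithm must output a distribution close to uniform over $f^{-1}(1)$, without ever seeing $f$ itself.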