Search Results for author: Anindya De

Found 13 papers, 0 papers with code

Approximate optimization of convex functions with outlier noise

no code implementations NeurIPS 2021 Anindya De, Sanjeev Khanna, Huan Li, MohammadHesam Nikpey Salekdeh

We study the problem of minimizing a convex function given by a zeroth-order oracle that is possibly corrupted by outlier noise.
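
As a rough illustration of the noise model only (not the paper's algorithm), the sketch below builds a zeroth-order oracle for a one-dimensional convex function whose answers are adversarially replaced on a small region of the domain; all names and values are hypothetical.

```python
def make_noisy_oracle(f, corrupt_lo, corrupt_hi, corrupt_value):
    """Zeroth-order oracle for f, except on an 'outlier' region where
    the returned value is arbitrary (here a fixed bogus constant)."""
    def oracle(x):
        if corrupt_lo <= x <= corrupt_hi:   # adversarially corrupted queries
            return corrupt_value
        return f(x)
    return oracle

f = lambda x: (x - 2.0) ** 2                # true convex objective, argmin at x = 2
oracle = make_noisy_oracle(f, 1.90, 1.95, -100.0)

# A naive minimizer chasing the smallest oracle value is drawn into the
# corrupted region; robust methods must tolerate such outlier answers.
print(oracle(0.0), oracle(1.92), oracle(3.0))
```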

Quantitative Correlation Inequalities via Semigroup Interpolation

no code implementations22 Dec 2020 Anindya De, Shivam Nadimpalli, Rocco A. Servedio

Most correlation inequalities for high-dimensional functions in the literature, such as the Fortuin-Kasteleyn-Ginibre (FKG) inequality and the celebrated Gaussian Correlation Inequality of Royen, are qualitative statements which establish that any two functions of a certain type have non-negative correlation.

Probability, Computational Complexity, Combinatorics
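
For intuition, the FKG inequality implies that any two monotone Boolean functions of independent uniform bits are non-negatively correlated. A minimal Monte Carlo check of that qualitative statement (illustrative only, not from the paper):

```python
import random

def f_or(x):   # OR of the first two bits: a monotone function
    return x[0] | x[1]

def g_maj(x):  # majority of three bits: also monotone
    return int(sum(x) >= 2)

trials = 100_000
samples = [[random.randint(0, 1) for _ in range(3)] for _ in range(trials)]
ef = sum(f_or(x) for x in samples) / trials
eg = sum(g_maj(x) for x in samples) / trials
efg = sum(f_or(x) * g_maj(x) for x in samples) / trials
print("Cov(f, g) =", efg - ef * eg)  # FKG: nonnegative, up to sampling error
```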

Learning a mixture of two subspaces over finite fields

no code implementations6 Oct 2020 Aidao Chen, Anindya De, Aravindan Vijayaraghavan

We study the problem of learning a mixture of two subspaces over $\mathbb{F}_2^n$.

Data Structures and Algorithms
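
The generative model can be pictured as follows: fix two subspaces of $\mathbb{F}_2^n$ and draw each sample uniformly from one of them. A hypothetical sketch of the sampling side (the learning algorithm itself is the paper's contribution):

```python
import random

def span_f2(basis, n):
    """All F_2-linear combinations of the given basis vectors."""
    vecs = {tuple([0] * n)}
    for b in basis:
        vecs |= {tuple(x ^ y for x, y in zip(v, b)) for v in vecs}
    return list(vecs)

n = 4
A = span_f2([(1, 0, 0, 0), (0, 1, 0, 0)], n)   # first subspace
B = span_f2([(0, 0, 1, 1)], n)                 # second subspace

def sample_mixture():
    # Each sample is uniform over one of the two subspaces.
    return random.choice(A if random.random() < 0.5 else B)

print([sample_mixture() for _ in range(3)])
```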

Robust testing of low-dimensional functions

no code implementations24 Apr 2020 Anindya De, Elchanan Mossel, Joe Neeman

Using our techniques, we also obtain a fully noise-tolerant tester with the same query complexity for any class $\mathcal{C}$ of linear $k$-juntas with surface area bounded by $s$.

Model Compression
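
Here a linear $k$-junta is a function on $\mathbb{R}^n$ that depends on its input only through $k$ linear projections. A hedged sketch of such a function under Gaussian inputs (the tester itself is beyond a snippet; the particular directions and threshold are made up):

```python
import math
import random

n = 10
# Two fixed unit directions; f depends on x only through these projections.
v1 = [1.0 if i == 0 else 0.0 for i in range(n)]
v2 = [1.0 / math.sqrt(2) if i in (1, 2) else 0.0 for i in range(n)]

def linear_k_junta(x):
    p1 = sum(a * b for a, b in zip(v1, x))
    p2 = sum(a * b for a, b in zip(v2, x))
    return 1 if p1 + p2 ** 2 >= 1.0 else -1   # any function of (p1, p2) works

x = [random.gauss(0, 1) for _ in range(n)]    # Gaussian input, as in this setting
print(linear_k_junta(x))
```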

Learning from satisfying assignments under continuous distributions

no code implementations2 Jul 2019 Clément L. Canonne, Anindya De, Rocco A. Servedio

We give a range of efficient algorithms and hardness results for this problem, focusing on the case when $f$ is a low-degree polynomial threshold function (PTF).
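
To make the input model concrete: a PTF is the sign of a low-degree polynomial, and the learner sees samples drawn from a background distribution conditioned on the PTF accepting. A hypothetical rejection-sampling sketch under a Gaussian background (the polynomial below is arbitrary):

```python
import random

def ptf(x):
    """Degree-2 polynomial threshold function: the sign of p(x)."""
    p = x[0] * x[1] + 0.5 * x[2] ** 2 - 1.0
    return 1 if p >= 0 else -1

def satisfying_sample(n=3):
    # Rejection sampling: draw Gaussian points until the PTF accepts.
    while True:
        x = [random.gauss(0, 1) for _ in range(n)]
        if ptf(x) == 1:
            return x

print(satisfying_sample())
```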

Density estimation for shift-invariant multidimensional distributions

no code implementations9 Nov 2018 Anindya De, Philip M. Long, Rocco A. Servedio

This implies that, for constant $d$, multivariate log-concave distributions can be learned in $\tilde{O}_d(1/\epsilon^{2d+2})$ time using $\tilde{O}_d(1/\epsilon^{d+2})$ samples, answering a question of [Diakonikolas, Kane and Stewart, 2016]. All of our results extend to a model of noise-tolerant density estimation using Huber's contamination model, in which the target distribution to be learned is a $(1-\epsilon,\epsilon)$ mixture of some unknown distribution in the class with some other arbitrary and unknown distribution, and the learning algorithm must output a hypothesis distribution with total variation distance error $O(\epsilon)$ from the target distribution.

Density Estimation
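
The Huber contamination model mentioned above is easy to state in code: with probability $1-\epsilon$ a sample comes from the clean target, and with probability $\epsilon$ from an arbitrary distribution. A minimal sketch, with both component distributions chosen purely for illustration:

```python
import random

EPS = 0.1

def clean_sample():
    return random.gauss(0.0, 1.0)        # stand-in for a log-concave target

def arbitrary_sample():
    return random.uniform(50.0, 60.0)    # arbitrary, unknown contamination

def huber_sample():
    # (1 - eps, eps) mixture: mostly clean, occasionally arbitrary.
    return clean_sample() if random.random() < 1 - EPS else arbitrary_sample()

print([huber_sample() for _ in range(10)])
```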

Learning sparse mixtures of rankings from noisy information

no code implementations3 Nov 2018 Anindya De, Ryan O'Donnell, Rocco Servedio

We study the problem of learning an unknown mixture of $k$ rankings over $n$ elements, given access to noisy samples drawn from the unknown mixture.
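
The paper's exact noise model is not reproduced here; as a purely illustrative stand-in, the sketch below draws a center ranking from a sparse mixture and perturbs it with a few random adjacent transpositions. Centers, weights, and the noise are all hypothetical.

```python
import random

centers = [list(range(5)), [4, 3, 2, 1, 0]]   # k = 2 rankings over n = 5 elements
weights = [0.7, 0.3]

def noisy_sample(num_swaps=2):
    base = random.choices(centers, weights)[0][:]
    for _ in range(num_swaps):               # illustrative noise: adjacent swaps
        i = random.randrange(len(base) - 1)
        base[i], base[i + 1] = base[i + 1], base[i]
    return base

print([noisy_sample() for _ in range(3)])
```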

Learning Sums of Independent Random Variables with Sparse Collective Support

no code implementations18 Jul 2018 Anindya De, Philip M. Long, Rocco A. Servedio

For the case $| \mathcal{A} | = 3$, we give an algorithm for learning $\mathcal{A}$-sums to accuracy $\epsilon$ that uses $\mathsf{poly}(1/\epsilon)$ samples and runs in time $\mathsf{poly}(1/\epsilon)$, independent of $N$ and of the elements of $\mathcal{A}$.
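
An $\mathcal{A}$-sum is a sum of $N$ independent random variables, each supported on the same small set $\mathcal{A}$ but otherwise arbitrary. A hypothetical sampler for $|\mathcal{A}| = 3$; note the support elements can be arbitrarily large, matching the result's independence from the elements of $\mathcal{A}$:

```python
import random

A = [0, 5, 1000]   # collective support, |A| = 3 (values may be huge)
N = 100

def sample_A_sum():
    # Sum of N independent variables; each has its own distribution over A.
    total = 0
    for i in range(N):
        w = [1 + i % 3, 2, 1]                 # arbitrary per-variable weights
        total += random.choices(A, weights=w)[0]
    return total

print(sample_A_sum())
```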

Sharp bounds for population recovery

no code implementations4 Mar 2017 Anindya De, Ryan O'Donnell, Rocco Servedio

The population recovery problem is a basic problem in noisy unsupervised learning that has attracted significant research attention in recent years [WY12, DRWY12, MS13, BIMP13, LZ15, DST16].

Optimal mean-based algorithms for trace reconstruction

no code implementations9 Dec 2016 Anindya De, Ryan O'Donnell, Rocco Servedio

For any constant deletion rate $0 < \delta < 1$, we give a mean-based algorithm that uses $\exp(O(n^{1/3}))$ time and traces; we also prove that any mean-based algorithm must use at least $\exp(\Omega(n^{1/3}))$ traces.
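
A trace is what survives passing $x \in \{0,1\}^n$ through a deletion channel, and a mean-based algorithm looks only at per-position averages of zero-padded traces. A minimal simulation of the channel and that statistic (illustrative only; recovering $x$ from the means is the hard part):

```python
import random

def trace(x, delta):
    """Deletion channel: each bit is deleted independently with prob. delta."""
    return [b for b in x if random.random() >= delta]

x = [1, 0, 1, 1, 0, 0, 1, 0]
n, delta, m = len(x), 0.3, 50_000

# Mean-based statistic: average the zero-padded traces position by position.
means = [0.0] * n
for _ in range(m):
    t = trace(x, delta)
    for j, b in enumerate(t):
        means[j] += b / m

print([round(v, 3) for v in means])
```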

Noisy population recovery in polynomial time

no code implementations24 Feb 2016 Anindya De, Michael Saks, Sijian Tang

We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k}, n, 1/\varepsilon)$ due to Lovett and Zhang.
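
In noisy population recovery the unknown distribution is supported on at most $k$ strings in $\{0,1\}^n$, and each observed sample has every bit flipped independently with probability $(1-\mu)/2$. A sketch of the observation channel (the support and weights below are hypothetical):

```python
import random

MU = 0.4
FLIP_P = (1 - MU) / 2   # each bit flips independently with this probability

population = {(0, 0, 1, 1): 0.6, (1, 1, 1, 0): 0.4}   # support size k = 2

def noisy_sample():
    s = random.choices(list(population), list(population.values()))[0]
    return tuple(b ^ (random.random() < FLIP_P) for b in s)

print([noisy_sample() for _ in range(3)])
```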

A Size-Free CLT for Poisson Multinomials and its Applications

no code implementations11 Nov 2015 Constantinos Daskalakis, Anindya De, Gautam Kamath, Christos Tzamos

Finally, leveraging the structural properties of the Fourier spectrum of PMDs we show that these distributions can be learned from $O_k(1/\varepsilon^2)$ samples in ${\rm poly}_k(1/\varepsilon)$-time, removing the quasi-polynomial dependence of the running time on $1/\varepsilon$ from the algorithm of Daskalakis, Kamath, and Tzamos.
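
An $(n, k)$-Poisson multinomial distribution (PMD) is the law of a sum of $n$ independent random vectors, each supported on the standard basis vectors $e_1, \dots, e_k$. A hypothetical sampler, with arbitrary per-summand distributions:

```python
import random

n, k = 50, 3

def sample_pmd():
    # Each of the n summands picks one coordinate of R^k to increment.
    total = [0] * k
    for i in range(n):
        probs = [1 + (i + j) % 2 for j in range(k)]   # arbitrary weights
        total[random.choices(range(k), weights=probs)[0]] += 1
    return tuple(total)

print(sample_pmd())   # coordinates always sum to n
```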

Inverse problems in approximate uniform generation

no code implementations7 Nov 2012 Anindya De, Ilias Diakonikolas, Rocco A. Servedio

In such an inverse problem, the algorithm is given uniform random satisfying assignments of an unknown function $f$ belonging to a class $\mathcal{C}$ of Boolean functions, and the goal is to output a probability distribution $D$ which is $\epsilon$-close, in total variation distance, to the uniform distribution over $f^{-1}(1)$.
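
To make the input model concrete, the sketch below generates uniform satisfying assignments of a simple Boolean function (a halfspace, via rejection sampling); the inverse problem asks the learner to output a distribution close to uniform over $f^{-1}(1)$ given only such samples. The particular halfspace is arbitrary.

```python
import random

def f(x):
    """A halfspace over the Boolean cube: one possible member of the class C."""
    return int(sum(x) >= 3)

def uniform_satisfying_assignment(n=5):
    # Rejection sampling from the uniform distribution over f^{-1}(1).
    while True:
        x = tuple(random.randint(0, 1) for _ in range(n))
        if f(x) == 1:
            return x

print([uniform_satisfying_assignment() for _ in range(3)])
```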
