no code implementations • 4 Nov 2024 • Hassan Ashtiani, Mahbod Majid, Shyam Narayanan
We study the problem of learning mixtures of Gaussians with approximate differential privacy.
no code implementations • 5 Jul 2024 • Mohammad Afzali, Hassan Ashtiani, Christopher Liaw
We consider the problem of private density estimation for mixtures of unrestricted high dimensional Gaussians in the agnostic setting.
no code implementations • 9 Dec 2023 • Alireza F. Pour, Hassan Ashtiani, Shahab Asoodeh
Namely, it breaks the known lower bound of $\Omega\left(\frac{k\log k}{\alpha^2\min \{ \varepsilon^2 , 1\}} \right)$ for the sample complexity of non-interactive hypothesis selection.
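As a quick numeric illustration (not from the paper), the sketch below evaluates this lower bound, with the hidden constant suppressed, at hypothetical parameter values for $k$, $\alpha$, and $\varepsilon$:

```python
import math

def non_interactive_lower_bound(k, alpha, eps):
    """Evaluate Omega(k log k / (alpha^2 * min(eps^2, 1))) up to the
    hidden constant, for non-interactive hypothesis selection."""
    return k * math.log(k) / (alpha ** 2 * min(eps ** 2, 1.0))

# Hypothetical instance: k = 100 hypotheses, accuracy alpha = 0.1, privacy eps = 0.5.
print(f"{non_interactive_lower_bound(100, 0.1, 0.5):,.0f}")  # ~184,207 (times a constant)
```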
no code implementations • 7 Sep 2023 • Mohammad Afzali, Hassan Ashtiani, Christopher Liaw
We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP).
no code implementations • 7 Mar 2023 • Jamil Arbas, Hassan Ashtiani, Christopher Liaw
We study the problem of privately estimating the parameters of $d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components.
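For context, the snippet below sketches the standard Gaussian mechanism for an $(\varepsilon, \delta)$-DP mean estimate, a common building block in this line of work. It is not the algorithm of this paper; the clipping radius and data are hypothetical.

```python
import numpy as np

def dp_mean(samples, clip_radius, eps, delta):
    """Gaussian mechanism for an (eps, delta)-DP mean estimate: clip each
    sample to an L2 ball, average, and add calibrated Gaussian noise."""
    n, d = samples.shape
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    clipped = samples * np.minimum(1.0, clip_radius / np.maximum(norms, 1e-12))
    # Replacing one sample moves the clipped mean by at most 2 * clip_radius / n.
    sensitivity = 2.0 * clip_radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + np.random.normal(0.0, sigma, size=d)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=(10_000, 2))
print(dp_mean(x, clip_radius=10.0, eps=1.0, delta=1e-5))
```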
1 code implementation • 14 Jun 2022 • Alireza Fathollah Pour, Hassan Ashtiani
We observe that given two (compatible) classes of functions $\mathcal{F}$ and $\mathcal{H}$ with small capacity as measured by their uniform covering numbers, the capacity of the composition class $\mathcal{H} \circ \mathcal{F}$ can become prohibitively large or even unbounded.
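As a toy illustration of the phenomenon (not the paper's construction), the sketch below greedily estimates sup-norm covering numbers on a fixed sample for two small function classes and for their composition; the class definitions are hypothetical:

```python
import numpy as np

def greedy_cover_size(evals, eps):
    """Greedy upper estimate of the size of an eps-cover (sup norm on the
    sample) of a finite class given by its evaluation vectors."""
    centers = []
    for v in evals:
        if not any(np.max(np.abs(v - c)) <= eps for c in centers):
            centers.append(v)
    return len(centers)

xs = np.linspace(0.0, 1.0, 50)
# F: clipped affine "feature maps" evaluated on the sample.
F = [np.clip(a * xs + b, 0.0, 1.0)
     for a in np.linspace(-2, 2, 9) for b in np.linspace(-1, 1, 9)]
# H: sinusoids of increasing frequency, as functions of a feature value in [0, 1].
freqs = range(1, 9)
H = [np.sin(w * np.pi * np.linspace(0.0, 1.0, 50)) for w in freqs]
# The composition class H o F evaluated on the same sample.
comp = [np.sin(w * np.pi * f) for w in freqs for f in F]

eps = 0.25
print(greedy_cover_size(F, eps), greedy_cover_size(H, eps),
      greedy_cover_size(comp, eps))  # composition cover far exceeds the pieces
```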
no code implementations • 2 Mar 2022 • Hassan Ashtiani, Vinayak Pathak, Ruth Urner
In the tolerant version, the error of the learner is compared with the best error achievable with respect to a slightly larger perturbation radius $(1+\gamma)r$.
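A minimal sketch of this tolerant comparison, in a hypothetical toy setting (1-D threshold classifiers under interval perturbations, not the paper's general one): the learner's best robust error at radius $r$ is measured against the best achievable at the inflated radius $(1+\gamma)r$, a weaker benchmark.

```python
import numpy as np

def robust_error(threshold, xs, ys, r):
    """Empirical robust 0/1 error of h(x) = 1[x >= threshold] when every
    point may be perturbed anywhere in [x - r, x + r]."""
    robustly_correct = np.where(ys == 1, xs - r >= threshold, xs + r < threshold)
    return 1.0 - robustly_correct.mean()

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, 2000)
ys = (xs >= 0).astype(int)

r, gamma = 0.1, 0.5
grid = np.linspace(-1, 1, 401)
learner = min(robust_error(t, xs, ys, r) for t in grid)
reference = min(robust_error(t, xs, ys, (1 + gamma) * r) for t in grid)
print(f"best err at r: {learner:.3f}   best err at (1+gamma)r: {reference:.3f}")
```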
no code implementations • 22 Nov 2021 • Hassan Ashtiani, Christopher Liaw
As another application of our framework, we provide the first polynomial-time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians with sample complexity $\widetilde{O}(d^{3.5})$.
no code implementations • NeurIPS 2021 • Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw
We show that if $\mathcal{F}$ is privately list-decodable, then we can privately learn mixtures of distributions in $\mathcal{F}$.
no code implementations • 19 Oct 2020 • Ishaq Aden-Ali, Hassan Ashtiani, Gautam Kamath
These are the first finite sample upper bounds for general Gaussians which do not impose restrictions on the parameters of the distribution.
no code implementations • ICML 2020 • Hassan Ashtiani, Vinayak Pathak, Ruth Urner
We formally study the problem of classification under adversarial perturbations from the perspective of a learner, as well as that of a third party who aims to certify the robustness of a given black-box classifier.
no code implementations • 5 Dec 2019 • Ishaq Aden-Ali, Hassan Ashtiani
We show that the sample complexity of learning tree-structured SPNs with the usual type of leaves (i.e., Gaussian or discrete) grows at most linearly (up to logarithmic factors) with the number of parameters of the SPN.
1 code implementation • NeurIPS 2019 • Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong
Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between subjects in the associated parameter space.
no code implementations • NeurIPS 2018 • Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
no code implementations • 11 Jan 2018 • Hassan Ashtiani, Abbas Mehrabian
Density estimation is an interdisciplinary topic at the intersection of statistics, theoretical computer science and machine learning.
no code implementations • 14 Oct 2017 • Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
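Ignoring the hidden constant and logarithmic factors, the bound is easy to evaluate numerically; the instance below is hypothetical:

```python
def gmm_sample_complexity(k, d, eps):
    """Order of the tilde-Theta(k d^2 / eps^2) bound, ignoring the hidden
    constant and logarithmic factors."""
    return k * d ** 2 / eps ** 2

# Hypothetical instance: 10 components in 100 dimensions, TV error 0.05.
print(f"{gmm_sample_complexity(10, 100, 0.05):,.0f}")  # 40,000,000 (up to logs)
```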
no code implementations • 6 Jun 2017 • Hassan Ashtiani, Shai Ben-David, Abbas Mehrabian
Let $\mathcal F$ be an arbitrary class of probability distributions, and let $\mathcal{F}^k$ denote the class of $k$-mixtures of elements of $\mathcal F$.
no code implementations • NeurIPS 2016 • Hassan Ashtiani, Shrinu Kushagra, Shai Ben-David
We show that there is a trade-off between computational complexity and query complexity; we prove that, for the case of $k$-means clustering (i.e., when the expert conforms to a solution of $k$-means), having access to relatively few such queries allows efficient solutions to otherwise NP-hard problems.
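A minimal sketch of how same-cluster queries can drive clustering: a naive scheme using $O(k)$ queries per point, not the paper's more query-efficient algorithm, with a hypothetical oracle standing in for the expert.

```python
def cluster_with_queries(points, same_cluster):
    """Assign every point to a cluster using a same_cluster(i, j) oracle,
    keeping one representative index per cluster discovered so far."""
    reps, labels = [], []
    for i, _ in enumerate(points):
        for c, rep in enumerate(reps):
            if same_cluster(i, rep):
                labels.append(c)
                break
        else:
            reps.append(i)               # point i starts a new cluster
            labels.append(len(reps) - 1)
    return labels

# Toy oracle: ground-truth clusters encoded as integer ids.
truth = [0, 1, 0, 2, 1, 2, 0]
oracle = lambda i, j: truth[i] == truth[j]
print(cluster_with_queries(truth, oracle))  # recovers the partition up to relabeling
```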
no code implementations • 19 Jun 2015 • Hassan Ashtiani, Shai Ben-David
The algorithm designer then uses that sample to come up with a data representation under which $k$-means clustering results in a clustering (of the full data set) that is aligned with the user's clustering.