Search Results for author: Iraj Saniee

Found 4 papers, 0 papers with code

Efficient Deep Approximation of GMMs

no code implementations · NeurIPS 2019 · Shirin Jalali, Carl Nuzman, Iraj Saniee

The universal approximation theorem states that any sufficiently regular function can be approximated to arbitrary accuracy by a neural network with a single hidden layer.

General Classification
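
As a quick illustration of the single-hidden-layer setting the abstract refers to, the sketch below fits a one-hidden-layer ReLU network to a smooth 1-D function by drawing random hidden weights and solving least squares for the output layer only. The width, target function, and fitting method are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumptions, not the paper's construction): a single
# hidden layer of random ReLU features, with output weights fit by least
# squares, closely approximates a smooth 1-D function -- the setting the
# universal approximation theorem describes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400)[:, None]          # inputs in R^1
y = np.sin(2 * x).ravel()                     # target "regular" function

m = 200                                       # hidden-layer width (assumed)
W = rng.normal(size=(1, m))                   # random input weights
b = rng.normal(size=m)                        # random biases
H = np.maximum(x @ W + b, 0.0)                # hidden ReLU activations

a, *_ = np.linalg.lstsq(H, y, rcond=None)     # fit the output layer only
err = np.max(np.abs(H @ a - y))
print(f"max approximation error: {err:.4f}")
```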

Efficient Deep Learning of GMMs

no code implementations · 15 Feb 2019 · Shirin Jalali, Carl Nuzman, Iraj Saniee

We show that a collection of Gaussian mixture models (GMMs) in $\mathbb{R}^n$ can be optimally classified using $O(n)$ neurons in a neural network with two hidden layers (a deep neural network), whereas a neural network with a single hidden layer (a shallow neural network) would require $\Omega(\exp(n))$ neurons or possibly exponentially large coefficients.

General Classification
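
For intuition about what optimal classification of Gaussian data involves, the sketch below evaluates the Bayes-optimal quadratic discriminant (QDA) rule for two Gaussian classes; a quadratic decision function of this kind is the sort of target the paper argues a two-hidden-layer network can realize with $O(n)$ neurons. The dimension, means, and covariances are illustrative assumptions, and the network construction itself is not shown here.

```python
# Minimal sketch (assumptions, not the paper's construction): the optimal
# classifier for two Gaussian classes is quadratic in x (QDA) -- the kind
# of function a two-hidden-layer network can compute efficiently.
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 2000                                   # dimension, samples/class
mu0, mu1 = np.zeros(n), np.full(n, 0.5)
S0, S1 = np.eye(n), 2.0 * np.eye(n)               # class covariances (assumed)

X0 = rng.multivariate_normal(mu0, S0, size=m)
X1 = rng.multivariate_normal(mu1, S1, size=m)

def qda_score(X, mu, S):
    """Log-density of N(mu, S) at each row of X, up to a shared constant."""
    d = X - mu
    Sinv = np.linalg.inv(S)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (np.einsum('ij,jk,ik->i', d, Sinv, d) + logdet)

X = np.vstack([X0, X1])
y = np.repeat([0, 1], m)
pred = (qda_score(X, mu1, S1) > qda_score(X, mu0, S0)).astype(int)
print(f"Bayes (QDA) accuracy: {(pred == y).mean():.3f}")
```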

Linear Time Clustering for High Dimensional Mixtures of Gaussian Clouds

no code implementations19 Dec 2017 Dan Kushnir, Shirin Jalali, Iraj Saniee

Consequently, the expected overall running time of the algorithm is linear in $n$ and quasi-linear in $p$, namely $o(\ln p) \cdot O(np)$, and the sample complexity is independent of $p$.

Clustering · Computational Efficiency · +1
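
The sketch below is a generic random-projection split, not the paper's algorithm: each pass projects all $n$ points in $\mathbb{R}^p$ onto a random direction at $O(np)$ cost and splits the 1-D projection at its widest gap. It only illustrates the linear-in-$n$ cost regime the abstract describes; the cluster geometry and number of passes are assumptions.

```python
# Minimal sketch (a generic random-projection split, not the paper's
# algorithm): each pass costs O(np) for the projection, illustrating how
# projection-based clustering stays linear in the number of points n.
import numpy as np

rng = np.random.default_rng(2)
n, p = 5000, 100
X = np.vstack([
    rng.normal(size=(n // 2, p)) + 4.0,    # cloud 1, mean +4 per coordinate
    rng.normal(size=(n // 2, p)) - 4.0,    # cloud 2, mean -4 per coordinate
])

best_gap, best_z, best_t = -np.inf, None, None
for _ in range(10):                        # a few O(np) passes
    u = rng.normal(size=p)
    u /= np.linalg.norm(u)                 # random unit direction
    z = X @ u                              # O(np) projection to 1-D
    zs = np.sort(z)
    i = int(np.argmax(np.diff(zs)))        # widest gap in the projection
    gap = zs[i + 1] - zs[i]
    if gap > best_gap:
        best_gap, best_z = gap, z
        best_t = 0.5 * (zs[i] + zs[i + 1]) # split in the middle of the gap

labels = (best_z > best_t).astype(int)
print("cluster sizes:", np.bincount(labels))
```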

A New Family of Near-metrics for Universal Similarity

no code implementations · 21 Jul 2017 · Chu Wang, Iraj Saniee, William S. Kennedy, Chris A. White

We show that for structured data, including categorical and continuous data, the near-metrics corresponding to normalized forward k-step diffusion (k small) are among the best-performing similarity measures; for vector representations of text and images, including those extracted by deep learning, the near-metrics derived from normalized and reverse k-step graph diffusion (k very small) are outstanding at distinguishing data points from different classes.
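
One plausible reading of "normalized forward k-step diffusion" is sketched below; the exact definition in the paper may differ. The idea: row-normalize a Gaussian affinity matrix into a random-walk operator $P = D^{-1}W$ and read pairwise similarities off $P^k$ for small $k$. The data, bandwidth choice, and value of $k$ are illustrative assumptions.

```python
# Minimal sketch (one plausible reading of "normalized forward k-step
# diffusion", not the paper's exact definition): similarities are entries
# of the k-th power of a row-stochastic random-walk operator.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))                  # toy data points in R^5

# Gaussian affinities, then the random-walk (forward diffusion) operator
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
W = np.exp(-D2 / np.median(D2))               # bandwidth: median heuristic
P = W / W.sum(axis=1, keepdims=True)          # row-stochastic: P = D^{-1} W

k = 2                                         # "k small", per the abstract
Pk = np.linalg.matrix_power(P, k)
sim = Pk                                      # sim[i, j]: mass from i to j in k steps

print("most similar to point 0:", np.argsort(-sim[0])[:5])
```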
