no code implementations • NeurIPS 2019 • Shirin Jalali, Carl Nuzman, Iraj Saniee
The universal approximation theorem states that any continuous function on a compact domain can be approximated arbitrarily well by a neural network with a single hidden layer.
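As a worked statement of the theorem this entry appeals to (a standard form, not quoted from the paper): for any continuous $f$ on a compact set $K \subset \mathbb{R}^{n}$, any non-polynomial activation $\sigma$, and any $\varepsilon > 0$, there exist a width $N$ and parameters $a_i, w_i, b_i$ such that

$$\sup_{x \in K}\left| f(x) - \sum_{i=1}^{N} a_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon .$$

The catch, which motivates the depth results in this line of work, is that the required width $N$ can grow very quickly with the input dimension $n$.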
no code implementations • 15 Feb 2019 • Shirin Jalali, Carl Nuzman, Iraj Saniee
We show that a collection of Gaussian mixture models (GMMs) in $\mathbb{R}^{n}$ can be optimally classified using $O(n)$ neurons in a neural network with two hidden layers (a deep neural network), whereas a neural network with a single hidden layer (a shallow neural network) would require either exponentially many neurons, $\exp(\Omega(n))$, or exponentially large coefficients.
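A minimal illustrative sketch of the setting, not the paper's construction: data drawn from two Gaussian mixtures in $\mathbb{R}^{n}$, classified by a small two-hidden-layer network. The use of `MLPClassifier`, the mixture parameters, and the width $4n$ are my own choices for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 20          # ambient dimension
m = 2000        # samples per class

def sample_gmm(means, size):
    """Draw `size` points from a mixture of spherical Gaussians centered at `means`."""
    comps = rng.integers(len(means), size=size)
    return means[comps] + rng.normal(scale=0.5, size=(size, n))

# Class 0 and class 1 are each a two-component Gaussian mixture; class 1 is shifted.
means0 = rng.normal(size=(2, n))
means1 = rng.normal(size=(2, n)) + 1.5
X = np.vstack([sample_gmm(means0, m), sample_gmm(means1, m)])
y = np.array([0] * m + [1] * m)

# Two hidden layers of width proportional to n -- the regime the entry describes.
clf = MLPClassifier(hidden_layer_sizes=(4 * n, 4 * n), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

The width here is a heuristic choice; the result in the entry concerns the existence of an $O(n)$-neuron two-hidden-layer classifier, not what a particular training run happens to find.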
no code implementations • 19 Dec 2017 • Dan Kushnir, Shirin Jalali, Iraj Saniee
Consequently, the expected overall running time of the algorithm is linear in $n$ and quasi-linear in $p$, namely $o(\ln p)\cdot O(np)$, and the sample complexity is independent of $p$.
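Reading the stated bound as a product (my interpretation of the notation): a per-pass cost of $O(np)$ over $n$ samples in $p$ dimensions, repeated $o(\ln p)$ times, gives

$$T(n, p) \;=\; o(\ln p)\cdot O(np) \;=\; o\!\left(np\ln p\right),$$

which is linear in $n$ and quasi-linear in $p$, as the abstract states.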
no code implementations • 21 Jul 2017 • Chu Wang, Iraj Saniee, William S. Kennedy, Chris A. White
We show that for structured data, including categorical and continuous attributes, the near-metrics corresponding to normalized forward $k$-step diffusion (for small $k$) are among the best-performing similarity measures; for vector representations of text and images, including those extracted from deep learning, the near-metrics derived from normalized and reverse $k$-step graph diffusion (for very small $k$) are outstanding at distinguishing data points from different classes.
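As a rough sketch of what a normalized $k$-step diffusion similarity can look like (my own minimal construction, not the paper's exact definition): build an affinity matrix, row-normalize it into a transition matrix $P$, take $P^{k}$ for small $k$, and compare points through the resulting rows.

```python
import numpy as np

def diffusion_similarity(X, k=2, sigma=1.0):
    """Normalized k-step diffusion affinities between the rows of X."""
    # Pairwise squared distances and a Gaussian affinity matrix W.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    # Row-normalize W into a Markov transition matrix P = D^{-1} W.
    P = W / W.sum(axis=1, keepdims=True)
    # k-step forward diffusion: (P^k)[i, j] is the mass point i sends to point j in k steps.
    return np.linalg.matrix_power(P, k)

# Toy usage: two tight clusters; within-cluster diffusion affinities dominate.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, size=(5, 3)), rng.normal(3, 0.1, size=(5, 3))])
S = diffusion_similarity(X, k=2)
print(np.round(S[0], 3))   # mass from point 0 stays almost entirely in its own cluster
```

The "reverse" variant mentioned in the abstract could be read as diffusing along the transposed (column-normalized) transitions instead; the precise definitions and the normalization used in the experiments are in the paper.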