1 code implementation • 31 Dec 2019 • Abbas Kazemipour, Brett W. Larsen, Shaul Druckmann
Despite their practical success, a theoretical understanding of the loss landscape of neural networks has proven challenging due to the high-dimensional, non-convex, and highly nonlinear structure of such models.
no code implementations • 20 Oct 2016 • Abbas Kazemipour, Ji Liu, Patrick Kanold, Min Wu, Behtash Babadi
In this paper, we consider linear state-space models with compressible innovations and convergent transition matrices in order to model spatiotemporally sparse transient events.
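The model class described above can be illustrated with a short simulation. This is a hypothetical sketch, not the paper's implementation: the dimensions, event rate, and the specific transition matrix `A` are assumptions chosen for illustration. It shows a linear state-space model driven by temporally sparse (compressible) innovations under a convergent (spectral radius below one) transition matrix, which yields the spatiotemporally sparse transient events the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: linear state-space model
#   x_t = A x_{t-1} + w_t
# where A is convergent (spectral radius < 1) and the innovations w_t
# are sparse in time and space (rare single-coordinate impulses).
d, T = 4, 100                      # state dimension, number of time steps
A = 0.9 * np.eye(d)                # convergent transition matrix (assumed)

x = np.zeros((T, d))
for t in range(1, T):
    innov = np.zeros(d)
    if rng.random() < 0.05:        # a transient event occurs rarely
        innov[rng.integers(d)] = rng.normal()
    x[t] = A @ x[t - 1] + innov    # each event then decays geometrically
```

Because `A` is convergent, each rare impulse produces a transient that decays geometrically, so the state trajectory is dominated by short-lived events separated by near-zero stretches.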
no code implementations • 4 May 2016 • Abbas Kazemipour, Sina Miran, Piya Pal, Behtash Babadi, Min Wu
Assuming that the parameters are compressible, we analyze the performance of the $\ell_1$-regularized least squares estimator as well as a greedy estimator of the parameters, and characterize the sampling trade-offs required for stable recovery in the non-asymptotic regime.
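The $\ell_1$-regularized least squares estimator mentioned above can be sketched concretely. This is a generic illustration under assumed dimensions and noise level, not the paper's experimental setup: a sparse (hence compressible) parameter vector is recovered from noisy linear measurements by minimizing $\tfrac{1}{2}\|y - X\theta\|_2^2 + \lambda\|\theta\|_1$, solved here with plain iterative soft-thresholding (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (dimensions and noise level are assumptions):
# recover a sparse theta from y = X @ theta + noise.
n, p, k = 80, 200, 5                     # samples, dimension, sparsity
theta_true = np.zeros(p)
theta_true[:k] = rng.normal(0, 1, k)
X = rng.normal(0, 1, (n, p)) / np.sqrt(n)
y = X @ theta_true + 0.01 * rng.normal(0, 1, n)

# ISTA: gradient step on the least-squares term, then soft-thresholding
# to enforce the l1 penalty.
lam = 0.02
L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
theta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ theta - y)
    z = theta - grad / L
    theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

rel_err = np.linalg.norm(theta - theta_true) / np.linalg.norm(theta_true)
```

Even with far fewer measurements than parameters ($n < p$), the $\ell_1$ penalty drives most coordinates exactly to zero and the large entries of `theta_true` are recovered up to a small relative error, which is the qualitative behavior the non-asymptotic analysis quantifies.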
1 code implementation • 14 Jul 2015 • Abbas Kazemipour, Min Wu, Behtash Babadi
We consider the problem of estimating self-exciting generalized linear models from limited binary observations, where the history of the process serves as the covariate.
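A minimal version of this estimation problem can be sketched as follows. The kernel length, baseline, and fitting procedure here are assumptions for illustration, not the paper's method: we simulate a self-exciting Bernoulli GLM in which the spiking probability at each time step depends on the previous few binary outcomes (the process history serving as the covariate), then recover the parameters by maximum likelihood via gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Hypothetical self-exciting Bernoulli GLM (parameters are assumptions):
# P(x_t = 1 | history) = sigmoid(mu + w @ [x_{t-1}, ..., x_{t-m}]).
m, T = 3, 2000                             # history length, record length
mu_true = -1.0                             # baseline log-odds
w_true = np.array([0.8, 0.3, -0.2])        # history kernel

x = np.zeros(T, dtype=int)
for t in range(m, T):
    hist = x[t - m:t][::-1]                # most recent outcome first
    x[t] = rng.random() < sigmoid(mu_true + w_true @ hist)

# Maximum-likelihood fit: the binary history is the design matrix,
# and we ascend the Bernoulli log-likelihood gradient Z.T @ (y - p).
Z = np.hstack([np.ones((T - m, 1)),
               np.array([x[t - m:t][::-1] for t in range(m, T)], float)])
y = x[m:].astype(float)
beta = np.zeros(m + 1)                     # [mu, w_1, ..., w_m]
for _ in range(2000):
    p = sigmoid(Z @ beta)
    beta += 0.1 / len(y) * Z.T @ (y - p)
```

With limited binary data the likelihood surface is well behaved here because the history covariates are bounded; the paper's contribution concerns what can be guaranteed when the number of observations is small relative to the parameter dimension.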