no code implementations • ICML 2020 • Fariborz Salehi, Ehsan Abbasi, Babak Hassibi
The performance of hard-margin SVM has been recently analyzed in~\cite{montanari2019generalization, deng2019model}.
no code implementations • 29 Oct 2020 • Fariborz Salehi, Ehsan Abbasi, Babak Hassibi
We also provide a detailed study for three special cases: (1) $\ell_2$-GMM, which is the max-margin classifier, (2) $\ell_1$-GMM, which encourages sparsity, and (3) $\ell_{\infty}$-GMM, which is often used when the parameter vector has binary entries.
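To make the special cases concrete, here is a minimal sketch of the $\ell_1$-GMM: minimizing $\|\mathbf w\|_1$ subject to $y_i\,\mathbf x_i^T\mathbf w \ge 1$, which is a linear program after splitting $\mathbf w$ into its positive and negative parts. This is an illustration on synthetic separable data, not the authors' analysis; the dimensions, seed, and use of `scipy.optimize.linprog` are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m = 10, 80
w_true = np.zeros(n)
w_true[:3] = [2.0, -1.5, 1.0]          # sparse ground-truth direction (assumed)
X = rng.standard_normal((m, n))
ylab = np.sign(X @ w_true)             # labels -> data is linearly separable

# l1-GMM: min ||w||_1  s.t.  y_i <x_i, w> >= 1.
# Split w = wp - wm with wp, wm >= 0 to get a linear program.
YX = ylab[:, None] * X
A_ub = np.hstack([-YX, YX])            # -y_i x_i^T (wp - wm) <= -1
b_ub = -np.ones(m)
c = np.ones(2 * n)                     # objective: sum(wp) + sum(wm) = ||w||_1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
w = res.x[:n] - res.x[n:]              # recovered l1 max-margin direction
```

The same template gives the $\ell_\infty$-GMM (minimize $\|\mathbf w\|_\infty$, also an LP) and the $\ell_2$-GMM (a quadratic program, i.e., the usual hard-margin SVM).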
no code implementations • NeurIPS 2019 • Fariborz Salehi, Ehsan Abbasi, Babak Hassibi
In both cases, we obtain explicit expressions for various performance metrics and can find the values of the regularization parameter that optimize the desired performance.
no code implementations • NeurIPS 2018 • Fariborz Salehi, Ehsan Abbasi, Babak Hassibi
The problem of estimating an unknown signal $\mathbf x_0\in \mathbb R^n$ from a vector $\mathbf y\in \mathbb R^m$ consisting of $m$ magnitude-only measurements of the form $y_i=|\mathbf a_i\mathbf x_0|$, where the $\mathbf a_i$'s are the rows of a known measurement matrix $\mathbf A$, is a classical problem known as phase retrieval.
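The measurement model above can be simulated in a few lines, together with a standard spectral initialization (the leading eigenvector of a $y_i^2$-weighted covariance of the rows), which recovers the direction of $\mathbf x_0$ up to a global sign. This is a generic textbook sketch, not the estimator analyzed in the paper; the dimensions and seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 400
A = rng.standard_normal((m, n))        # rows a_i of the measurement matrix
x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)               # unit-norm unknown signal
y = np.abs(A @ x0)                     # magnitude-only measurements y_i = |a_i x0|

# Spectral initialization: E[y_i^2 a_i a_i^T] = I + 2 x0 x0^T, so the
# top eigenvector of the empirical weighted covariance aligns with x0.
D = (A * (y ** 2)[:, None]).T @ A / m
eigvals, eigvecs = np.linalg.eigh(D)
xhat = eigvecs[:, -1]                  # leading eigenvector (sign ambiguous)
corr = abs(xhat @ x0)                  # alignment with the true signal
```

Note the inherent sign (more generally, phase) ambiguity: $\mathbf x_0$ and $-\mathbf x_0$ produce identical measurements, so only $|\langle\hat{\mathbf x}, \mathbf x_0\rangle|$ is meaningful.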
no code implementations • NeurIPS 2015 • Christos Thrampoulidis, Ehsan Abbasi, Babak Hassibi
In this work, we considerably strengthen these results by obtaining explicit expressions for $\|\hat x-\mu x_0\|_2$, for the regularized Generalized-LASSO, that are asymptotically precise when $m$ and $n$ grow large.
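For reference, the Generalized LASSO in its $\ell_1$-regularized form, $\min_{\mathbf x} \tfrac12\|\mathbf y-\mathbf A\mathbf x\|_2^2+\lambda\|\mathbf x\|_1$, can be solved with plain iterative soft-thresholding (ISTA). This sketch only illustrates the estimator whose error $\|\hat{\mathbf x}-\mathbf x_0\|_2$ the asymptotic theory characterizes; the problem sizes, noise level, $\lambda$, and iteration count are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 60, 40, 5                    # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # ~unit-norm columns
x0 = np.zeros(n)
x0[:k] = 3.0 * rng.standard_normal(k)  # k-sparse ground truth
y = A @ x0 + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):                  # ISTA: gradient step + soft-threshold
    z = x - A.T @ (A @ x - y) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

The asymptotic theory gives the precise value of $\|\hat{\mathbf x}-\mu\mathbf x_0\|_2$ as $m,n\to\infty$ proportionally; in a finite simulation like this one can only observe the empirical error.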