no code implementations • NeurIPS 2021 • Junjie Ma, Ji Xu, Arian Maleki
We define a notion for the spikiness of the spectrum of $\mathbf{A}$ and show the importance of this measure in the performance of the EP.
no code implementations • 5 Aug 2020 • Ramina Ghods, Charles Jeon, Arian Maleki, Christoph Studer
Practical systems often suffer from hardware impairments that already appear during signal generation.
no code implementations • 30 Mar 2020 • Milad Bakhshizadeh, Arian Maleki, Victor H. de la Pena
We obtain concentration and large deviation results for sums of independent and identically distributed random variables with heavy-tailed distributions.
1 code implementation • 3 Mar 2020 • Kamiar Rahnama Rad, Wenda Zhou, Arian Maleki
We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ can be less than one.
no code implementations • 20 Sep 2019 • Shuaiwen Wang, Haolei Weng, Arian Maleki
A recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393).
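For context (this definition is standard for SLOPE and not quoted from the abstract), the sorted-$\ell_1$ penalized estimator is
$$\hat{\boldsymbol{\beta}}_{\text{SLOPE}} = \arg\min_{\boldsymbol{\beta} \in \mathbb{R}^p} \frac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 + \sum_{i=1}^{p} \lambda_i |\beta|_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,$$
where $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ are the coefficient magnitudes sorted in decreasing order.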
no code implementations • 5 Feb 2019 • Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu
This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.
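As a rough illustration of the baseline that the ALO- and AMP-based estimators approximate, here is a minimal brute-force leave-one-out loop; the ridge estimator used as the placeholder model is an illustrative choice, not the paper's setup.

```python
import numpy as np

def loocv_risk(X, y, fit, predict):
    """Brute-force leave-one-out cross-validation: refit with each row held out.

    `fit` and `predict` are placeholders for any regularized estimator; this
    O(n)-refits loop is the baseline that ALO/AMP-based estimators approximate.
    """
    n = X.shape[0]
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta = fit(X[mask], y[mask])
        errs[i] = (y[i] - predict(X[i], beta)) ** 2
    return errs.mean()

# toy usage with ridge regression as the placeholder estimator, in the n/p > 1 regime
lam = 1.0
fit = lambda A, b: np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
predict = lambda x, beta: x @ beta
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 100))
y = X[:, :5] @ np.ones(5) + rng.standard_normal(300)
print(loocv_risk(X, y, fit, predict))
```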
no code implementations • NeurIPS 2018 • Ji Xu, Daniel Hsu, Arian Maleki
Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to find stationary points of the log-likelihood objective.
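As a concrete, purely illustrative instance, the EM iteration for a symmetric two-component Gaussian mixture $\tfrac12 N(\mu,1)+\tfrac12 N(-\mu,1)$ can be written in a few lines; the mixture model and unit variance are assumptions made here for the sketch, not details taken from the abstract.

```python
import numpy as np

def em_symmetric_gmm(x, mu0, n_iter=50):
    """EM for the two-component mixture 0.5*N(mu, 1) + 0.5*N(-mu, 1)."""
    mu = float(mu0)
    for _ in range(n_iter):
        # E-step: posterior probability that each sample came from the +mu component
        w = 1.0 / (1.0 + np.exp(-2.0 * mu * x))
        # M-step: closed-form update; fixed points are stationary points of the log-likelihood
        mu = np.mean((2.0 * w - 1.0) * x)
    return mu

rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=5000)
x = signs * 1.5 + rng.standard_normal(5000)
print(em_symmetric_gmm(x, mu0=0.3))   # converges toward +/- 1.5 from a nonzero start
```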
1 code implementation • 4 Oct 2018 • Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu, Vahab Mirrokni
$\mathcal{C} \subset \mathbb{R}^{p}$ is a closed convex set.
2 code implementations • ICML 2018 • Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni
Consider the following class of learning schemes: $$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}),\qquad\qquad (1) $$ where $\boldsymbol{x}_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j^{\text{th}}$ feature vector and response variable, respectively.
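A minimal sketch of solving one instance of scheme (1), with logistic loss and a ridge penalty standing in for the generic $\ell$ and $R$ (both are placeholders chosen only for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def fit_regularized(X, y, lam):
    """Minimize sum_j ell(x_j' beta; y_j) + lam * R(beta) for one concrete (ell, R)."""
    def objective(beta):
        margins = y * (X @ beta)
        loss = np.sum(np.logaddexp(0.0, -margins))    # logistic loss as the placeholder ell
        return loss + 0.5 * lam * np.sum(beta ** 2)   # ridge penalty as the placeholder R
    return minimize(objective, np.zeros(X.shape[1]), method="L-BFGS-B").x

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X[:, 0] + 0.5 * rng.standard_normal(200))
beta_hat = fit_regularized(X, y, lam=1.0)
```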
no code implementations • ICML 2018 • Junjie Ma, Ji Xu, Arian Maleki
We consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations.
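One common way to write such an objective (the exact loss used in the paper may differ; this form is only an illustration) is
$$\min_{\mathbf{x}} \; \frac{1}{4m}\sum_{j=1}^{m}\left(|\mathbf{a}_j^{*}\mathbf{x}|^{2}-y_j\right)^{2} \;+\; \frac{\lambda}{2}\,\|\mathbf{x}\|_2^{2},$$
where the $y_j$ are the noisy phaseless observations of $|\mathbf{a}_j^{*}\mathbf{x}_0|^{2}$.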
2 code implementations • 30 Jan 2018 • Kamiar Rahnama Rad, Arian Maleki
Motivated by the low bias of the leave-one-out cross validation (LO) method, we propose a computationally efficient closed-form approximate leave-one-out formula (ALO) for a large class of regularized estimators.
Methodology
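To convey the flavor of the ALO shortcut without the generality of the paper's formula, here is the one case where the leave-one-out correction is classical and exact, namely ridge regression (used purely as an illustrative member of the regularized-estimator class):

```python
import numpy as np

def alo_ridge(X, y, lam):
    """Leave-one-out risk for ridge regression without refitting n times.

    Uses the classical identity loo_residual_i = residual_i / (1 - H_ii),
    where H = X (X'X + lam I)^{-1} X' is the hat matrix; exact for ridge.
    """
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(G, X.T @ y)
    H_diag = np.einsum("ij,ij->i", X @ np.linalg.inv(G), X)   # diag of X G^{-1} X'
    loo_resid = (y - X @ beta) / (1.0 - H_diag)
    return np.mean(loo_resid ** 2)
```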
no code implementations • NeurIPS 2016 • Ji Xu, Daniel Hsu, Arian Maleki
Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models.
no code implementations • 3 Nov 2015 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk
For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set $\|\hat{\beta}^\lambda\|_0/p$ behave as a function of $\lambda$?
2 code implementations • 16 Jun 2014 • Christopher A. Metzler, Arian Maleki, Richard G. Baraniuk
A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
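A minimal D-AMP-style iteration, with a soft-threshold denoiser standing in for the image denoisers used in practice and a Monte Carlo estimate of the denoiser's divergence for the Onsager term (both are illustrative choices, not the paper's implementation):

```python
import numpy as np

def damp_sketch(y, A, n_iter=30, eps=1e-3, seed=0):
    """Denoising-based AMP sketch: x_{t+1} = D(x_t + A'z_t) with an Onsager-corrected residual."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    denoise = lambda r, sigma: np.sign(r) * np.maximum(np.abs(r) - sigma, 0.0)
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)      # per-iteration effective noise level
        r = x + A.T @ z
        x_new = denoise(r, sigma)
        # Monte Carlo divergence of the denoiser at r, used in the Onsager correction
        b = rng.standard_normal(n)
        div = b @ (denoise(r + eps * b, sigma) - denoise(r, sigma)) / eps
        # the Onsager term keeps the per-iteration perturbation close to white Gaussian noise
        z = y - A @ x_new + (z / m) * div
        x = x_new
    return x
```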
no code implementations • 31 Oct 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk
In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm.
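To make the role of the threshold policy explicit, here is a plain iterative soft-thresholding loop in which the per-step threshold is passed in as a rule; the two example policies at the end are placeholders for illustration:

```python
import numpy as np

def iterative_soft_thresholding(y, A, threshold_rule, n_iter=100):
    """Iterative soft-thresholding where threshold_rule(t, residual) sets the step-t threshold."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # gradient step from the spectral norm
    x = np.zeros(n)
    for t in range(n_iter):
        residual = y - A @ x
        r = x + step * (A.T @ residual)
        tau = threshold_rule(t, residual)
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
    return x

# example policies: a fixed threshold vs. one tied to the current residual level
fixed = lambda t, res: 0.1
adaptive = lambda t, res: 0.5 * np.linalg.norm(res) / np.sqrt(len(res))
```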
no code implementations • 23 Sep 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk
This paper concerns the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements.
1 code implementation • NeurIPS 2012 • Dominique Guillot, Bala Rajaratnam, Benjamin T. Rolfs, Arian Maleki, Ian Wong
In this paper, a proximal gradient method (G-ISTA) for performing L1-regularized covariance matrix estimation is presented.
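A simplified proximal-gradient sketch for the $\ell_1$-regularized inverse covariance objective $-\log\det(\Theta) + \mathrm{tr}(S\Theta) + \lambda\|\Theta\|_{1,\text{off}}$; unlike G-ISTA proper, this sketch omits the backtracking line search that keeps the iterates positive definite and instead relies on a small fixed step size:

```python
import numpy as np

def prox_grad_inverse_cov(S, lam, step=0.1, n_iter=200):
    """Proximal gradient for -logdet(Theta) + tr(S Theta) + lam * ||Theta||_1 (off-diagonal).

    Gradient of the smooth part is S - inv(Theta); the prox of the penalty is
    entrywise soft-thresholding of the off-diagonal entries. Simplified sketch:
    no line search, so positive definiteness is not guaranteed for large steps.
    """
    p = S.shape[0]
    Theta = np.eye(p)
    for _ in range(n_iter):
        G = Theta - step * (S - np.linalg.inv(Theta))
        off = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
        Theta = np.where(np.eye(p, dtype=bool), G, off)
    return Theta

rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((500, 10)), rowvar=False)
Theta_hat = prox_grad_inverse_cov(S, lam=0.1)
```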
1 code implementation • 8 Apr 2010 • David L. Donoho, Arian Maleki, Andrea Montanari
We develop formal expressions for the MSE of $\hat{x}^{\lambda}$, and evaluate its worst-case formal noise sensitivity over all types of $k$-sparse signals.
Statistics Theory Information Theory
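For context on the notation above, $\hat{x}^{\lambda}$ is the $\ell_1$-penalized least-squares (LASSO) reconstruction, typically defined as
$$\hat{x}^{\lambda} \;=\; \arg\min_{x}\; \frac{1}{2}\,\|y - Ax\|_2^2 \;+\; \lambda\,\|x\|_1 .$$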