Search Results for author: Arian Maleki

Found 19 papers, 7 papers with code

Impact of the Sensing Spectrum on Signal Recovery in Generalized Linear Models

no code implementations NeurIPS 2021 Junjie Ma, Ji Xu, Arian Maleki

We define a notion of spikiness for the spectrum of $\mathbf{A}$ and show the importance of this measure for the performance of expectation propagation (EP).

Retrieval

Optimal Data Detection and Signal Estimation in Systems with Input Noise

no code implementations 5 Aug 2020 Ramina Ghods, Charles Jeon, Arian Maleki, Christoph Studer

Practical systems often suffer from hardware impairments that already appear during signal generation.

Compressive Sensing

Sharp Concentration Results for Heavy-Tailed Distributions

no code implementations 30 Mar 2020 Milad Bakhshizadeh, Arian Maleki, Victor H. de la Peña

We obtain concentration and large deviation results for sums of independent and identically distributed random variables with heavy-tailed distributions.

Error bounds in estimating the out-of-sample prediction error using leave-one-out cross validation in high-dimensions

1 code implementation 3 Mar 2020 Kamiar Rahnama Rad, Wenda Zhou, Arian Maleki

We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ can be less than one.

regression
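
A brute-force leave-one-out sketch can make this setup concrete. Below is a minimal illustration for ridge regression in the $n/p < 1$ regime; the loss, penalty, and all names are illustrative choices, not the paper's code.

```python
# Minimal sketch: brute-force leave-one-out estimate of out-of-sample
# squared error for ridge regression (illustrative; n refits).
import numpy as np

def loo_risk_ridge(X, y, lam):
    n, p = X.shape
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Refit on the data with observation i held out.
        beta = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(p),
                               X[mask].T @ y[mask])
        errs[i] = (y[i] - X[i] @ beta) ** 2
    return errs.mean()

rng = np.random.default_rng(0)
n, p = 50, 100                     # n/p < 1, the regime discussed above
X = rng.standard_normal((n, p)) / np.sqrt(n)
y = X @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)
print(loo_risk_ridge(X, y, lam=1.0))
```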

Does SLOPE outperform bridge regression?

no code implementations 20 Sep 2019 Shuaiwen Wang, Haolei Weng, Arian Maleki

The recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393).

regression
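
For context, SLOPE penalizes the sorted coefficient magnitudes with a non-increasing weight sequence. A standard formulation of the estimator is $$\hat{\boldsymbol{\beta}}_{\text{SLOPE}} := \arg\min_{\boldsymbol{b} \in \mathbb{R}^p}\; \frac{1}{2}\|\boldsymbol{y} - \boldsymbol{X}\boldsymbol{b}\|_2^2 + \sum_{i=1}^{p} \lambda_i |b|_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,$$ where $|b|_{(1)} \ge \cdots \ge |b|_{(p)}$ are the entries of $\boldsymbol{b}$ sorted by magnitude; with all $\lambda_i$ equal, SLOPE reduces to the LASSO, the $q=1$ member of the bridge family $R(\boldsymbol{b}) = \|\boldsymbol{b}\|_q^q$.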

Consistent Risk Estimation in Moderately High-Dimensional Linear Regression

no code implementations 5 Feb 2019 Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu

This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.

regression

Benefits of over-parameterization with EM

no code implementations NeurIPS 2018 Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to find stationary points of the log-likelihood objective.

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

2 code implementations ICML 2018 Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

Consider the following class of learning schemes: $$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}),\qquad\qquad (1) $$ where $\boldsymbol{x}_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j^{\text{th}}$ feature vector and response variable, respectively.
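
A concrete instance of scheme (1) may help: the sketch below takes $\ell$ to be the logistic loss and $R$ the ridge penalty, two illustrative choices rather than the paper's specific setup.

```python
# Minimal sketch of scheme (1): logistic loss l(u; y) = log(1 + exp(-y*u))
# plus a ridge penalty R(beta) = ||beta||_2^2 / 2 (illustrative choices).
import numpy as np
from scipy.optimize import minimize

def fit_regularized(X, y, lam):
    def objective(beta):
        margins = y * (X @ beta)
        return np.logaddexp(0.0, -margins).sum() + 0.5 * lam * beta @ beta
    return minimize(objective, np.zeros(X.shape[1]), method="L-BFGS-B").x

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
y = np.sign(X @ rng.standard_normal(p) + rng.standard_normal(n))
beta_hat = fit_regularized(X, y, lam=1.0)
```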

Approximate message passing for amplitude based optimization

no code implementations ICML 2018 Junjie Ma, Ji Xu, Arian Maleki

We consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations.

A scalable estimate of the extra-sample prediction error via approximate leave-one-out

2 code implementations 30 Jan 2018 Kamiar Rahnama Rad, Arian Maleki

Motivated by the low bias of the leave-one-out cross validation (LO) method, we propose a computationally efficient closed-form approximate leave-one-out formula (ALO) for a large class of regularized estimators.

Methodology
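
The ridge special case conveys the flavor of such closed-form shortcuts: the leave-one-out residual equals the full-data residual inflated by $1/(1-H_{ii})$, so one fit replaces $n$ refits. The sketch below is only this classical ridge identity, not the paper's general ALO formula, which covers a much broader class of losses and regularizers.

```python
# Minimal sketch: exact leave-one-out residuals for ridge regression via the
# hat matrix H = X (X^T X + lam I)^{-1} X^T, using e_i / (1 - H_ii).
import numpy as np

def alo_risk_ridge(X, y, lam):
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    loo_resid = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(loo_resid ** 2)
```

For ridge this reproduces the brute-force leave-one-out estimate sketched earlier at a fraction of the cost.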

Global analysis of Expectation Maximization for mixtures of two Gaussians

no code implementations NeurIPS 2016 Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models.
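
In the balanced two-component case with known unit variances, EM collapses to a one-dimensional fixed-point iteration on the mean parameter; the sketch below illustrates that toy setting and is not the paper's code.

```python
# Minimal sketch: EM for the balanced mixture 0.5*N(mu, 1) + 0.5*N(-mu, 1).
# The E-step responsibilities fold into the classical scalar update
# mu <- mean(tanh(mu * x) * x).
import numpy as np

def em_two_gaussians(x, mu0=1.0, iters=100):
    mu = mu0
    for _ in range(iters):
        mu = np.mean(np.tanh(mu * x) * x)
    return mu

rng = np.random.default_rng(2)
signs = rng.choice([-1.0, 1.0], size=5000)
x = signs * 2.0 + rng.standard_normal(5000)   # true mu = 2
print(em_two_gaussians(x))                     # converges near 2
```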

Consistent Parameter Estimation for LASSO and Approximate Message Passing

no code implementations 3 Nov 2015 Ali Mousavi, Arian Maleki, Richard G. Baraniuk

For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set $\|\hat{\beta}^\lambda\|_0/p$ behave as a function of $\lambda$?
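
Question (i) is easy to probe empirically; below is a minimal sketch using scikit-learn's lasso_path, with illustrative data-generating choices.

```python
# Minimal sketch: trace the active-set fraction ||beta_hat(lambda)||_0 / p
# along the LASSO regularization path.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(3)
n, p, k = 100, 200, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
active_fraction = (np.abs(coefs) > 1e-10).sum(axis=0) / p
for lam, frac in zip(alphas[::10], active_fraction[::10]):
    print(f"lambda={lam:.4f}  active fraction={frac:.3f}")
```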

From Denoising to Compressed Sensing

2 code implementations 16 Jun 2014 Christopher A. Metzler, Arian Maleki, Richard G. Baraniuk

A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.

Denoising
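
A D-AMP-style iteration is short enough to sketch; here soft thresholding stands in for the image denoisers (e.g., BM3D) that the paper actually plugs in, and the divergence in the Onsager term is estimated by the usual Monte Carlo trick.

```python
# Minimal sketch of a D-AMP-style iteration with a plug-in denoiser and a
# Monte Carlo estimate of the Onsager correction (illustrative, not the
# authors' implementation). A is m x n, y = A x + noise.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def damp(y, A, iters=30, seed=4):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                          # pseudo-data for the denoiser
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        x_new = soft(r, sigma)                   # denoising step
        # Monte Carlo divergence of the denoiser at r.
        eps = 1e-3 * np.linalg.norm(r) / np.sqrt(n) + 1e-12
        eta = rng.standard_normal(n)
        div = eta @ (soft(r + eps * eta, sigma) - soft(r, sigma)) / eps
        # Onsager-corrected residual keeps the perturbation near-Gaussian.
        z = y - A @ x_new + (div / m) * z
        x = x_new
    return x
```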

Parameterless Optimal Approximate Message Passing

no code implementations 31 Oct 2013 Ali Mousavi, Arian Maleki, Richard G. Baraniuk

In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm.

Compressive Sensing
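
One standard route to a tuning-free threshold is to minimize Stein's unbiased risk estimate (SURE) of the denoiser's MSE at each step; the grid search below is an illustrative stand-in for the paper's tuning procedure, not its algorithm.

```python
# Minimal sketch: pick the soft-threshold level by minimizing SURE
# (Donoho-Johnstone) for r = x + sigma * standard normal noise.
import numpy as np

def sure_soft(r, sigma, tau):
    return (r.size * sigma**2
            + np.sum(np.minimum(np.abs(r), tau) ** 2)
            - 2 * sigma**2 * np.sum(np.abs(r) <= tau))

def best_threshold(r, sigma, n_grid=60):
    grid = np.linspace(0.0, 3.0 * sigma, n_grid)
    return grid[int(np.argmin([sure_soft(r, sigma, t) for t in grid]))]
```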

Asymptotic Analysis of LASSO's Solution Path with Implications for Approximate Message Passing

no code implementations 23 Sep 2013 Ali Mousavi, Arian Maleki, Richard G. Baraniuk

This paper concerns the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements.

Denoising

Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

1 code implementation NeurIPS 2012 Dominique Guillot, Bala Rajaratnam, Benjamin T. Rolfs, Arian Maleki, Ian Wong

In this paper, a proximal gradient method (G-ISTA) for performing $\ell_1$-regularized inverse covariance matrix estimation is presented.
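
An ISTA-style step for this problem alternates a gradient step on $\mathrm{tr}(S\Theta) - \log\det\Theta$ (whose gradient is $S - \Theta^{-1}$) with elementwise soft thresholding; the naive step-halving below, which keeps iterates positive definite, is a stand-in for G-ISTA's actual step-size rule.

```python
# Minimal sketch of a proximal-gradient step for L1-penalized inverse
# covariance estimation: min tr(S Theta) - logdet(Theta) + rho ||Theta||_1.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def g_ista_sketch(S, rho, step=1.0, iters=200):
    Theta = np.eye(S.shape[0])
    for _ in range(iters):
        grad = S - np.linalg.inv(Theta)          # gradient of the smooth part
        t = step
        while True:
            cand = soft(Theta - t * grad, t * rho)
            if np.all(np.linalg.eigvalsh(cand) > 0):
                break                             # accept only PD iterates
            t *= 0.5
        Theta = cand
    return Theta
```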

The Noise-Sensitivity Phase Transition in Compressed Sensing

1 code implementation 8 Apr 2010 David L. Donoho, Arian Maleki, Andrea Montanari

We develop formal expressions for the MSE of $\hat{x}_\lambda$, and evaluate its worst-case formal noise sensitivity over all types of $k$-sparse signals.

Statistics Theory Information Theory
