Search Results for author: Arian Maleki

Found 22 papers, 8 papers with code

Bagged Deep Image Prior for Recovering Images in the Presence of Speckle Noise

1 code implementation • 23 Feb 2024 • Xi Chen, Zhewen Hou, Christopher A. Metzler, Arian Maleki, Shirin Jalali

We investigate both the theoretical and algorithmic aspects of likelihood-based methods for recovering a complex-valued signal from multiple sets of measurements, referred to as looks, affected by speckle (multiplicative) noise.
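
A rough illustration of the measurement model: the sketch below simulates multiple looks of a complex-valued signal under fully developed speckle, where each look multiplies the true intensity by unit-mean exponential (multiplicative) noise. The dimensions, the Gaussian sensing matrix, and the exponential speckle model are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, L = 64, 128, 25          # signal size, measurements, number of looks

# Complex-valued signal of interest and a Gaussian sensing matrix (assumed).
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)

intensity = np.abs(A @ x) ** 2

# Fully developed speckle: each look multiplies the intensity by an
# i.i.d. unit-mean exponential variable (multiplicative noise).
looks = intensity[None, :] * rng.exponential(1.0, size=(L, m))

# Averaging looks reduces the speckle variance by roughly 1/L.
print(np.mean((looks.mean(axis=0) - intensity) ** 2))
```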

Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings

no code implementations • 13 Feb 2024 • Haolin Zou, Arnab Auddy, Kamiar Rahnama Rad, Arian Maleki

Despite a large and significant body of recent work focused on estimating the out-of-sample risk of regularized models in the high dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as generalized LASSO and nuclear norm is missing.

Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers (extended version)

no code implementations • 26 Oct 2023 • Arnab Auddy, Haolin Zou, Kamiar Rahnama Rad, Arian Maleki

Recent theoretical work showed that approximate leave-one-out cross validation (ALO) is a computationally efficient and statistically reliable estimate of LO (and OO) for generalized linear models with differentiable regularizers.

Model Selection, Regression

Towards Designing Optimal Sensing Matrices for Generalized Linear Inverse Problems

no code implementations • NeurIPS 2021 • Junjie Ma, Ji Xu, Arian Maleki

We consider an inverse problem $\mathbf{y}= f(\mathbf{Ax})$, where $\mathbf{x}\in\mathbb{R}^n$ is the signal of interest, $\mathbf{A}$ is the sensing matrix, $f$ is a nonlinear function and $\mathbf{y} \in \mathbb{R}^m$ is the measurement vector.

Retrieval
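
A minimal sketch of this measurement model, with the magnitude nonlinearity $f(u) = |u|$ from phase retrieval chosen as one illustrative instance. The dimensions and the i.i.d. Gaussian choice of $\mathbf{A}$ are assumptions for the example; designing better sensing matrices than this is precisely what the paper studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 300
x = rng.standard_normal(n)                    # signal of interest
A = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix (Gaussian here)

# Generalized linear measurements y = f(Ax); here f is the
# magnitude nonlinearity of (noiseless) phase retrieval.
y = np.abs(A @ x)
```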

Analysis of Sensing Spectral for Signal Recovery under a Generalized Linear Model

no code implementations • NeurIPS 2021 • Junjie Ma, Ji Xu, Arian Maleki

We define a notion of spikiness for the spectrum of $\mathbf{A}$ and show the importance of this measure for the performance of expectation propagation (EP).

Retrieval
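
The paper's precise spikiness measure is not reproduced in the snippet above; as a loose stand-in, the sketch below compares a near-flat Gaussian spectrum with an artificially spread one using the ratio of the largest to the average squared singular value. That proxy is a hypothetical choice for illustration and may differ from the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 200, 100

def spikiness(A):
    # Stand-in measure (hypothetical): largest squared singular
    # value relative to the average squared singular value.
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    return s2.max() / s2.mean()

A_flat = rng.standard_normal((m, n)) / np.sqrt(m)   # near-flat spectrum
U, _, Vt = np.linalg.svd(rng.standard_normal((m, n)), full_matrices=False)
A_spiky = U @ np.diag(np.geomspace(10.0, 0.01, n)) @ Vt  # widely spread spectrum

print(spikiness(A_flat), spikiness(A_spiky))        # spiky >> flat
```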

Optimal Data Detection and Signal Estimation in Systems with Input Noise

no code implementations • 5 Aug 2020 • Ramina Ghods, Charles Jeon, Arian Maleki, Christoph Studer

Practical systems often suffer from hardware impairments that already appear during signal generation.

Compressive Sensing
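
A toy version of the distinction the abstract draws: impairments during signal generation enter before the sensing matrix, on top of the usual measurement noise added afterwards. The BPSK symbols, dimensions, and noise levels below are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.choice([-1.0, 1.0], size=n)          # transmitted symbols (BPSK)

# Hardware impairments during signal generation act as *input* noise,
# applied before A; ordinary measurement noise is added after A.
input_noise = 0.1 * rng.standard_normal(n)
output_noise = 0.1 * rng.standard_normal(m)
y = A @ (x + input_noise) + output_noise
```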

Sharp Concentration Results for Heavy-Tailed Distributions

no code implementations • 30 Mar 2020 • Milad Bakhshizadeh, Arian Maleki, Victor H. de la Peña

We obtain concentration and large deviation results for sums of independent and identically distributed random variables with heavy-tailed distributions.
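
A quick Monte Carlo illustration of why such results are delicate: for heavy-tailed summands the normalized deviation probability can remain far from the Gaussian tail even at moderate $n$. The Pareto tail index and deviation level below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, t, alpha = 1000, 5000, 2.0, 3.0

# Sums of i.i.d. Pareto(alpha=3) variables: heavy-tailed, finite variance.
X = rng.pareto(alpha, size=(trials, n)) + 1.0          # support [1, inf)
mu = alpha / (alpha - 1.0)                             # mean = 1.5
sigma = np.sqrt(alpha / ((alpha - 1) ** 2 * (alpha - 2)))

# Empirical P( (S_n/n - mu) * sqrt(n)/sigma >= t ).
dev = (X.mean(axis=1) - mu) * np.sqrt(n) / sigma
print((dev >= t).mean())   # vs. the Gaussian tail P(Z >= 2) ~ 0.0228
```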

Error bounds in estimating the out-of-sample prediction error using leave-one-out cross validation in high-dimensions

1 code implementation • 3 Mar 2020 • Kamiar Rahnama Rad, Wenda Zhou, Arian Maleki

We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ can be less than one.

Regression
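
Brute-force leave-one-out is conceptually simple but requires $n$ refits, which is what motivates the approximate version. A minimal sketch in the $n/p < 1$ regime, with an arbitrary LASSO penalty level and synthetic data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 100, 200                      # n/p < 1: more features than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:10] = 1.0
y = X @ beta + rng.standard_normal(n)

# Brute-force leave-one-out cross validation: n separate refits.
errs = []
for i in range(n):
    mask = np.arange(n) != i
    model = Lasso(alpha=0.1).fit(X[mask], y[mask])
    errs.append((y[i] - model.predict(X[i:i+1])[0]) ** 2)
print(np.mean(errs))                 # LO estimate of out-of-sample risk
```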

Does SLOPE outperform bridge regression?

no code implementations • 20 Sep 2019 • Shuaiwen Wang, Haolei Weng, Arian Maleki

The recently proposed SLOPE estimator (arXiv:1407.3824) has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models (arXiv:1503.08393).

Regression
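
For contrast with bridge regression's $\lambda \sum_i |\beta_i|^q$ penalty, SLOPE pairs the $i$-th largest $|\beta_j|$ with the $i$-th largest weight. A minimal evaluation of the sorted-$\ell_1$ penalty; the weight sequence below is an arbitrary choice for illustration:

```python
import numpy as np

def slope_penalty(beta, lam):
    # lam must be non-increasing; the i-th largest |beta| is matched
    # with the i-th largest weight. This adaptivity is what lets SLOPE
    # behave differently from a single-lambda penalty like the LASSO.
    abs_sorted = np.sort(np.abs(beta))[::-1]
    return float(np.dot(lam, abs_sorted))

beta = np.array([0.5, -2.0, 0.0, 1.0])
lam = np.array([1.0, 0.75, 0.5, 0.25])     # non-increasing weights
print(slope_penalty(beta, lam))            # 1*2 + 0.75*1 + 0.5*0.5 + 0.25*0 = 3.0
```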

Consistent Risk Estimation in Moderately High-Dimensional Linear Regression

no code implementations • 5 Feb 2019 • Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu

This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.

Regression

Benefits of over-parameterization with EM

no code implementations • NeurIPS 2018 • Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for maximum likelihood estimation, but it is generally only guaranteed to converge to stationary points of the log-likelihood objective.
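
A minimal EM iteration for the kind of model these EM papers analyze: a balanced mixture of two unit-variance Gaussians with unknown means. The data, initialization, and iteration count are illustrative; see the papers for the exact (over-parameterized) settings they study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

# EM for a balanced mixture of two unit-variance Gaussians:
# only the two means are unknown.
mu = np.array([-0.5, 0.5])                 # initialization
for _ in range(100):
    # E-step: posterior responsibility of component 0 for each point.
    w = norm.pdf(x, mu[0], 1)
    r = w / (w + norm.pdf(x, mu[1], 1))
    # M-step: responsibility-weighted means.
    mu = np.array([np.sum(r * x) / np.sum(r),
                   np.sum((1 - r) * x) / np.sum(1 - r)])
print(mu)   # converges near (-2, 2) from this initialization
```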

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions

2 code implementations • ICML 2018 • Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

Consider the following class of learning schemes: $$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}),\qquad\qquad (1) $$ where $\boldsymbol{x}_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j^{\text{th}}$ feature vector and response variable, respectively.
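
For the special case of squared loss and a ridge penalty, scheme (1) has a closed form, which makes a convenient sanity check; the dimensions and $\lambda$ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, lam = 50, 20, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

# Scheme (1) with squared loss l(u; y) = (u - y)^2 / 2 and ridge
# penalty R(beta) = ||beta||^2 / 2 reduces to the familiar closed form:
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```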

Approximate message passing for amplitude based optimization

no code implementations • ICML 2018 • Junjie Ma, Ji Xu, Arian Maleki

We consider an $\ell_2$-regularized non-convex optimization problem for recovering signals from their noisy phaseless observations.
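
A plausible instance of this objective class (not necessarily the paper's exact formulation): an $\ell_2$-regularized amplitude-based least-squares loss, nonconvex in $x$ because of the modulus. The dimensions and $\lambda$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, lam = 300, 100, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)                     # noiseless phaseless observations

def amplitude_objective(x):
    # Nonconvex because of |A @ x|; AMP is used to analyze and solve
    # optimizations of this flavor in the high-dimensional limit.
    return 0.5 * np.sum((y - np.abs(A @ x)) ** 2) + 0.5 * lam * np.sum(x ** 2)
```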

A scalable estimate of the extra-sample prediction error via approximate leave-one-out

2 code implementations • 30 Jan 2018 • Kamiar Rahnama Rad, Arian Maleki

Motivated by the low bias of the leave-one-out cross validation (LO) method, we propose a computationally efficient closed-form approximate leave-one-out formula (ALO) for a large class of regularized estimators.

Methodology
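
The flavor of the shortcut is easiest to see for ridge regression, where the leave-one-out residual has an exact hat-matrix form with no refitting at all; ALO generalizes this style of shortcut to a much larger class of regularized estimators. The identity below is the classical ridge one, not the paper's general formula.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p, lam = 80, 30, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# Exact leave-one-out for ridge: residuals rescaled by the hat-matrix
# diagonal replace n separate refits.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
resid = y - H @ y
loo_mse = np.mean((resid / (1 - np.diag(H))) ** 2)
print(loo_mse)
```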

Global analysis of Expectation Maximization for mixtures of two Gaussians

no code implementations • NeurIPS 2016 • Ji Xu, Daniel Hsu, Arian Maleki

Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models.

Consistent Parameter Estimation for LASSO and Approximate Message Passing

no code implementations • 3 Nov 2015 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

For instance, the following basic questions have not yet been studied in the literature: (i) How does the size of the active set $\|\hat{\beta}^\lambda\|_0/p$ behave as a function of $\lambda$?
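
Question (i) is easy to probe numerically even though its asymptotic theory is the hard part. A sketch tracing the active-set fraction along a LASSO regularization path; dimensions and sparsity are arbitrary:

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(10)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:15] = 1.0
y = X @ beta + rng.standard_normal(n)

# Trace |supp(beta_hat^lambda)| / p along the LASSO path.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
active_frac = (np.abs(coefs) > 1e-10).sum(axis=0) / p
for a, f in list(zip(alphas, active_frac))[::10]:
    print(f"lambda={a:.3f}  active fraction={f:.3f}")
```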

From Denoising to Compressed Sensing

2 code implementations • 16 Jun 2014 • Christopher A. Metzler, Arian Maleki, Richard G. Baraniuk

A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.

Denoising
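
A minimal AMP loop with a soft-thresholding denoiser, showing where the Onsager correction enters; D-AMP replaces the soft threshold with a generic image denoiser. The per-iteration threshold rule below, proportional to an estimated noise level, is one simple choice; tuning it automatically is the subject of "Parameterless Optimal Approximate Message Passing" below. Problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n, m, k = 500, 250, 25
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.standard_normal(m)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# AMP with soft thresholding. The Onsager correction is the last term
# added to z; dropping it breaks the near-Gaussian behavior of the
# effective observation x + A'z that denoisers rely on.
x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)   # threshold from estimated
    x_new = soft(x + A.T @ z, tau)               # noise level (one choice)
    onsager = z * (np.count_nonzero(x_new) / m)  # (1/delta)*mean(eta')*z
    z = y - A @ x_new + onsager
    x = x_new
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))
```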

Parameterless Optimal Approximate Message Passing

no code implementations • 31 Oct 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

In particular, both the final reconstruction error and the convergence rate of the algorithm crucially rely on how the threshold parameter is set at each step of the algorithm.

Compressive Sensing

Asymptotic Analysis of LASSO's Solution Path with Implications for Approximate Message Passing

no code implementations • 23 Sep 2013 • Ali Mousavi, Arian Maleki, Richard G. Baraniuk

This paper concerns the performance of the LASSO (also known as basis pursuit denoising) for recovering sparse signals from undersampled, randomized, noisy measurements.

Denoising

Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation

1 code implementation • NeurIPS 2012 • Dominique Guillot, Bala Rajaratnam, Benjamin T. Rolfs, Arian Maleki, Ian Wong

In this paper, a proximal gradient method (G-ISTA) for performing L1-regularized covariance matrix estimation is presented.
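
A bare-bones proximal-gradient iteration for the $\ell_1$-penalized Gaussian log-likelihood, the structure G-ISTA builds on: the smooth part $\mathrm{tr}(S\Theta) - \log\det\Theta$ has gradient $S - \Theta^{-1}$, and the prox of the $\ell_1$ term is entrywise soft-thresholding. The fixed small step size and penalizing the diagonal are simplifications of the paper's method, which uses a careful step-size scheme.

```python
import numpy as np

rng = np.random.default_rng(12)
p, n, lam, step = 10, 200, 0.1, 0.1
X = rng.standard_normal((n, p))
S = X.T @ X / n                              # sample covariance

soft = lambda M, t: np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

# Proximal gradient (ISTA-style) for L1-penalized inverse covariance:
# gradient step on tr(S @ Theta) - logdet(Theta), then soft-threshold.
Theta = np.eye(p)
for _ in range(200):
    grad = S - np.linalg.inv(Theta)
    Theta = soft(Theta - step * grad, step * lam)
    Theta = (Theta + Theta.T) / 2            # keep the iterate symmetric
```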

The Noise-Sensitivity Phase Transition in Compressed Sensing

1 code implementation • 8 Apr 2010 • David L. Donoho, Arian Maleki, Andrea Montanari

We develop formal expressions for the MSE of the LASSO estimate $\hat{x}^\lambda$, and evaluate its worst-case formal noise sensitivity over all types of $k$-sparse signals.

Statistics Theory, Information Theory
