Search Results for author: Kamiar Rahnama Rad

Found 6 papers, 2 papers with code

A scalable estimate of the extra-sample prediction error via approximate leave-one-out

2 code implementations • 30 Jan 2018 • Kamiar Rahnama Rad, Arian Maleki

Motivated by the low bias of the leave-one-out cross validation (LO) method, we propose a computationally efficient closed-form approximate leave-one-out formula (ALO) for a large class of regularized estimators.

Methodology
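
A minimal sketch (not the paper's released code) of the closed-form leave-one-out idea behind ALO: for ridge regression the correction is exact, so the n refits of LO collapse into a single fit plus a hat-matrix adjustment. All data below are synthetic.

```python
# Closed-form leave-one-out for ridge regression (a case where the
# ALO-style formula is exact). One fit replaces n refits.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 1.0
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Full-data ridge fit and hat matrix H = X (X'X + lam I)^{-1} X'.
G = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
H = X @ G
y_hat = H @ y

# Leave-one-out residuals without refitting:
# (y_i - y_hat_i) / (1 - H_ii), exact for ridge.
loo_resid = (y - y_hat) / (1.0 - np.diag(H))
print("LOO/ALO risk estimate:", np.mean(loo_resid ** 2))
```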

Robust and scalable Bayesian analysis of spatial neural tuning function data

no code implementations • 24 Jun 2016 • Kamiar Rahnama Rad, Timothy A. Machado, Liam Paninski

On the other hand, sharing information between adjacent neurons can erroneously degrade estimates of tuning functions across space if there are sharp discontinuities in tuning between nearby neurons.

Consistent Risk Estimation in Moderately High-Dimensional Linear Regression

no code implementations • 5 Feb 2019 • Ji Xu, Arian Maleki, Kamiar Rahnama Rad, Daniel Hsu

This paper studies the problem of risk estimation under the moderately high-dimensional asymptotic setting $n, p \rightarrow \infty$ and $n/p \rightarrow \delta>1$ ($\delta$ is a fixed number), and proves the consistency of three risk estimates that have been successful in numerical studies, i.e., leave-one-out cross validation (LOOCV), approximate leave-one-out (ALO), and approximate message passing (AMP)-based techniques.

regression
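
A quick synthetic check of the regime this paper studies (here $\delta = n/p = 2$), comparing brute-force LOOCV against the one-fit shortcut above; ridge stands in for the regularized estimator, and for ridge the two estimates agree exactly.

```python
# Brute-force LOOCV vs. the closed-form shortcut at n/p = 2.
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 400, 200, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

def ridge_predict(Xtr, ytr, Xte, lam):
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
    return Xte @ w

# Brute-force LOOCV: n separate refits.
loocv = np.mean([
    (y[i] - ridge_predict(np.delete(X, i, 0), np.delete(y, i), X[i:i+1], lam)[0]) ** 2
    for i in range(n)
])

# Closed-form shortcut: a single fit.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
alo = np.mean(((y - H @ y) / (1 - np.diag(H))) ** 2)
print(loocv, alo)  # identical for ridge
```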

Error bounds in estimating the out-of-sample prediction error using leave-one-out cross validation in high-dimensions

1 code implementation • 3 Mar 2020 • Kamiar Rahnama Rad, Wenda Zhou, Arian Maleki

We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ can be less than one.

regression
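
The same closed-form check still runs in the regime this paper allows, where $n/p < 1$ (here $n = 100$, $p = 200$): ridge regularization keeps $X^\top X + \lambda I$ invertible even with more features than samples. Again a synthetic sketch, not code from the paper.

```python
# LOO risk estimate with more features than samples (n < p).
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 100, 200, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
loo_risk = np.mean(((y - H @ y) / (1 - np.diag(H))) ** 2)
print("LOO out-of-sample risk estimate (n < p):", loo_risk)
```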

Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers (extended version)

no code implementations • 26 Oct 2023 • Arnab Auddy, Haolin Zou, Kamiar Rahnama Rad, Arian Maleki

Recent theoretical work showed that approximate leave-one-out cross validation (ALO) is a computationally efficient and statistically reliable estimate of LO (and of the out-of-sample error, OO) for generalized linear models with differentiable regularizers.

Model Selection, regression
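
A sketch of the active-set ALO recipe for the LASSO with squared loss, in the spirit of this line of work: treat the fitted active set $A$ as locally stable, form the hat matrix of $X$ restricted to $A$, and correct the full-data residuals as in ridge. Synthetic data; scikit-learn's Lasso is just one convenient solver.

```python
# ALO for the LASSO via the active-set hat matrix.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 300, 150
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 2.0  # sparse ground truth
y = X @ beta + rng.standard_normal(n)

fit = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
y_hat = fit.predict(X)

# Active set of the full-data fit.
A = np.flatnonzero(fit.coef_)
XA = X[:, A]

# Hat matrix restricted to the active coordinates:
# H = X_A (X_A' X_A)^{-1} X_A'.
H = XA @ np.linalg.solve(XA.T @ XA, XA.T)
h = np.diag(H)

# ALO leave-one-out residuals and risk estimate.
alo_resid = (y - y_hat) / (1.0 - h)
print("ALO risk estimate:", np.mean(alo_resid ** 2))
```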

Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings

no code implementations • 13 Feb 2024 • Haolin Zou, Arnab Auddy, Kamiar Rahnama Rad, Arian Maleki

Despite a large and significant body of recent work focused on estimating the out-of-sample risk of regularized models in the high dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as generalized LASSO and nuclear norm is missing.
