1 code implementation • 11 Apr 2024 • Gabriel Arpino, Xiaoqi Liu, Ramji Venkataramanan
We consider the problem of localizing change points in high-dimensional linear regression.
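A minimal brute-force sketch of the problem setup (not the paper's algorithm): scan candidate split points and keep the one minimizing the total residual sum of squares of two separate least-squares fits. All names and the `min_seg` parameter are illustrative.

```python
import numpy as np

def single_changepoint_scan(X, y, min_seg=10):
    """Brute-force scan for one change point in y = X @ beta + noise,
    where beta switches at an unknown index. min_seg should exceed the
    number of features so each side admits a least-squares fit."""
    n = len(y)
    best_tau, best_rss = None, np.inf
    for tau in range(min_seg, n - min_seg):
        rss = 0.0
        for Xs, ys in ((X[:tau], y[:tau]), (X[tau:], y[tau:])):
            beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
            rss += np.sum((ys - Xs @ beta) ** 2)
        if rss < best_rss:
            best_tau, best_rss = tau, rss
    return best_tau
```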
no code implementations • 28 Aug 2023 • Yihan Zhang, Hong Chang Ji, Ramji Venkataramanan, Marco Mondelli
Our main result is a precise asymptotic characterization of the performance of spectral estimators.
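As a hedged illustration of the kind of spectral estimator being analyzed: take the top eigenvector of a preprocessed sample covariance of the measurements. The preprocessing below ($t \mapsto t^2 - 1$, a common choice for phase retrieval) is illustrative, not necessarily the one studied in the paper.

```python
import numpy as np

def spectral_estimate(A, y, preprocess=lambda t: t**2 - 1):
    """Spectral estimator for a GLM y_i ~ q(<a_i, x>): the top eigenvector
    of D = (1/n) * sum_i T(y_i) a_i a_i^T, for a preprocessing T."""
    n, d = A.shape
    T = preprocess(y)                      # (n,) preprocessed observations
    D = (A * T[:, None]).T @ A / n         # d x d weighted covariance
    eigvals, eigvecs = np.linalg.eigh(D)   # ascending eigenvalues
    return eigvecs[:, -1]                  # eigenvector of the largest one
```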
no code implementations • 5 Apr 2023 • Nelvin Tan, Ramji Venkataramanan
For max-affine regression, we propose an algorithm that combines AMP with expectation-maximization to estimate the model's intercepts along with the signals.
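The paper's AMP-EM procedure is involved; below is a simple alternating-minimization stand-in for the same model, $y_i \approx \max_j (\langle a_i, \theta_j \rangle + b_j)$, with hard assignments in place of the EM posteriors. All names are illustrative.

```python
import numpy as np

def max_affine_alt_min(A, y, k, n_iters=50, seed=0):
    """Fit y_i ~ max_j (<a_i, theta_j> + b_j) by alternating minimization:
    assign each sample to its current argmax affine piece, then refit each
    piece by least squares. A stand-in for the paper's AMP-EM procedure."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    Theta = rng.standard_normal((k, d))
    b = np.zeros(k)
    for _ in range(n_iters):
        # E-like step: hard-assign each sample to the maximizing piece
        labels = np.argmax(A @ Theta.T + b, axis=1)
        # M-like step: least-squares refit of each affine piece
        for j in range(k):
            idx = labels == j
            if idx.sum() > d:
                Aj = np.hstack([A[idx], np.ones((idx.sum(), 1))])
                sol, *_ = np.linalg.lstsq(Aj, y[idx], rcond=None)
                Theta[j], b[j] = sol[:-1], sol[-1]
    return Theta, b
```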
no code implementations • 3 Mar 2023 • Gabriel Arpino, Ramji Venkataramanan
Via a simple reduction, this provides novel rigorous evidence for a computational barrier to exact support recovery in sparse phase retrieval with sample complexity $n = \tilde{o}(k^2)$.
no code implementations • 21 Nov 2022 • Yihan Zhang, Marco Mondelli, Ramji Venkataramanan
In a mixed generalized linear model, the objective is to learn multiple signals from unlabeled observations: each sample comes from exactly one signal, but it is not known which one.
no code implementations • 8 Dec 2021 • Ramji Venkataramanan, Kevin Kögler, Marco Mondelli
We consider the problem of signal estimation in generalized linear models defined via rotationally invariant design matrices.
no code implementations • NeurIPS 2021 • Marco Mondelli, Ramji Venkataramanan
However, the existing analysis of AMP requires an initialization that is both correlated with the signal and independent of the noise, which is often unrealistic in practice.
no code implementations • 7 Oct 2020 • Marco Mondelli, Ramji Venkataramanan
We consider the problem of estimating a signal from measurements obtained via a generalized linear model.
no code implementations • 7 Aug 2020 • Marco Mondelli, Christos Thrampoulidis, Ramji Venkataramanan
This allows us to compute the Bayes-optimal combination of $\hat{\boldsymbol x}^{\rm L}$ and $\hat{\boldsymbol x}^{\rm s}$, given the limiting distribution of the signal $\boldsymbol x$.
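A sketch of the combination step: the best linear combination $a\hat{\boldsymbol x}^{\rm L} + b\hat{\boldsymbol x}^{\rm s}$ in mean-squared error solves a 2x2 system of normal equations. The sketch below forms the inner products from the true signal for illustration; in the paper's asymptotic setting these quantities are replaced by their known limits.

```python
import numpy as np

def optimal_linear_combination(x_L, x_s, x):
    """Best (in empirical MSE) linear combination a*x_L + b*x_s of two
    estimators of x: solve the normal equations of regressing x on
    (x_L, x_s)."""
    G = np.array([[x_L @ x_L, x_L @ x_s],
                  [x_s @ x_L, x_s @ x_s]])
    c = np.array([x_L @ x, x_s @ x])
    a, b = np.linalg.solve(G, c)
    return a * x_L + b * x_s
```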
1 code implementation • 6 Nov 2017 • Andrea Montanari, Ramji Venkataramanan
In this paper we present a practical algorithm that can achieve Bayes-optimal accuracy above the spectral threshold.
no code implementations • 28 Jul 2017 • Pavan Srinath, Ramji Venkataramanan
An empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior, is analyzed and compared with the well-known soft-thresholding estimator.
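For concreteness, a sketch of the two shrinkage rules being compared, assuming the scalar observation model $y = \theta + \mathcal{N}(0, \sigma^2)$. The Bernoulli-Gaussian posterior mean below is an illustrative version of such a rule, not necessarily the exact estimator analyzed in the paper.

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft-thresholding estimator: shrink each entry towards zero by lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def bg_posterior_mean(y, eps, tau2, sigma2=1.0):
    """Posterior mean of theta under a Bernoulli-Gaussian prior
    (theta = 0 w.p. 1 - eps, else N(0, tau2)) for y = theta + N(0, sigma2)."""
    s2 = tau2 + sigma2
    # Posterior probability that theta is nonzero given y
    num = eps * np.exp(-y**2 / (2 * s2)) / np.sqrt(s2)
    den = num + (1 - eps) * np.exp(-y**2 / (2 * sigma2)) / np.sqrt(sigma2)
    p_nonzero = num / den
    # Conditional on being nonzero, E[theta | y] = (tau2 / s2) * y
    return p_nonzero * (tau2 / s2) * y
```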
no code implementations • 14 Jun 2017 • Ramji Venkataramanan, Oliver Johnson
In statistical inference problems, we wish to obtain lower bounds on the minimax risk, that is, to bound the performance achievable by any possible estimator.
1 code implementation • 18 May 2017 • Mahed Abroshan, Ramji Venkataramanan, Albert Guillén i Fàbregas
Consider two remote nodes, each having a binary sequence.
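A standard building block for synchronizing such sequences across a single deletion is the Varshamov-Tenengolts (VT) syndrome: one node sends the syndrome of its sequence, and the other can recover it from a one-deletion copy. A brute-force sketch (not the paper's construction):

```python
def vt_syndrome(x):
    """Varshamov-Tenengolts syndrome of a binary sequence x (1-indexed)."""
    n = len(x)
    return sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1)

def recover_from_one_deletion(y, syndrome, n):
    """Recover the length-n sequence x from y (x with one bit deleted) and
    x's VT syndrome, by brute force over all reinsertions. The VT
    construction guarantees the match is unique as a sequence."""
    for pos in range(n):
        for bit in (0, 1):
            cand = y[:pos] + [bit] + y[pos:]
            if vt_syndrome(cand) == syndrome:
                return cand
    return None
```

For example, with x = [1, 0, 1, 1, 0], sending vt_syndrome(x) and n = 5 lets the other node recover x from any single-deletion copy.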
Information Theory
no code implementations • 6 Jun 2016 • Cynthia Rush, Ramji Venkataramanan
The concentration inequality also indicates that the number of AMP iterations $t$ can grow no faster than order $\frac{\log n}{\log \log n}$ for the performance to be close to the state evolution predictions with high probability.
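For reference, a minimal AMP iteration of the kind whose finite-sample behavior is being tracked, here for sparse linear regression with a soft-threshold denoiser, assuming a design matrix with i.i.d. $\mathcal{N}(0, 1/n)$ entries. Illustrative only:

```python
import numpy as np

def amp_sparse(A, y, lam=1.0, n_iters=30):
    """AMP for y = A x + noise with a soft-threshold denoiser. The Onsager
    correction keeps the effective noise Gaussian, which is what the state
    evolution predictions track."""
    n, d = A.shape
    x = np.zeros(d)
    z = y.copy()
    for _ in range(n_iters):
        pseudo = x + A.T @ z                         # effective observation
        tau = np.sqrt(np.mean(z ** 2))               # effective noise level
        x_new = np.sign(pseudo) * np.maximum(np.abs(pseudo) - lam * tau, 0)
        onsager = (z / n) * np.count_nonzero(x_new)  # divergence of denoiser
        z = y - A @ x_new + onsager
        x = x_new
    return x
```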
no code implementations • 1 Feb 2016 • K. Pavan Srinath, Ramji Venkataramanan
The JS-estimator shrinks the observed vector towards the origin, and the risk reduction over the ML-estimator is greatest for $\boldsymbol{\theta}$ that lie close to the origin.
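A sketch of the (positive-part) James-Stein estimator for $y \sim \mathcal{N}(\boldsymbol{\theta}, \sigma^2 I_d)$ with $d \ge 3$:

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimate of theta from y ~ N(theta, sigma2*I):
    shrink y towards the origin; the risk gain over the ML estimator (y itself)
    is largest when ||theta|| is small."""
    d = len(y)
    shrink = 1.0 - (d - 2) * sigma2 / np.sum(y ** 2)
    return max(shrink, 0.0) * y
```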
no code implementations • 23 Jan 2015 • Cynthia Rush, Adam Greig, Ramji Venkataramanan
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity.
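A minimal sketch of sparse superposition (SPARC) encoding under standard assumptions: a design matrix with i.i.d. $\mathcal{N}(0, 1/n)$ entries, one nonzero per section, and a flat power allocation. Names and the power normalization are illustrative.

```python
import numpy as np

def sparc_encode(A, message, M, L, P=1.0):
    """SPARC encoding: the message selects one column per section of the
    n x (M*L) design matrix A; the codeword is the sum of the chosen
    columns, scaled to meet the power constraint P. `message` is a
    length-L array of symbols in {0, ..., M-1}."""
    n = A.shape[0]
    beta = np.zeros(M * L)
    beta[np.arange(L) * M + message] = np.sqrt(n * P / L)
    return A @ beta   # E||codeword||^2 ~ n*P when A has N(0, 1/n) entries
```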
no code implementations • 7 Dec 2012 • Ramji Venkataramanan, Tuhin Sarkar, Sekhar Tatikonda
The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence.
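A hedged sketch of such a successive-approximation encoder: in each section, pick the column most correlated with the current residual and subtract its scaled contribution. The constant coefficient `c` and the names are illustrative, not the paper's exact choices.

```python
import numpy as np

def greedy_sparc_compress(A, s, M, L, c):
    """Greedy SPARC-style lossy encoder: over L sections of the n x (M*L)
    matrix A, choose the column best matching the current residual of the
    source sequence s, then subtract c times that column."""
    residual = s.copy()
    chosen = np.empty(L, dtype=int)
    for ell in range(L):
        cols = A[:, ell * M:(ell + 1) * M]      # this section's columns
        j = int(np.argmax(cols.T @ residual))   # best-matching column
        chosen[ell] = j
        residual = residual - c * cols[:, j]
    return chosen, residual                     # indices and final error
```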
no code implementations • 3 Feb 2012 • Ramji Venkataramanan, Antony Joseph, Sekhar Tatikonda
We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression.