no code implementations • 21 Dec 2022 • Cynthia Rush, Fiona Skerman, Alexander S. Wein, Dana Yang
In particular, we consider certain hypothesis testing problems between models with different community structures, and we show (in the low-degree polynomial framework) that testing between two options is as hard as finding the communities.
no code implementations • 10 May 2022 • Martina Cardone, Alex Dytso, Cynthia Rush
It is well known that central order statistics exhibit central-limit behavior, converging to a Gaussian distribution as the sample size grows.
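This central-limit behavior is easy to check empirically. The sketch below (an illustration, not code from the paper) simulates sample medians of Uniform(0,1) draws and compares them to the classical asymptotics: the median is approximately Gaussian with mean 0.5 and standard deviation $1/(2 f(m) \sqrt{n}) = 1/(2\sqrt{n})$ for the uniform density.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1001, 20000

# Sample median (the central order statistic) of n Uniform(0,1) draws,
# repeated over many independent trials.
medians = np.median(rng.random((trials, n)), axis=1)

# Asymptotic theory for Uniform(0,1): mean 0.5, std 1/(2*sqrt(n)).
print(medians.mean())                 # close to 0.5
print(medians.std(), 1 / (2 * np.sqrt(n)))  # empirical vs asymptotic std
```

The empirical mean and spread match the Gaussian prediction closely even at moderate sample sizes.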
1 code implementation • 27 May 2021 • Zhiqi Bu, Jason Klusowski, Cynthia Rush, Weijie J. Su
Sorted l1 regularization has been incorporated into many methods for solving high-dimensional statistical estimation problems, including the SLOPE estimator in linear regression.
no code implementations • 16 Apr 2021 • Marco Avella Medina, José Luis Montiel Olea, Cynthia Rush, Amilcar Velez
To make this point, we derive a Bernstein-von Mises theorem showing convergence in total variation distance of $\alpha$-posteriors and their variational approximations to limiting Gaussian distributions.
no code implementations • NeurIPS 2020 • Jean Barbier, Nicolas Macris, Cynthia Rush
We determine statistical and computational limits for estimating a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix. We work in a sparse limit, where the underlying hidden vector (which constructs the rank-one matrix) has a number of non-zero components that scales sub-linearly with the total dimension of the vector, and the signal-to-noise ratio tends to infinity at an appropriate speed.
no code implementations • 25 Mar 2020 • Hangjin Liu, Cynthia Rush, Dror Baron
For this reason, a novel algorithmic framework that incorporates SI into AMP, referred to as approximate message passing with side information (AMP-SI), has been recently introduced.
1 code implementation • NeurIPS 2019 • Zhiqi Bu, Jason Klusowski, Cynthia Rush, Weijie Su
SLOPE is a relatively new convex optimization procedure for high-dimensional linear regression via the sorted l1 penalty: the larger the rank of the fitted coefficient, the larger the penalty.
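The sorted l1 penalty pairs the k-th largest fitted coefficient (in magnitude) with the k-th largest regularization weight, so larger coefficients incur larger penalties. A minimal sketch of the penalty computation (illustrative names and values, not from the paper):

```python
import numpy as np

def slope_penalty(beta, lam):
    """Sorted l1 penalty: sum of lam[k] * |beta|_(k), where |beta|_(k)
    is the k-th largest coefficient magnitude and lam is non-increasing."""
    abs_desc = np.sort(np.abs(beta))[::-1]      # magnitudes, descending
    lam_desc = np.sort(np.asarray(lam))[::-1]   # weights, descending
    return float(np.sum(lam_desc * abs_desc))

beta = np.array([0.5, -2.0, 1.0])
lam = np.array([3.0, 2.0, 1.0])
# |2.0| gets weight 3.0, |1.0| gets 2.0, |0.5| gets 1.0
print(slope_penalty(beta, lam))  # 6.0 + 2.0 + 0.5 = 8.5
```

With all weights equal, this reduces to the ordinary lasso penalty; decreasing weights are what let SLOPE penalize large coefficients more heavily.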
no code implementations • 1 Feb 2019 • Hangjin Liu, Cynthia Rush, Dror Baron
For this reason, a novel algorithmic framework that incorporates SI into AMP, referred to as approximate message passing with side information (AMP-SI), has recently been introduced.
Information Theory
no code implementations • 6 Jun 2016 • Cynthia Rush, Ramji Venkataramanan
The concentration inequality also indicates that the number of AMP iterations $t$ can grow no faster than order $\frac{\log n}{\log \log n}$ for the performance to be close to the state evolution predictions with high probability.
no code implementations • 23 Jan 2015 • Cynthia Rush, Adam Greig, Ramji Venkataramanan
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity.