no code implementations • 16 Jan 2024 • Shuheng Zhou
In particular, we analyze computationally efficient algorithms proposed by the same author to partition data into two groups approximately according to their population of origin, given a small sample.
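As a purely illustrative sketch (not the algorithms analyzed in the paper), one generic way to split a small high-dimensional sample into two groups is to take the sign of the leading eigenvector of the centered Gram matrix; the mixture used to generate data and the sparse mean separation below are assumptions made only for demonstration.

```python
# Illustrative sketch only: a generic spectral two-group split, NOT the
# algorithms analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Small sample from a two-component mixture: n points in p dimensions.
n, p = 40, 200
labels = rng.integers(0, 2, size=n)               # true populations of origin
mean_shift = np.zeros(p)
mean_shift[:10] = 1.0                             # sparse mean separation (assumed)
X = rng.standard_normal((n, p)) + np.outer(2 * labels - 1, mean_shift)

# Center, form the n x n Gram matrix, and split by the sign of its
# leading eigenvector.
Xc = X - X.mean(axis=0)
G = Xc @ Xc.T
eigvals, eigvecs = np.linalg.eigh(G)
partition = (eigvecs[:, -1] > 0).astype(int)      # estimated group membership

# Agreement with the true partition (up to label swap).
acc = max(np.mean(partition == labels), np.mean(partition != labels))
print(f"agreement with true partition: {acc:.2f}")
```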
no code implementations • 7 Apr 2019 • Roger Fan, Byoungwook Jang, Yuekai Sun, Shuheng Zhou
Estimating conditional dependence graphs and precision matrices is among the most common problems in modern statistics and machine learning.
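For a concrete baseline, the sketch below estimates a precision matrix and its conditional dependence graph with scikit-learn's GraphicalLasso on simulated data; this is a generic penalized-likelihood estimator with an assumed penalty level, not the paper's own method for noisy or missing data.

```python
# Minimal sketch: estimate a sparse precision matrix (and hence a
# conditional dependence graph) with the graphical lasso. The chain-graph
# simulation and alpha = 0.05 are assumptions for illustration.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Simulate data from a sparse precision matrix (a chain graph).
p, n = 20, 500
Theta = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) + np.diag(0.4 * np.ones(p - 1), -1)
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Penalized-likelihood estimate of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(X)
Theta_hat = model.precision_

# Edges of the estimated conditional dependence graph: nonzero off-diagonals.
edges = np.argwhere(np.triu(np.abs(Theta_hat) > 1e-3, k=1))
print(f"estimated edges: {len(edges)} (true chain graph has {p - 1})")
```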
no code implementations • 15 Nov 2016 • Mark Rudelson, Shuheng Zhou
Under sparsity and restricted eigenvalue-type conditions, we show that one is able to recover a sparse vector $\beta^* \in \mathbb{R}^m$ from the model given a single observation matrix $X$ and the response vector $y$.
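A minimal, generic sketch of the recovery task is given below, assuming a clean Gaussian design and an arbitrary Lasso penalty; it does not reproduce the paper's estimator or its treatment of dependent measurements.

```python
# Generic sparse recovery from a single (X, y) pair via the Lasso.
# Dimensions, sparsity, noise level, and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

f, m, s = 200, 500, 5                      # sample size, dimension, sparsity
beta_star = np.zeros(m)
beta_star[:s] = 1.0
X = rng.standard_normal((f, m))
y = X @ beta_star + 0.5 * rng.standard_normal(f)

beta_hat = Lasso(alpha=0.1).fit(X, y).coef_
support_hat = np.flatnonzero(np.abs(beta_hat) > 1e-3)
print("recovered support:", support_hat)
```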
no code implementations • 13 Nov 2016 • Michael Hornstein, Roger Fan, Kerby Shedden, Shuheng Zhou
It has been proposed that complex populations, such as those that arise in genomics studies, may exhibit dependencies among observations as well as among variables.
no code implementations • 9 Feb 2015 • Mark Rudelson, Shuheng Zhou
Suppose that we observe $y \in \mathbb{R}^f$ and $X \in \mathbb{R}^{f \times m}$ in the following errors-in-variables model: \begin{eqnarray*} y & = & X_0 \beta^* + \epsilon \\ X & = & X_0 + W \end{eqnarray*} where $X_0$ is an $f \times m$ design matrix with independent subgaussian row vectors, $\epsilon \in \mathbb{R}^f$ is a noise vector, and $W$ is a mean-zero $f \times m$ random noise matrix with independent subgaussian column vectors, independent of $X_0$ and $\epsilon$.
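The snippet below simulates one draw from exactly this model, using Gaussian entries as a convenient subgaussian special case; the dimensions, sparsity level, and noise scales are illustrative assumptions.

```python
# Simulating the errors-in-variables model displayed above:
#   y = X_0 beta* + eps,   X = X_0 + W,   with only (X, y) observed.
import numpy as np

rng = np.random.default_rng(2)

f, m, s = 200, 400, 5
beta_star = np.zeros(m)
beta_star[:s] = 1.0

X0 = rng.standard_normal((f, m))       # design with independent subgaussian rows
W = 0.3 * rng.standard_normal((f, m))  # mean-zero measurement noise, independent of X0
eps = 0.5 * rng.standard_normal(f)     # additive noise on the response

y = X0 @ beta_star + eps               # response generated from the latent design
X = X0 + W                             # only the corrupted design is observed
```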
no code implementations • 23 Sep 2012 • Shuheng Zhou
Under sparsity conditions, we show that one is able to recover the graphs and covariance matrices with a single random matrix from the matrix variate normal distribution.
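As a rough illustration of the single-sample setting (not the paper's estimator or its guarantees), one natural starting point under a Kronecker product covariance assumption is to form the row and column Gram matrices of the one observed matrix, convert each to a correlation matrix, and apply a graphical lasso penalty:

```python
# Rough sketch only: row/column correlation matrices from one matrix-valued
# observation, each fed to a graphical lasso. Dimensions are kept square so
# both Gram matrices are full rank; the penalty level is an assumption.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(3)

n, p = 50, 50
X = rng.standard_normal((n, p))            # a single matrix-valued observation

def corr(S):
    """Convert a covariance-like matrix to a correlation matrix."""
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

S_cols = corr(X.T @ X / n)                 # p x p column (variable) correlations
S_rows = corr(X @ X.T / p)                 # n x n row (sample) correlations

_, Theta_cols = graphical_lasso(S_cols, alpha=0.2)   # column-graph estimate
_, Theta_rows = graphical_lasso(S_rows, alpha=0.2)   # row-graph estimate
```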
no code implementations • 3 Apr 2012 • Theodoros Tsiligkaridis, Alfred O. Hero III, Shuheng Zhou
The KGlasso algorithm generalizes Glasso, introduced by Yuan and Lin ["Model selection and estimation in the Gaussian graphical model," Biometrika, vol. 94, no. 1, pp. 19-35, 2007].
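A simplified sketch of the flip-flop idea behind a Kronecker graphical lasso is shown below: assuming $\mathrm{Cov}(\mathrm{vec}(X)) = A \otimes B$, the two precision factors are updated alternately, each from a Gram matrix whitened by the current estimate of the other factor. Scalings, penalty levels, and the stopping rule here are placeholders, not the paper's exact algorithm.

```python
# Simplified flip-flop sketch for a Kronecker-structured graphical lasso.
# Identity factors, alpha = 0.1, and 5 sweeps are assumptions for illustration.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(4)

f, p, N = 20, 15, 50
X = rng.standard_normal((N, f, p))       # N matrix-valued samples (identity factors here)

Theta_A = np.eye(f)                      # row-precision estimate
Theta_B = np.eye(p)                      # column-precision estimate
alpha = 0.1

for _ in range(5):                       # a few flip-flop sweeps
    # Row-factor Gram matrix, whitened by the current column precision.
    S_A = sum(Xi @ Theta_B @ Xi.T for Xi in X) / (N * p)
    _, Theta_A = graphical_lasso(S_A, alpha=alpha)
    # Column-factor Gram matrix, whitened by the current row precision.
    S_B = sum(Xi.T @ Theta_A @ Xi for Xi in X) / (N * f)
    _, Theta_B = graphical_lasso(S_B, alpha=alpha)
```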
no code implementations • NeurIPS 2009 • Shuheng Zhou
Given $n$ noisy samples with $p$ dimensions, where $n \ll p$, we show that the multi-stage thresholding procedures can accurately estimate a sparse vector $\beta \in \mathbb{R}^p$ in a linear model, under the restricted eigenvalue conditions (Bickel-Ritov-Tsybakov 09).
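A hedged sketch of a two-stage procedure in this spirit: an initial Lasso fit, hard thresholding of small coefficients, and an ordinary least squares refit on the selected support. The tuning constants are illustrative assumptions, not the paper's prescribed choices.

```python
# Two-stage thresholded estimation sketch in the n << p regime:
# Lasso -> hard threshold -> OLS refit on the selected support.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(5)

n, p, s = 100, 400, 5                      # n << p regime
beta = np.zeros(p)
beta[:s] = 1.5
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

# Stage 1: initial Lasso estimate (alpha is an assumed tuning constant).
b1 = Lasso(alpha=0.1).fit(X, y).coef_

# Stage 2: hard-threshold, then refit unpenalized on the selected support.
support = np.flatnonzero(np.abs(b1) > 0.25)
b2 = np.zeros(p)
if support.size:
    b2[support] = LinearRegression().fit(X[:, support], y).coef_

print("selected support:", support, " estimation error:", np.linalg.norm(b2 - beta))
```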
no code implementations • NeurIPS 2007 • Shuheng Zhou, Larry Wasserman, John D. Lafferty
Recent research has studied the role of sparsity in high dimensional regression and signal reconstruction, establishing theoretical limits for recovering sparse models from sparse data.