no code implementations • 17 Nov 2023 • Shabarish Chenakkod, Michał Dereziński, Xiaoyu Dong, Mark Rudelson
We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression.
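As a rough illustration of the sketch-and-solve idea behind such embeddings (a minimal sketch, not the paper's construction; the embedding dimension and per-column sparsity below are illustrative placeholders):

```python
import numpy as np

# Sketch-and-solve least squares with a sparse oblivious subspace embedding
# (CountSketch-style: a few random +/-1 entries per column). Parameters are
# illustrative placeholders, not the paper's optimal choices.

def sparse_embedding(m, n, s=4, rng=None):
    """Sparse m x n sketching matrix with s scaled +/-1 entries per column."""
    rng = np.random.default_rng(rng)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

rng = np.random.default_rng(0)
n, d = 2000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

S = sparse_embedding(m=20 * d, n=n, rng=1)      # embedding dimension O(d)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_sketch - x_exact))       # small if S embeds col([A, b])
```

Because $S$ is sparse, $SA$ and $Sb$ can be accumulated in a single pass over the rows of $(A, b)$, which is the streaming regime the single-pass regression result concerns.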
no code implementations • 11 Oct 2021 • Cheng Mao, Mark Rudelson, Konstantin Tikhomirov
Let $G$ and $G'$ be graphs that are marginally $G(n, p)$ Erdős–Rényi, identified with their adjacency matrices.
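A minimal simulation of this correlated-graph setup (the subsampling construction below is one standard model, assumed here for illustration):

```python
import numpy as np

# Sample two graphs that are each marginally G(n, p) but correlated, by
# subsampling a common G(n, p/s) parent; a random permutation then hides
# the vertex correspondence.

def correlated_gnp(n, p, s, rng):
    """Each parent edge (prob. p/s) survives independently with prob. s
    in each copy, so each copy is marginally G(n, p)."""
    parent = np.triu(rng.random((n, n)) < p / s, 1)
    keep1 = np.triu(rng.random((n, n)) < s, 1)
    keep2 = np.triu(rng.random((n, n)) < s, 1)
    A = parent & keep1
    B = parent & keep2
    A, B = (M | M.T for M in (A, B))      # symmetrize the adjacency matrices
    return A.astype(int), B.astype(int)

rng = np.random.default_rng(0)
A, B = correlated_gnp(n=200, p=0.1, s=0.9, rng=rng)
pi = rng.permutation(200)
B_shuffled = B[np.ix_(pi, pi)]            # the matcher observes A and B_shuffled
```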
no code implementations • 28 Jan 2021 • Cheng Mao, Mark Rudelson, Konstantin Tikhomirov
Graph matching, also known as network alignment, refers to finding a bijection between the vertex sets of two given graphs so as to maximally align their edges.
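The objective can be made concrete in a few lines; the brute-force solver below is exponential and purely illustrative:

```python
import numpy as np
from itertools import permutations

# The matching objective: a bijection pi is scored by the number of edges
# it aligns, sum_{i<j} A[i, j] * B[pi(i), pi(j)].

def aligned_edges(A, B, pi):
    Bp = B[np.ix_(pi, pi)]            # relabel B's vertices by pi
    return int((A * Bp).sum()) // 2   # each aligned edge is counted twice

def match_brute_force(A, B):
    """Exhaustive search over all n! bijections -- illustration only."""
    n = A.shape[0]
    return max(permutations(range(n)),
               key=lambda pi: aligned_edges(A, B, np.array(pi)))

rng = np.random.default_rng(0)
n = 6
A = np.triu(rng.random((n, n)) < 0.5, 1)
A = (A | A.T).astype(int)
pi_true = rng.permutation(n)
B = A[np.ix_(pi_true, pi_true)]       # a relabeled copy of A
print(match_brute_force(A, B))        # a bijection aligning every edge of A
```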
no code implementations • 11 Apr 2019 • Shiva Prasad Kasiviswanathan, Mark Rudelson
Matrices satisfying the Restricted Isometry Property (RIP) play an important role in the areas of compressed sensing and statistical learning.
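For concreteness, RIP of order $k$ with constant $\delta$ asks that $(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for every $k$-sparse $x$. Exact verification is intractable, so the sketch below only probes $\delta$ on random $k$-sparse unit vectors:

```python
import numpy as np

# Monte Carlo estimate of the restricted isometry constant delta: the worst
# observed deviation of ||A x||^2 from 1 over random k-sparse unit vectors.
# This only lower-bounds the true delta.

def estimate_rip_constant(A, k, trials=2000, rng=None):
    rng = np.random.default_rng(rng)
    N = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        support = rng.choice(N, size=k, replace=False)
        x = np.zeros(N)
        x[support] = rng.standard_normal(k)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

rng = np.random.default_rng(0)
n, N, k = 128, 512, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)   # normalized Gaussian matrix
print(estimate_rip_constant(A, k, rng=1))
```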
no code implementations • 25 Jul 2017 • Shiva Prasad Kasiviswanathan, Mark Rudelson
This construction allows a fixed matrix with an easily verifiable condition to be incorporated into the design process, and it yields compressed design matrices with a lower storage requirement than a standard design matrix.
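A hypothetical sketch of the storage idea only (the factorized form below is an assumption for illustration, not the paper's construction):

```python
import numpy as np

# Hypothetical storage scheme: represent the n x m design implicitly as
# X = G @ B, where B is a fixed k x m matrix with an easily verifiable
# condition and G is a small n x k random matrix. Only G and B are stored:
# n*k + k*m entries instead of n*m.

class CompressedDesign:
    def __init__(self, n, B, rng=None):
        rng = np.random.default_rng(rng)
        k = B.shape[0]
        self.G = rng.standard_normal((n, k)) / np.sqrt(k)
        self.B = B

    def matvec(self, beta):
        # computes X @ beta without ever materializing X
        return self.G @ (self.B @ beta)

k, m, n = 64, 4096, 512
B_fixed = np.random.default_rng(0).standard_normal((k, m))  # stands in for the fixed matrix
X = CompressedDesign(n, B_fixed, rng=1)
y = X.matvec(np.ones(m))
```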
no code implementations • 15 Nov 2016 • Mark Rudelson, Shuheng Zhou
Under sparsity and restricted eigenvalue (RE) type conditions, we show that one is able to recover a sparse vector $\beta^* \in \mathbb{R}^m$ from the model given a single observation matrix $X$ and the response vector $y$.
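As a generic stand-in for such a recovery procedure (the paper analyzes estimators corrected for the errors-in-variables setting; plain Lasso via ISTA is shown only to make the recovery task concrete):

```python
import numpy as np

# ISTA (proximal gradient) for the Lasso,
#   minimize 0.5 * ||y - X beta||^2 + lam * ||beta||_1,
# used as a generic sparse-recovery stand-in.

def ista(X, y, lam, steps=1000):
    L = np.linalg.norm(X, 2) ** 2                 # Lipschitz const. of gradient
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        z = beta - X.T @ (X @ beta - y) / L       # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(0)
f, m, s = 200, 500, 5
X = rng.standard_normal((f, m))
beta_star = np.zeros(m)
beta_star[:s] = 1.0
y = X @ beta_star + 0.1 * rng.standard_normal(f)
print(np.linalg.norm(ista(X, y, lam=10.0) - beta_star))   # small recovery error
```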
no code implementations • 22 Apr 2015 • Shiva Prasad Kasiviswanathan, Mark Rudelson
In this paper, we initiate the study of non-asymptotic spectral theory of random kernel matrices.
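A minimal numerical experiment in this spirit (the Gaussian kernel and all parameter choices below are assumptions for illustration):

```python
import numpy as np

# Spectrum of a random Gaussian kernel matrix
#   K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
# built from n random points in R^d.

rng = np.random.default_rng(0)
n, d, sigma = 500, 50, 1.0
X = rng.standard_normal((n, d)) / np.sqrt(d)             # rows are the points
G = X @ X.T
sq = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G   # squared distances
K = np.exp(-sq / (2 * sigma ** 2))
eigs = np.linalg.eigvalsh(K)
print(eigs[-1], eigs[0])   # largest and smallest eigenvalues
```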
no code implementations • 9 Feb 2015 • Mark Rudelson, Shuheng Zhou
Suppose that we observe $y \in \mathbb{R}^f$ and $X \in \mathbb{R}^{f \times m}$ in the following errors-in-variables model: \begin{align*} y &= X_0 \beta^* + \epsilon, \\ X &= X_0 + W, \end{align*} where $X_0$ is an $f \times m$ design matrix with independent subgaussian row vectors, $\epsilon \in \mathbb{R}^f$ is a noise vector, and $W$ is a mean-zero $f \times m$ random noise matrix with independent subgaussian column vectors, independent of $X_0$ and $\epsilon$.
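The model is straightforward to simulate; Gaussian entries below are one concrete subgaussian choice:

```python
import numpy as np

# Simulating the stated model: y = X0 beta* + eps, X = X0 + W, where only
# (y, X) are observed.

rng = np.random.default_rng(0)
f, m, s = 300, 600, 5
X0 = rng.standard_normal((f, m))          # latent design, subgaussian rows
W = 0.5 * rng.standard_normal((f, m))     # measurement-error matrix
eps = 0.1 * rng.standard_normal(f)
beta_star = np.zeros(m)
beta_star[:s] = 1.0
y = X0 @ beta_star + eps
X = X0 + W                                # the observed, corrupted design
# Naive least squares on (X, y) is biased by W; corrected estimators replace
# X.T @ X with X.T @ X minus an estimate of W's covariance.
```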