Search Results for author: Mark Rudelson

Found 8 papers, 0 papers with code

Optimal Embedding Dimension for Sparse Subspace Embeddings

no code implementations17 Nov 2023 Shabarish Chenakkod, Michał Dereziński, Xiaoyu Dong, Mark Rudelson

We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression.
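As a rough illustration of the kind of object in this abstract (not the paper's construction, which achieves embedding dimension $O(d)$; a one-nonzero-per-column sketch like the one below generally needs a larger embedding dimension), a CountSketch-style sparse oblivious subspace embedding can be applied in time proportional to the number of nonzeros of the input:

```python
import numpy as np

# Illustrative sketch of a sparse oblivious subspace embedding:
# each column of S has a single random +/-1 entry, so applying S
# costs O(nnz(A)) rather than dense matrix-multiplication time.
# Dimensions here are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n, d, m = 1000, 10, 200          # ambient dim, subspace dim, embedding dim

A = rng.standard_normal((n, d))  # tall matrix whose column space we embed

rows = rng.integers(0, m, size=n)        # hash each coordinate to a row of S
signs = rng.choice([-1.0, 1.0], size=n)  # random sign per coordinate

SA = np.zeros((m, d))
for i in range(n):               # accumulate S @ A one input row at a time
    SA[rows[i]] += signs[i] * A[i]

# Norms of vectors in the column space are approximately preserved
x = rng.standard_normal(d)
ratio = np.linalg.norm(SA @ x) / np.linalg.norm(A @ x)
```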

Exact Matching of Random Graphs with Constant Correlation

no code implementations11 Oct 2021 Cheng Mao, Mark Rudelson, Konstantin Tikhomirov

Let $G$ and $G'$ be $G(n, p)$ Erd\H{o}s--R\'enyi graphs marginally, identified with their adjacency matrices.

Graph Matching

Random Graph Matching with Improved Noise Robustness

no code implementations28 Jan 2021 Cheng Mao, Mark Rudelson, Konstantin Tikhomirov

Graph matching, also known as network alignment, refers to finding a bijection between the vertex sets of two given graphs so as to maximally align their edges.
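The objective described above can be written down directly; the toy brute-force search below (feasible only for a handful of vertices, and not the paper's algorithm) makes the "maximally align their edges" criterion concrete:

```python
import itertools
import numpy as np

# Illustrative brute force over all vertex bijections (permutations),
# scoring each by the number of edges aligned between the two graphs.
# Real graph matching algorithms avoid this factorial search.
def match_graphs(A, B):
    n = A.shape[0]
    best_perm, best_score = None, -1.0
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]                # permutation matrix
        score = np.sum((P @ A @ P.T) * B) / 2    # count of aligned edges
        if score > best_score:
            best_perm, best_score = perm, score
    return best_perm, best_score

# Toy example: B is the path graph A with its vertices relabeled
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
B = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]])
perm, score = match_graphs(A, B)   # a bijection aligning both edges
```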

Graph Matching

Restricted Isometry Property under High Correlations

no code implementations11 Apr 2019 Shiva Prasad Kasiviswanathan, Mark Rudelson

Matrices satisfying the Restricted Isometry Property (RIP) play an important role in the areas of compressed sensing and statistical learning.

Dimensionality Reduction

Restricted Eigenvalue from Stable Rank with Applications to Sparse Linear Regression

no code implementations25 Jul 2017 Shiva Prasad Kasiviswanathan, Mark Rudelson

This construction allows incorporating a fixed matrix that has an easily {\em verifiable} condition into the design process, and allows for generation of {\em compressed} design matrices that have a lower storage requirement than a standard design matrix.

regression

Errors-in-variables models with dependent measurements

no code implementations15 Nov 2016 Mark Rudelson, Shuheng Zhou

Under sparsity and restricted eigenvalue type conditions, we show that one can recover a sparse vector $\beta^* \in \mathbb{R}^m$ from the model given a single observation matrix $X$ and the response vector $y$.

Spectral Norm of Random Kernel Matrices with Applications to Privacy

no code implementations22 Apr 2015 Shiva Prasad Kasiviswanathan, Mark Rudelson

In this paper, we initiate the study of non-asymptotic spectral theory of random kernel matrices.
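As a concrete illustration of the object studied, one can form a random kernel matrix and compute its spectral norm; the Gaussian kernel and the sizes below are illustrative assumptions, not choices from the paper:

```python
import numpy as np

# Illustrative random kernel matrix: rows of X are random data points,
# K[i, j] = exp(-||x_i - x_j||^2 / 2) is the Gaussian kernel, and the
# spectral norm (largest singular value) is the quantity of interest.
rng = np.random.default_rng(2)
n, d = 100, 20
X = rng.standard_normal((n, d)) / np.sqrt(d)   # random data points

sq = np.sum(X**2, axis=1)                      # squared norms of the points
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2)

spectral_norm = np.linalg.norm(K, ord=2)       # largest singular value of K
```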

Attribute regression

High dimensional errors-in-variables models with dependent measurements

no code implementations9 Feb 2015 Mark Rudelson, Shuheng Zhou

Suppose that we observe $y \in \mathbb{R}^f$ and $X \in \mathbb{R}^{f \times m}$ in the following errors-in-variables model: \begin{eqnarray*} y & = & X_0 \beta^* + \epsilon \\ X & = & X_0 + W \end{eqnarray*} where $X_0$ is an $f \times m$ design matrix with independent subgaussian row vectors, $\epsilon \in \mathbb{R}^f$ is a noise vector and $W$ is a mean zero $f \times m$ random noise matrix with independent subgaussian column vectors, independent of $X_0$ and $\epsilon$.
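The model in this abstract can be simulated directly; the dimensions, sparsity level, and noise scales below are illustrative assumptions:

```python
import numpy as np

# Illustrative simulation of the errors-in-variables model from the
# abstract: y = X0 @ beta_star + eps, X = X0 + W, where the statistician
# observes only (X, y), not the latent design X0.
rng = np.random.default_rng(1)
f, m, s = 200, 50, 5                      # observations, features, sparsity

X0 = rng.standard_normal((f, m))          # latent design, subgaussian rows
beta_star = np.zeros(m)
beta_star[:s] = 1.0                       # sparse signal to recover
eps = 0.1 * rng.standard_normal(f)        # response noise
W = 0.1 * rng.standard_normal((f, m))     # measurement noise on the design

y = X0 @ beta_star + eps                  # observed response
X = X0 + W                                # observed, corrupted design
```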

