Search Results for author: Matan Gavish

Found 7 papers, 3 papers with code

LOCA: LOcal Conformal Autoencoder for standardized data coordinates

no code implementations • 15 Apr 2020 • Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom Bertalan, Matan Gavish, Ioannis G. Kevrekidis, Ronald R. Coifman

We propose a deep-learning-based method for obtaining standardized data coordinates from scientific measurements. Data observations are modeled as samples from an unknown, non-linear deformation of an underlying Riemannian manifold, which is parametrized by a few normalized latent variables.
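As a rough illustration of this data model (not the paper's method), the sketch below generates samples as a made-up non-linear deformation of a few latent variables; the deformation map, dimensions, and sampling choices are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the data model: latent samples z on a 2-D domain,
# pushed through an unknown non-linear deformation f into R^3.
# Everything here is illustrative, not taken from the paper.
n, d_latent = 1000, 2
z = rng.uniform(-1.0, 1.0, size=(n, d_latent))  # normalized latent variables

def deformation(z):
    """A made-up smooth, non-linear embedding R^2 -> R^3."""
    x1 = z[:, 0] + 0.3 * z[:, 1] ** 2
    x2 = z[:, 1] - 0.2 * np.sin(np.pi * z[:, 0])
    x3 = 0.5 * z[:, 0] * z[:, 1]
    return np.stack([x1, x2, x3], axis=1)

x = deformation(z)  # observed measurements; the goal is to recover z
```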

Optimal Shrinkage of Singular Values Under Random Data Contamination

no code implementations • NeurIPS 2017 • Danny Barash, Matan Gavish

A low-rank matrix X has been contaminated by uniformly distributed noise, missing values, outliers, and corrupt entries.
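To make the contamination model concrete, here is a toy simulation of the four corruption types named in the abstract; all rates and scales are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 50, 3

# Rank-r signal matrix X.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
Y = X.copy()

Y += rng.uniform(-0.5, 0.5, size=(m, n))          # uniformly distributed noise
missing = rng.random((m, n)) < 0.10                # ~10% missing values
Y[missing] = 0.0
outliers = rng.random((m, n)) < 0.02               # sparse additive outliers
Y[outliers] += rng.uniform(-10, 10, size=outliers.sum())
corrupt = rng.random((m, n)) < 0.02                # corrupt entries (overwritten)
Y[corrupt] = rng.uniform(-10, 10, size=corrupt.sum())
```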

ReFACTor: Practical Low-Rank Matrix Estimation Under Column-Sparsity

no code implementations • 22 May 2017 • Matan Gavish, Regev Schweiger, Elior Rahmani, Eran Halperin

Various problems in data analysis and statistical genetics call for recovery of a column-sparse, low-rank matrix from noisy observations.
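For concreteness, the sketch below builds one instance of this observation model: a rank-r matrix whose non-zero entries are confined to a small set of columns, observed in additive noise. Sizes, rank, sparsity, and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, k = 100, 200, 3, 20  # k active columns out of n (illustrative)

# Column-sparse, low-rank signal: rank-r structure supported on k columns.
U = rng.standard_normal((m, r))
V = np.zeros((n, r))
active = rng.choice(n, size=k, replace=False)
V[active] = rng.standard_normal((k, r))
X = U @ V.T                                  # non-zero only on `active` columns

Y = X + 0.1 * rng.standard_normal((m, n))    # noisy observations
```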

Optimal Shrinkage of Singular Values

1 code implementation • 29 May 2014 • Matan Gavish, David L. Donoho

For a variety of loss functions, including Mean Squared Error (MSE, i.e., the squared Frobenius norm), the nuclear norm loss, and the operator norm loss, we show that in this framework there is a well-defined asymptotic loss, which we evaluate precisely in each case.

Statistics Theory
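As a minimal sketch of one of these cases, the function below applies the Frobenius-loss (MSE) shrinker for a square n x n matrix with known noise level sigma, using the asymptotic rule eta(y) = sqrt(y^2 - 4) for y >= 2, with singular values measured in units of sqrt(n) * sigma; the general rectangular case uses an aspect-ratio-dependent formula given in the paper.

```python
import numpy as np

def shrink_frobenius(Y, sigma):
    """Optimal singular-value shrinkage under Frobenius (MSE) loss:
    square n x n case with known noise level sigma. Sketch of the
    asymptotic rule eta(y) = sqrt(y**2 - 4) for y >= 2, where
    y = s / (sqrt(n) * sigma)."""
    n = Y.shape[0]
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    y = s / (np.sqrt(n) * sigma)           # noise-normalized singular values
    eta = np.where(y >= 2.0, np.sqrt(np.maximum(y**2 - 4.0, 0.0)), 0.0)
    return U @ np.diag(eta * np.sqrt(n) * sigma) @ Vt
```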

The Maximum Entropy Relaxation Path

no code implementations • 7 Nov 2013 • Moshe Dubiner, Matan Gavish, Yoram Singer

We establish the existence of the relaxation path and give a geometric description of it.

The Optimal Hard Threshold for Singular Values is 4/sqrt(3)

3 code implementations • 24 May 2013 • Matan Gavish, David L. Donoho

In our asymptotic framework, this thresholding rule adapts optimally to unknown rank and to unknown noise level: it is always better than hard thresholding at any other value, regardless of the matrix being recovered, and it is always better than the ideal Truncated SVD (TSVD), which truncates at the true rank of the low-rank matrix.

Methodology
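A minimal sketch of the headline rule for a square n x n matrix with known noise level sigma: keep only singular values above (4/sqrt(3)) * sqrt(n) * sigma. Non-square matrices use a different coefficient lambda(beta), and the unknown-noise variant thresholds at roughly 2.858 times the median singular value (square case); see the paper and its code for those.

```python
import numpy as np

def hard_threshold_denoise(Y, sigma):
    """Hard-threshold the singular values of a square n x n matrix at
    (4 / sqrt(3)) * sqrt(n) * sigma (known noise level sigma)."""
    n = Y.shape[0]
    tau = (4.0 / np.sqrt(3.0)) * np.sqrt(n) * sigma
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.where(s >= tau, s, 0.0)       # zero out sub-threshold values
    return U @ np.diag(s_thr) @ Vt
```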
