Search Results for author: Ming Yuan

Found 24 papers, 1 paper with code

On Recovering the Best Rank-r Approximation from Few Entries

no code implementations11 Nov 2021 Shun Xu, Ming Yuan

In this note, we investigate how well we can reconstruct the best rank-$r$ approximation of a large matrix from a small number of its entries.
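A minimal sketch of the sample-and-project idea behind this question: zero-fill the unobserved entries, rescale by the inverse sampling rate, then truncate the SVD. The function name and the uniform-sampling setup here are illustrative assumptions, not necessarily the estimator analyzed in the paper.

```python
import numpy as np

def sampled_rank_r_approx(M, mask, r):
    """Estimate the best rank-r approximation of M from the entries flagged
    by the boolean `mask`: fill unobserved entries with zeros, rescale by the
    inverse sampling rate (an unbiased estimate of M), then truncate the SVD.
    A generic sketch, not necessarily the paper's estimator."""
    p = mask.mean()                           # empirical sampling rate
    M_hat = np.where(mask, M, 0.0) / p        # inverse-probability weighting
    U, s, Vt = np.linalg.svd(M_hat, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]        # keep the top-r singular triplets

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 3)) @ rng.standard_normal((3, 300))  # rank-3
mask = rng.random(A.shape) < 0.7                                   # ~70% observed
A_hat = sampled_rank_r_approx(A, mask, r=3)
rel_err = np.linalg.norm(A_hat - A) / np.linalg.norm(A)
```

With a well-conditioned low-rank matrix and a moderate sampling rate, the truncated SVD of the rescaled matrix already tracks the best rank-r approximation reasonably well.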

On Estimating Rank-One Spiked Tensors in the Presence of Heavy Tailed Errors

no code implementations20 Jul 2021 Arnab Auddy, Ming Yuan

In this paper, we study the estimation of a rank-one spiked tensor in the presence of heavy tailed noise.

Detecting Structured Signals in Ising Models

no code implementations10 Dec 2020 Nabarun Deb, Rajarshi Mukherjee, Sumit Mukherjee, Ming Yuan

In this paper, we study the effect of dependence on detecting a class of signals in Ising models, where the signals are present in a structured way.

Probability; Statistics Theory (62G10, 62G20, 62C20)

A Sharp Blockwise Tensor Perturbation Bound for Orthogonal Iteration

no code implementations6 Aug 2020 Yuetian Luo, Garvesh Raskutti, Ming Yuan, Anru R. Zhang

A rate-matching deterministic lower bound for tensor reconstruction is also provided, demonstrating the optimality of HOOI.


Perturbation Bounds for (Nearly) Orthogonally Decomposable Tensors

no code implementations17 Jul 2020 Arnab Auddy, Ming Yuan

We develop deterministic perturbation bounds for singular values and vectors of orthogonally decomposable tensors, in a spirit similar to classical results for matrices such as those due to Weyl, Davis, Kahan and Wedin.
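The classical matrix results this paper generalizes can be checked numerically. The snippet below verifies Weyl's singular value perturbation bound, |sigma_i(A + E) - sigma_i(A)| <= ||E||_op, for a random matrix; it is illustrative only, since the paper's bounds concern orthogonally decomposable tensors rather than matrices.

```python
import numpy as np

# Numerical check of Weyl's perturbation bound for singular values:
# each singular value moves by at most the operator norm of the perturbation.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 40))
E = 1e-3 * rng.standard_normal((50, 40))     # small perturbation

sv_gap = np.abs(np.linalg.svd(A + E, compute_uv=False)
                - np.linalg.svd(A, compute_uv=False)).max()
op_norm_E = np.linalg.svd(E, compute_uv=False)[0]   # largest singular value
assert sv_gap <= op_norm_E + 1e-12                  # Weyl's bound holds
```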

ISLET: Fast and Optimal Low-rank Tensor Regression via Importance Sketching

no code implementations9 Nov 2019 Anru Zhang, Yuetian Luo, Garvesh Raskutti, Ming Yuan

In this paper, we develop a novel procedure for low-rank tensor regression, namely \emph{\underline{I}mportance \underline{S}ketching \underline{L}ow-rank \underline{E}stimation for \underline{T}ensors} (ISLET).

Distributed Computing

On the Optimality of Gaussian Kernel Based Nonparametric Tests against Smooth Alternatives

no code implementations7 Sep 2019 Tong Li, Ming Yuan

In addition, our analysis also pinpoints the importance of choosing a diverging scaling parameter when using Gaussian kernels and suggests a data-driven choice of the scaling parameter that yields tests optimal, up to an iterated logarithmic factor, over a wide range of smooth alternatives.
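To make the role of the scaling parameter concrete, here is a toy Gaussian-kernel two-sample statistic in which `nu` plays that role. This is a generic sketch under assumed names; the paper's point is that `nu` must diverge with the sample size at a data-driven rate for optimality, which this toy statistic does not implement.

```python
import numpy as np

def mmd2(x, y, nu):
    """Biased squared MMD between 1-D samples x and y under the Gaussian
    kernel k(s, t) = exp(-nu * (s - t)**2); nu is the scaling parameter.
    Illustrative sketch only, with a fixed (non-diverging) nu."""
    k = lambda a, b: np.exp(-nu * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
null_stat = mmd2(x, rng.standard_normal(500), nu=1.0)       # same law
alt_stat = mmd2(x, 1.0 + rng.standard_normal(500), nu=1.0)  # shifted law
```

Under the null the statistic is near zero, while a mean shift pushes it well above; how fast `nu` may grow with the sample size governs which smooth alternatives the test can detect.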

Statistical Inferences of Linear Forms for Noisy Matrix Completion

no code implementations31 Aug 2019 Dong Xia, Ming Yuan

We introduce a flexible framework for making inferences about general linear forms of a large matrix based on noisy observations of a subset of its entries.

Matrix Completion

Joint Demosaicing and Denoising with Perceptual Optimization on a Generative Adversarial Network

no code implementations13 Feb 2018 Weisheng Dong, Ming Yuan, Xin Li, Guangming Shi

Image demosaicing - one of the most important early stages in digital camera pipelines - addresses the problem of reconstructing a full-resolution image from so-called color-filter-array measurements.

Demosaicking Denoising +1

Finding Differentially Covarying Needles in a Temporally Evolving Haystack: A Scan Statistics Perspective

no code implementations20 Nov 2017 Ronak Mehta, Hyunwoo J. Kim, Shulei Wang, Sterling C. Johnson, Ming Yuan, Vikas Singh

Recent results in coupled or temporal graphical models offer schemes for estimating the relationship structure between features when the data come from related (but distinct) longitudinal sources.

Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries

no code implementations14 Nov 2017 Dong Xia, Ming Yuan, Cun-Hui Zhang

To fill this void, in this article we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm, which suggest significant room for improvement over the existing approaches.

Effective Tensor Sketching via Sparsification

no code implementations31 Oct 2017 Dong Xia, Ming Yuan

In particular, we show that for a $k$th order $d\times\cdots\times d$ cubic tensor of {\it stable rank} $r_s$, the sample size requirement for achieving a relative error $\varepsilon$ is, up to a logarithmic factor, of the order $r_s^{1/2} d^{k/2}/\varepsilon$ when $\varepsilon$ is relatively large, and of the order $r_s d/\varepsilon^2$, which is essentially optimal, when $\varepsilon$ is sufficiently small.
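The basic sparsification primitive can be sketched as follows: keep each entry independently with probability p and rescale by 1/p, giving an unbiased sparse sketch of the tensor. Uniform sampling is assumed here purely for illustration; the paper's scheme selects entries more carefully to meet the stated sample size bounds.

```python
import numpy as np

def sparsify(T, p, rng):
    """Keep each entry of T independently with probability p and rescale
    by 1/p, yielding an unbiased sparse sketch of T.  Uniform sampling,
    for illustration only."""
    mask = rng.random(T.shape) < p
    return np.where(mask, T, 0.0) / p

rng = np.random.default_rng(2)
d, r = 30, 2
U, V, W = (rng.standard_normal((r, d)) for _ in range(3))
T = np.einsum('ri,rj,rk->ijk', U, V, W)   # order-3 tensor, rank <= r

S = sparsify(T, p=0.3, rng=rng)
kept = (S != 0).mean()                    # fraction of entries retained, ~0.3
# Unbiasedness check: averaging independent sketches recovers T.
avg = np.mean([sparsify(T, 0.3, rng) for _ in range(200)], axis=0)
rel_err = np.linalg.norm(avg - T) / np.linalg.norm(T)
```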

On the Optimality of Kernel-Embedding Based Goodness-of-Fit Tests

no code implementations24 Sep 2017 Krishnakumar Balasubramanian, Tong Li, Ming Yuan

The reproducing kernel Hilbert space (RKHS) embedding of distributions offers a general and flexible framework for testing problems in arbitrary domains and has attracted a considerable amount of attention in recent years.

Characterizing Spatiotemporal Transcriptome of Human Brain via Low Rank Tensor Decomposition

1 code implementation24 Feb 2017 Tianqi Liu, Ming Yuan, Hongyu Zhao

An application of our method to a spatiotemporal brain expression data provides insights on gene regulation patterns in the brain.


On Polynomial Time Methods for Exact Low Rank Tensor Completion

no code implementations22 Feb 2017 Dong Xia, Ming Yuan

In this paper, we investigate the sample size requirement for exact recovery of a high order tensor of low rank from a subset of its entries.

Non-Convex Projected Gradient Descent for Generalized Low-Rank Tensor Regression

no code implementations30 Nov 2016 Han Chen, Garvesh Raskutti, Ming Yuan

The two main differences between the convex and non-convex approaches are: (i) from a computational perspective, whether the non-convex projection operator is computable and whether the projection has desirable contraction properties, and (ii) from a statistical upper bound perspective, the non-convex approach achieves a superior rate for a number of examples.

Incoherent Tensor Norms and Their Applications in Higher Order Tensor Completion

no code implementations10 Jun 2016 Ming Yuan, Cun-Hui Zhang

In this paper, we investigate the sample size requirement for a general class of nuclear norm minimization methods for higher order tensor completion.

Human Memory Search as Initial-Visit Emitting Random Walk

no code implementations NeurIPS 2015 Kwang-Sung Jun, Jerry Zhu, Timothy T. Rogers, Zhuoran Yang, Ming Yuan

In this paper, we propose the first efficient maximum likelihood estimate (MLE) for INVITE by decomposing the censored output into a series of absorbing random walks.

Minimax Optimal Rates of Estimation in High Dimensional Additive Models: Universal Phase Transition

no code implementations10 Mar 2015 Ming Yuan, Ding-Xuan Zhou

We establish minimax optimal rates of convergence for estimation in a high dimensional additive model assuming that it is approximately sparse.

Additive models

Distance Shrinkage and Euclidean Embedding via Regularized Kernel Estimation

no code implementations17 Sep 2014 Luwan Zhang, Grace Wahba, Ming Yuan

Although recovering a Euclidean distance matrix from noisy observations is a common problem in practice, how well this can be done remains largely unknown.

Rate-Optimal Detection of Very Short Signal Segments

no code implementations10 Jul 2014 T. Tony Cai, Ming Yuan

Motivated by a range of applications in engineering and genomics, we consider in this paper detection of very short signal segments in three settings: signals with known shape, arbitrary signals, and smooth signals.

On Tensor Completion via Nuclear Norm Minimization

no code implementations7 May 2014 Ming Yuan, Cun-Hui Zhang

To establish our results, we develop a series of algebraic and probabilistic techniques, such as a characterization of the subdifferential of the tensor nuclear norm and concentration inequalities for tensor martingales, which may be of independent interest and could be useful in other tensor related problems.

Matrix Completion

Learning Networks of Heterogeneous Influence

no code implementations NeurIPS 2012 Nan Du, Le Song, Ming Yuan, Alex J. Smola

However, the underlying transmission networks are often hidden and incomplete, and we observe only the time stamps when cascades of events happen.
