no code implementations • 8 Feb 2024 • Jungjun Choi, Ming Yuan
This paper studies the principal components (PC) estimator for high dimensional approximate factor models with weak factors, in the sense that the factor loading ($\boldsymbol{\Lambda}^0$) scales sublinearly in the number $N$ of cross-section units, i.e., $\boldsymbol{\Lambda}^{0\top} \boldsymbol{\Lambda}^0 / N^\alpha$ is positive definite in the limit for some $\alpha \in (0, 1)$.
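For context, the classical PC estimator referenced here computes factors and loadings from the top eigenvectors of the sample second-moment matrix. A minimal sketch under the usual normalizations (the dimensions, normalization, and toy data are illustrative assumptions, not the paper's weak-factor setup):

```python
import numpy as np

def pc_estimator(X, r):
    """Classical principal components estimator for X = F @ Lam.T + noise.

    X: (T, N) panel of observations; r: number of factors.
    Returns estimated factors F_hat (T, r) and loadings Lam_hat (N, r).
    """
    T, N = X.shape
    # Top-r eigenvectors of X X' / (N T), scaled by sqrt(T), give the estimated
    # factors under the standard F'F / T = I_r normalization.
    evals, evecs = np.linalg.eigh(X @ X.T / (N * T))
    idx = np.argsort(evals)[::-1][:r]
    F_hat = np.sqrt(T) * evecs[:, idx]
    Lam_hat = X.T @ F_hat / T  # least-squares loadings given the estimated factors
    return F_hat, Lam_hat

# toy usage
rng = np.random.default_rng(0)
T, N, r = 200, 100, 2
F = rng.normal(size=(T, r))
Lam = rng.normal(size=(N, r))
X = F @ Lam.T + rng.normal(size=(T, N))
F_hat, Lam_hat = pc_estimator(X, r)
```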
no code implementations • 1 Dec 2023 • Wanteng Ma, Lilun Du, Dong Xia, Ming Yuan
Many important tasks of large-scale recommender systems can be naturally cast as testing multiple linear forms for noisy matrix completion.
no code implementations • 2 Jul 2023 • Runshi Tang, Ming Yuan, Anru R. Zhang
The MOP-UP algorithm consists of two steps: Average Subspace Capture (ASC) and Alternating Projection (AP).
no code implementations • 31 Mar 2023 • Arnab Auddy, Ming Yuan
Our method is fairly easy to implement, and numerical experiments are presented to further demonstrate its practical merits.
no code implementations • 11 Nov 2021 • Shun Xu, Ming Yuan
In this note, we investigate how well we can reconstruct the best rank-$r$ approximation of a large matrix from a small number of its entries.
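One natural spectral baseline for this task rescales the zero-filled matrix by the inverse sampling rate and truncates the SVD; the sketch below illustrates that baseline and is not necessarily the procedure analyzed in the note:

```python
import numpy as np

def spectral_rank_r(Y, mask, r):
    """Spectral baseline for rank-r reconstruction from partial entries.

    Y: observed values with zeros at unobserved positions; mask: Boolean
    observation pattern; r: target rank.
    """
    p_hat = mask.mean()  # estimated sampling rate
    U, s, Vt = np.linalg.svd(Y / p_hat, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```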
no code implementations • 20 Jul 2021 • Arnab Auddy, Ming Yuan
In this paper, we study the estimation of a rank-one spiked tensor in the presence of heavy tailed noise.
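Tensor power iteration is the standard device for estimating such a spike; a minimal sketch for a symmetric third-order tensor (the robustification required for heavy tailed noise is not reflected here):

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Estimate the leading rank-one component of a symmetric 3rd-order tensor T.

    Iterates u <- T(., u, u) / ||T(., u, u)|| from a random start.
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(size=T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, u, u)  # contract along two modes
        u = v / np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # estimated signal strength
    return lam, u
```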
no code implementations • 7 Feb 2021 • Ming Yuan, Vikas Kumar, Muhammad Aurangzeb Ahmad, Ankur Teredesai
Fairness in AI and machine learning systems has become a fundamental problem in the accountability of AI systems.
no code implementations • 10 Dec 2020 • Nabarun Deb, Rajarshi Mukherjee, Sumit Mukherjee, Ming Yuan
In this paper, we study the effect of dependence on detecting a class of signals in Ising models, where the signals are present in a structured way.
Probability • Statistics Theory • 62G10, 62G20, 62C20
no code implementations • 6 Aug 2020 • Yuetian Luo, Garvesh Raskutti, Ming Yuan, Anru R. Zhang
A rate-matching deterministic lower bound for tensor reconstruction, which demonstrates the optimality of HOOI, is also provided.
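HOOI (higher-order orthogonal iteration) is the classical low-multilinear-rank procedure whose optimality is being certified; a minimal numpy sketch for a third-order tensor, with HOSVD initialization and an illustrative iteration count:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a 3rd-order tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iter=20):
    """Higher-order orthogonal iteration for a Tucker approximation.

    ranks: (r1, r2, r3) target multilinear ranks; returns factor matrices U1, U2, U3.
    """
    # HOSVD initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :ranks[m]]
         for m in range(3)]
    for _ in range(n_iter):
        for m in range(3):
            # Project T onto the current subspaces of the other two modes ...
            G = T
            for k in range(3):
                if k != m:
                    G = np.tensordot(G, U[k], axes=([k], [0]))
                    G = np.moveaxis(G, -1, k)
            # ... then refresh the mode-m factor from the projected tensor.
            U[m] = np.linalg.svd(unfold(G, m), full_matrices=False)[0][:, :ranks[m]]
    return U
```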
no code implementations • 17 Jul 2020 • Arnab Auddy, Ming Yuan
We develop deterministic perturbation bounds for singular values and vectors of orthogonally decomposable tensors, in a spirit similar to classical results for matrices such as those due to Weyl, Davis, Kahan and Wedin.
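For orientation, the classical matrix statements being generalized (not the paper's tensor bounds) read as follows: for $\hat{A} = A + E$, Weyl's inequality gives $|\sigma_i(\hat{A}) - \sigma_i(A)| \le \|E\|_{\mathrm{op}}$ for every $i$, while Davis-Kahan/Wedin type bounds control the singular subspaces, $\max\{\|\sin\Theta(\hat{U},U)\|,\ \|\sin\Theta(\hat{V},V)\|\} \le C\,\|E\|_{\mathrm{op}}/\delta$, where $\delta$ is the gap between the retained and discarded singular values of $A$ and $C$ is an absolute constant.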
no code implementations • 9 Nov 2019 • Anru Zhang, Yuetian Luo, Garvesh Raskutti, Ming Yuan
In this paper, we develop a novel procedure for low-rank tensor regression, namely Importance Sketching Low-rank Estimation for Tensors (ISLET).
no code implementations • 7 Sep 2019 • Tong Li, Ming Yuan
In addition, our analysis pinpoints the importance of choosing a diverging scaling parameter when using Gaussian kernels and suggests a data-driven choice of the scaling parameter that yields tests that are optimal, up to an iterated logarithmic factor, over a wide range of smooth alternatives.
no code implementations • 31 Aug 2019 • Dong Xia, Ming Yuan
We introduce a flexible framework for making inferences about general linear forms of a large matrix based on noisy observations of a subset of its entries.
no code implementations • 13 Feb 2018 • Weishong Dong, Ming Yuan, Xin Li, Guangming Shi
Image demosaicing, one of the most important early stages in digital camera pipelines, addresses the problem of reconstructing a full-resolution color image from measurements taken through a so-called color filter array.
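As a point of reference only (not the method proposed in the paper), classical bilinear demosaicing of an RGGB Bayer mosaic can be written with three convolution kernels; the CFA layout below is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaicing of a single-channel RGGB Bayer mosaic.

    raw: (H, W) array with R at (0,0), G at (0,1)/(1,0), B at (1,1) within
    every 2x2 block. Returns an (H, W, 3) RGB estimate.
    """
    H, W = raw.shape
    r_mask = np.zeros((H, W))
    r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((H, W))
    b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Standard bilinear interpolation kernels for the Bayer pattern.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    return np.stack([
        convolve(raw * r_mask, k_rb, mode='mirror'),
        convolve(raw * g_mask, k_g, mode='mirror'),
        convolve(raw * b_mask, k_rb, mode='mirror'),
    ], axis=-1)
```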
no code implementations • 20 Nov 2017 • Ronak Mehta, Hyunwoo J. Kim, Shulei Wang, Sterling C. Johnson, Ming Yuan, Vikas Singh
Recent results in coupled or temporal graphical models offer schemes for estimating the relationship structure between features when the data come from related (but distinct) longitudinal sources.
no code implementations • 14 Nov 2017 • Dong Xia, Ming Yuan, Cun-Hui Zhang
To fill this void, in this article we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm, which suggest significant room for improvement over existing approaches.
no code implementations • 31 Oct 2017 • Dong Xia, Ming Yuan
In particular, we show that for a $k$th order $d\times\cdots\times d$ cubic tensor of stable rank $r_s$, the sample size requirement for achieving a relative error $\varepsilon$ is, up to a logarithmic factor, of the order $r_s^{1/2} d^{k/2} /\varepsilon$ when $\varepsilon$ is relatively large, and of the order $r_s d /\varepsilon^2$, which is essentially optimal, when $\varepsilon$ is sufficiently small.
no code implementations • 24 Sep 2017 • Krishnakumar Balasubramanian, Tong Li, Ming Yuan
The reproducing kernel Hilbert space (RKHS) embedding of distributions offers a general and flexible framework for testing problems in arbitrary domains and has attracted considerable attention in recent years.
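The canonical instance of this framework is the kernel maximum mean discrepancy (MMD) two-sample test; a minimal sketch of the unbiased MMD^2 statistic with a Gaussian kernel (the bandwidth below is illustrative):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X (n, d) and Y (m, d)
    under the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    def gram(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()
```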
1 code implementation • 24 Feb 2017 • Tianqi Liu, Ming Yuan, Hongyu Zhao
An application of our method to a spatiotemporal brain expression data provides insights on gene regulation patterns in the brain.
Methodology
no code implementations • 22 Feb 2017 • Dong Xia, Ming Yuan
In this paper, we investigate the sample size requirement for exact recovery of a high order tensor of low rank from a subset of its entries.
no code implementations • 30 Nov 2016 • Han Chen, Garvesh Raskutti, Ming Yuan
The two main differences between the convex and non-convex approaches are: (i) from a computational perspective, whether the non-convex projection operator is computable and whether the projection has desirable contraction properties; and (ii) from a statistical upper bound perspective, the non-convex approach attains a superior rate in a number of examples.
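In the matrix special case, the non-convex projection in question is rank truncation via the SVD, which is computable and maps onto the rank-$r$ set; a minimal projected-gradient sketch for low-rank trace regression with squared loss, offered as an illustration of the general recipe rather than the paper's tensor algorithm:

```python
import numpy as np

def project_rank(B, r):
    """Non-convex projection: nearest rank-r matrix in Frobenius norm."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def projected_gd(X, y, shape, r, step=1e-3, n_iter=500):
    """Projected gradient descent for y_i ~ <X_i, B> with rank(B) <= r.

    X: (n, p, q) design matrices; y: (n,) responses; shape = (p, q).
    """
    B = np.zeros(shape)
    for _ in range(n_iter):
        resid = np.einsum('ipq,pq->i', X, B) - y          # fitted minus observed
        grad = np.einsum('i,ipq->pq', resid, X) / len(y)  # gradient of squared loss
        B = project_rank(B - step * grad, r)              # gradient step, then projection
    return B
```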
no code implementations • 10 Jun 2016 • Ming Yuan, Cun-Hui Zhang
In this paper, we investigate the sample size requirement for a general class of nuclear norm minimization methods for higher order tensor completion.
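For intuition, the matrix analogue of such programs is nuclear norm minimization subject to agreement on the observed entries; a minimal cvxpy sketch (the tensor nuclear norms studied in the paper are not captured by this matrix case):

```python
import cvxpy as cp
import numpy as np

def nuclear_norm_complete(Y, mask):
    """Matrix completion by nuclear norm minimization over an observation pattern.

    Y: (m, n) array of observed values (anything at unobserved positions);
    mask: Boolean array marking observed entries.
    """
    M = mask.astype(float)
    X = cp.Variable(Y.shape)
    # Minimize the nuclear norm subject to matching the observed entries.
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [cp.multiply(M, X) == M * Y])
    prob.solve()
    return X.value
```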
no code implementations • NeurIPS 2015 • Kwang-Sung Jun, Jerry Zhu, Timothy T. Rogers, Zhuoran Yang, Ming Yuan
In this paper, we propose the first efficient maximum likelihood estimate (MLE) for INVITE by decomposing the censored output into a series of absorbing random walks.
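The underlying computation is standard absorbing-chain algebra: with already-produced items treated as transient states, the probability that the walk next emits a given new item is obtained from the fundamental matrix $(I - Q)^{-1}$. The sketch below illustrates that step under assumed notation and interface; it is not the paper's code:

```python
import numpy as np

def next_item_probs(P, visited, current):
    """Probability that a random walk with transition matrix P, started at
    `current` and censored to report only first visits, next reports each
    unvisited item. Visited items are transient states; unvisited items absorb.
    """
    visited = list(visited)
    unvisited = [j for j in range(P.shape[0]) if j not in visited]
    Q = P[np.ix_(visited, visited)]    # transient -> transient transitions
    R = P[np.ix_(visited, unvisited)]  # transient -> absorbing transitions
    A = np.linalg.solve(np.eye(len(visited)) - Q, R)  # absorption probabilities
    start = visited.index(current)
    return dict(zip(unvisited, A[start]))
```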
no code implementations • 10 Mar 2015 • Ming Yuan, Ding-Xuan Zhou
We establish minimax optimal rates of convergence for estimation in a high dimensional additive model assuming that it is approximately sparse.
no code implementations • 17 Sep 2014 • Luwan Zhang, Grace Wahba, Ming Yuan
Although recovering a Euclidean distance matrix from noisy observations is a common problem in practice, how well this can be done remains largely unknown.
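A standard way to map a (noisy) Euclidean distance matrix back to a point configuration is classical multidimensional scaling via double centering; a minimal sketch for context, not the estimator analyzed in the paper:

```python
import numpy as np

def classical_mds(D, dim):
    """Classical MDS: recover a dim-dimensional configuration from an n x n
    matrix of pairwise dissimilarities D with D[i, j] ~ ||x_i - x_j||."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:dim]
    L = np.sqrt(np.clip(evals[idx], 0, None))  # guard against small negative eigenvalues
    return evecs[:, idx] * L
```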
no code implementations • 10 Jul 2014 • T. Tony Cai, Ming Yuan
Motivated by a range of applications in engineering and genomics, we consider in this paper detection of very short signal segments in three settings: signals with known shape, arbitrary signals, and smooth signals.
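For the simplest of these settings, the textbook tool is a scan statistic that maximizes standardized sums over short windows; a minimal sketch, assuming a length-$L$ window and roughly unit-variance noise:

```python
import numpy as np

def scan_statistic(y, L):
    """Scan statistic for a length-L elevated segment in a noisy sequence y:
    the maximum standardized window sum over all contiguous windows."""
    window_sums = np.convolve(y, np.ones(L), mode='valid')
    return window_sums.max() / np.sqrt(L)
```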
no code implementations • 7 May 2014 • Ming Yuan, Cun-Hui Zhang
To establish our results, we develop a series of algebraic and probabilistic techniques, such as a characterization of the subdifferential of the tensor nuclear norm and concentration inequalities for tensor martingales, which may be of independent interest and could be useful in other tensor-related problems.
no code implementations • NeurIPS 2012 • Nan Du, Le Song, Ming Yuan, Alex J. Smola
However, the underlying transmission networks are often hidden and incomplete, and we observe only the time stamps when cascades of events happen.