no code implementations • 8 Jan 2024 • T. Tony Cai, Dong Xia, Mengyue Zha
Estimating a covariance matrix and its associated principal components is a fundamental problem in contemporary statistics.
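The covariance/principal-component pipeline referred to here can be sketched in a few lines of numpy. This is an illustrative sketch on synthetic data, not the estimator developed in the paper; the sizes `n, d` are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5                             # toy sizes, illustrative only
X = rng.normal(size=(n, d))               # synthetic i.i.d. samples

S = np.cov(X, rowvar=False)               # d x d sample covariance
eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]         # re-sort to descending
principal_components = eigvecs[:, order]  # columns = principal directions
```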
no code implementations • 1 Dec 2023 • Wanteng Ma, Lilun Du, Dong Xia, Ming Yuan
Many important tasks of large-scale recommender systems can be naturally cast as testing multiple linear forms for noisy matrix completion.
no code implementations • 27 Nov 2023 • Zhongyuan Lyu, Ting Li, Dong Xia
Under the mixture multi-layer stochastic block model (MMSBM), we show that the minimax optimal network clustering error rate takes an exponential form and is characterized by the Rényi divergence between the edge probability distributions of the component networks.
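For intuition on the divergence driving this error rate: between two edge probability distributions Bernoulli($p$) and Bernoulli($q$), the order-$\alpha$ Rényi divergence has a simple closed form. A minimal numpy sketch; the default $\alpha = 1/2$ is an illustrative choice, not taken from the paper.

```python
import numpy as np

def renyi_bernoulli(p, q, alpha=0.5):
    """Order-alpha Renyi divergence between Bernoulli(p) and Bernoulli(q):
    D_alpha = log(p^a q^(1-a) + (1-p)^a (1-q)^(1-a)) / (alpha - 1)."""
    s = p**alpha * q**(1 - alpha) + (1 - p)**alpha * (1 - q)**(1 - alpha)
    return np.log(s) / (alpha - 1.0)
```

At $\alpha = 1/2$ the divergence is symmetric in $(p, q)$, and it vanishes exactly when the two edge distributions coincide.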
no code implementations • 2 Nov 2023 • Wanteng Ma, Dong Xia, Jiashuo Jiang
We study the contextual bandits with knapsack (CBwK) problem under the high-dimensional setting where the dimension of the feature is large.
no code implementations • 6 Jun 2023 • Jian-Feng Cai, Jingyang Li, Dong Xia
Under the fixed step size regime, a fascinating trilemma concerning the convergence rate, statistical error rate, and regret is observed.
no code implementations • 10 May 2023 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia
The algorithm is not only computationally efficient with linear convergence but also statistically optimal, whether the noise is Gaussian or heavy-tailed with a finite $1+\varepsilon$ moment.
1 code implementation • 9 Feb 2023 • Ting Li, Zhongyuan Lyu, Chenyu Ren, Dong Xia
This paper develops an R package rMultiNet to analyze multilayer network data.
no code implementations • 28 Sep 2022 • Tianxi Cai, Dong Xia, Luwan Zhang, Doudou Zhou
Network analysis has been a powerful tool to unveil relationships and interactions among a large number of objects.
no code implementations • 1 Sep 2022 • Wanteng Ma, Ying Cao, Danny H. K. Tsang, Dong Xia
This paper introduces a dual-based algorithm framework for solving the regularized online resource allocation problems, which have potentially non-concave cumulative rewards, hard resource constraints, and a non-separable regularizer.
1 code implementation • 16 Aug 2022 • Meijia Shao, Dong Xia, Yuan Zhang, Qiong Wu, Shuo Chen
Two-sample hypothesis testing for network comparison presents many significant challenges, including: leveraging repeated network observations and known node registration when available, without requiring them in order to operate; relaxing strong structural assumptions; achieving finite-sample higher-order accuracy; handling different network sizes and sparsity levels; fast computation and memory parsimony; controlling the false discovery rate (FDR) in multiple testing; and theoretical understanding, particularly of finite-sample accuracy and minimax optimality.
no code implementations • 11 Jul 2022 • Zhongyuan Lyu, Dong Xia
As in Gaussian mixture models (GMMs), the minimax optimal clustering error rate is determined by the separation strength, i.e., the minimal distance between population center matrices.
no code implementations • 2 Mar 2022 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia
Lastly, RsGrad is applicable for low-rank tensor estimation under heavy-tailed noise where a statistically optimal rate is attainable with the same phenomenon of dual-phase convergence, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization.
no code implementations • 22 Jan 2022 • Zhongyuan Lyu, Dong Xia
If the signal is stronger than a certain threshold, called the computational limit, we design a computationally fast estimator based on spectral aggregation and demonstrate its minimax optimality.
no code implementations • 27 Aug 2021 • Jian-Feng Cai, Jingyang Li, Dong Xia
In this paper, we provide, to the best of our knowledge, the first theoretical guarantees for the convergence of the RGrad algorithm for TT-format tensor completion, under a nearly optimal sample size condition.
no code implementations • 30 Jun 2021 • Zhongyuan Lyu, Dong Xia, Yuan Zhang
We formulate the relationship between the latent positions and the observed data via a generalized multilinear kernel as the link function.
no code implementations • 29 Dec 2020 • Dong Xia, Anru R. Zhang, Yuchen Zhou
In all these models, we observe that different from many matrix/vector settings in existing work, debiasing is not required to establish the asymptotic distribution of estimates or to make statistical inference on low-rank tensors.
1 code implementation • 14 Apr 2020 • Yuan Zhang, Dong Xia
In this paper, we present the first higher-order accurate approximation to the sampling CDF of a studentized network moment by Edgeworth expansion.
no code implementations • 10 Feb 2020 • Bing-Yi Jing, Ting Li, Zhongyuan Lyu, Dong Xia
We show that the TWIST procedure can accurately detect the communities with small misclassification error as the number of nodes and/or the number of layers increases.
no code implementations • 31 Aug 2019 • Dong Xia, Ming Yuan
We introduce a flexible framework for making inferences about general linear forms of a large matrix based on noisy observations of a subset of its entries.
no code implementations • 2 Jan 2019 • Dong Xia
Our contributions are three-fold.
no code implementations • 24 Aug 2018 • Dong Xia
This note displays an interesting phenomenon for percentiles of independent but non-identical random variables.
no code implementations • 24 May 2018 • Dong Xia
We investigate the distribution of the joint projection distance between the empirical singular subspace and the unknown true singular subspace.
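The projection distance between two singular subspaces can be computed directly from their orthogonal projectors. A minimal numpy sketch, with illustrative dimensions; note the distance is invariant to the choice of orthonormal basis within a subspace.

```python
import numpy as np

def projection_distance(U, Uhat):
    """Frobenius norm of the difference of the orthogonal projectors
    onto the column spaces of U and Uhat (orthonormal columns assumed)."""
    return np.linalg.norm(U @ U.T - Uhat @ Uhat.T, 'fro')

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(10, 3)))   # a rank-3 subspace of R^10
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # rotation within the subspace
basis_invariance = projection_distance(U, U @ Q)  # same subspace -> distance 0
```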
no code implementations • 14 Nov 2017 • Dong Xia, Ming Yuan, Cun-Hui Zhang
To fill this void, in this article we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low-rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm, which suggest significant room for improvement over existing approaches.
no code implementations • 31 Oct 2017 • Dong Xia, Ming Yuan
In particular, we show that for a $k$th order $d\times\cdots\times d$ cubic tensor of {\it stable rank} $r_s$, the sample size requirement for achieving a relative error $\varepsilon$ is, up to a logarithmic factor, of the order $r_s^{1/2} d^{k/2} /\varepsilon$ when $\varepsilon$ is relatively large, and of the order $r_s d /\varepsilon^2$, which is essentially optimal, when $\varepsilon$ is sufficiently small.
no code implementations • 5 Jul 2017 • Dong Xia, Fan Zhou
In addition, the bounds established for HOSVD also yield one-sided sup-norm perturbation bounds for the singular subspaces of unbalanced (or fat) matrices.
no code implementations • 8 Mar 2017 • Anru Zhang, Dong Xia
In this paper, we propose a general framework for tensor singular value decomposition (tensor SVD), which focuses on the methodology and theory for extracting the hidden low-rank structure from high-dimensional tensor data.
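One classical baseline for extracting low-rank structure from a tensor is the truncated higher-order SVD (HOSVD): take the leading left singular vectors of each mode unfolding, then form the induced core. The sketch below is a generic numpy illustration of that baseline, not the estimator proposed in the paper.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: rows indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode leading singular subspaces + core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        # Contract U^T against the current mode of the core.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def multilinear(core, factors):
    """Rebuild the full tensor by applying the factor matrices mode-wise."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

For a tensor whose multilinear ranks do not exceed `ranks`, this reconstruction is exact; in general it gives a quasi-optimal low-rank approximation.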
no code implementations • 22 Feb 2017 • Dong Xia, Ming Yuan
In this paper, we investigate the sample size requirement for exact recovery of a high order tensor of low rank from a subset of its entries.
no code implementations • 16 Oct 2016 • Dong Xia
First, we establish the minimax lower bounds in Schatten $p$-norms with $1\leq p\leq +\infty$ for low-rank density matrix estimation from Pauli measurements.
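For reference, the Schatten $p$-norm of a matrix is simply the $\ell_p$ norm of its singular values, so that $p=1$ gives the nuclear norm, $p=2$ the Frobenius norm, and $p=\infty$ the operator norm. A small illustrative numpy helper:

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm of A: the l_p norm of A's singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return float(s.max())               # operator (spectral) norm
    return float((s ** p).sum() ** (1.0 / p))
```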
no code implementations • 15 Apr 2016 • Dong Xia, Vladimir Koltchinskii
Let ${\mathcal S}_m$ be the set of all $m\times m$ density matrices (Hermitian positive semi-definite matrices of unit trace).
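Membership in ${\mathcal S}_m$ can be checked numerically from the three defining properties. A small sketch; the helper name and tolerance are illustrative choices, not from the paper.

```python
import numpy as np

def is_density_matrix(rho, tol=1e-10):
    """Check the defining properties of a density matrix:
    Hermitian, positive semi-definite, unit trace."""
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    psd = hermitian and np.all(np.linalg.eigvalsh(rho) >= -tol)
    unit_trace = np.isclose(np.trace(rho).real, 1.0, atol=tol)
    return bool(hermitian and psd and unit_trace)
```

For example, the maximally mixed state $I_m/m$ and any pure state $vv^*$ with $\|v\|_2 = 1$ pass, while the identity matrix (trace $m \ne 1$) fails.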
no code implementations • 17 Jul 2015 • Vladimir Koltchinskii, Dong Xia
The density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system.
no code implementations • 26 Dec 2014 • Dong Xia
Recent studies in the literature have paid much attention to sparsity in linear classification tasks.
no code implementations • 25 Mar 2014 • Dong Xia
We also give upper bounds and a matching minimax lower bound (up to logarithmic terms) for the estimation accuracy under the Schatten-$q$ norm for every $1\leq q\leq\infty$.