Search Results for author: Cun-Hui Zhang

Found 22 papers, 4 papers with code

Uniform Inference for Nonlinear Endogenous Treatment Effects with High-Dimensional Covariates

1 code implementation 12 Oct 2023 Qingliang Fan, Zijian Guo, Ziwei Mei, Cun-Hui Zhang

In this paper, we propose new estimation and inference procedures for nonparametric treatment effect functions with endogeneity and potentially high-dimensional covariates.

Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference

1 code implementation NeurIPS 2023 Licong Lin, Mufang Ying, Suvrojit Ghosh, Koulik Khamaru, Cun-Hui Zhang

Even in linear models, the Ordinary Least Squares (OLS) estimator may fail to exhibit asymptotic normality for single-coordinate estimation and may have inflated error.
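The phenomenon is easy to reproduce in simulation. Below is a minimal sketch (not from the paper; function names and parameters are illustrative) of adaptively collected two-armed bandit data under an epsilon-greedy rule. Because the arm chosen at time $t$ depends on past rewards, each arm's OLS estimate (here, a per-arm sample mean) is computed from a data-dependent sample, which is what can break asymptotic normality.

```python
import numpy as np

def collect_adaptively(n, mu, eps, rng):
    """Epsilon-greedy two-armed bandit: each arm choice depends on past rewards."""
    arms, rewards = np.empty(n, dtype=int), np.empty(n)
    sums, counts = np.zeros(2), np.zeros(2)
    for t in range(n):
        if t < 2:
            a = t                                   # pull each arm once to start
        elif rng.random() < eps:
            a = int(rng.integers(2))                # explore
        else:
            a = int(np.argmax(sums / counts))       # greedy on running means
        r = mu[a] + rng.standard_normal()           # Gaussian reward noise
        arms[t], rewards[t] = a, r
        sums[a] += r
        counts[a] += 1
    return arms, rewards

def ols_arm_means(arms, rewards):
    """In this design, OLS reduces to the per-arm sample means."""
    return np.array([rewards[arms == a].mean() for a in (0, 1)])
```

Repeating this over many runs and inspecting the centered, scaled estimates for the suboptimal arm illustrates the non-normal limiting behavior the paper studies.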

Adaptive Linear Estimating Equations

1 code implementation NeurIPS 2023 Mufang Ying, Koulik Khamaru, Cun-Hui Zhang

Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data gathering processes.

Multi-Armed Bandits

Rate-Optimal Subspace Estimation on Random Graphs

no code implementations NeurIPS 2021 Zhixin Zhou, Fan Zhou, Ping Li, Cun-Hui Zhang

We show that the performance of estimating the connectivity matrix $M$ depends on the sparsity of the graph.
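As a toy illustration of the setting (a simplified sketch, not the paper's rate-optimal procedure; `sample_sbm` and `leading_subspace` are hypothetical helper names), one can estimate the leading eigenspace of a random graph drawn from a stochastic block model:

```python
import numpy as np

def sample_sbm(z, B, rng):
    """Symmetric adjacency matrix of a stochastic block model with labels z."""
    P = B[np.ix_(z, z)]                              # entrywise edge probabilities
    upper = np.triu(rng.random(P.shape) < P, 1).astype(float)
    return upper + upper.T

def leading_subspace(A, r):
    """Top-r eigenvectors of A by eigenvalue magnitude (spectral estimate)."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(np.abs(vals))[::-1]
    return vecs[:, order[:r]]
```

In a two-block model the estimated subspace should align with the span of the block indicator vectors, with accuracy degrading as the graph becomes sparser, which is the dependence on sparsity the abstract refers to.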

Tensor Principal Component Analysis in High Dimensional CP Models

no code implementations 10 Aug 2021 Yuefeng Han, Cun-Hui Zhang

It is designed to improve the alternating least squares estimator and other forms of the high order orthogonal iteration for tensors with low or moderately high CP ranks, and it is guaranteed to converge rapidly when the error of any given initial estimator is bounded by a small constant.

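For intuition, the basic alternating update that such procedures refine can be sketched as a rank-1 CP power iteration (an illustrative sketch, not the paper's estimator; the function name is hypothetical):

```python
import numpy as np

def rank1_power_iteration(T, n_iter=50, seed=0):
    """Alternating power iteration for a rank-1 CP approximation of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(d) for d in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(n_iter):
        # Each step contracts T against the other two factors, then renormalizes.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)     # estimated CP weight
    return lam, u, v, w
```

The abstract's point is precisely that guarantees for such iterations hinge on a good enough initial estimator; random initialization, as used in this toy sketch, can stall for higher CP ranks or heavy noise.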

Asymptotic normality of robust $M$-estimators with convex penalty

no code implementations 8 Jul 2021 Pierre C Bellec, Yiwei Shen, Cun-Hui Zhang

This paper develops asymptotic normality results for individual coordinates of robust M-estimators with convex penalty in high dimensions, where the dimension $p$ is at most of the same order as the sample size $n$, i.e., $p/n\le\gamma$ for some fixed constant $\gamma>0$.

De-Biasing The Lasso With Degrees-of-Freedom Adjustment

no code implementations 24 Feb 2019 Pierre C. Bellec, Cun-Hui Zhang

This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by Lasso.

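The construction can be sketched as follows. This is a simplified, hypothetical rendering in which the factor $(1-\hat{s}/n)^{-1}$, with $\hat{s}$ the size of the Lasso support, plays the role of the degrees-of-freedom adjustment; it is not the paper's exact estimator, and the plain coordinate-descent Lasso below is only a stand-in solver.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + alpha*||b||_1."""
    n, p = X.shape
    beta, r = np.zeros(p), y.astype(float).copy()
    scale = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]                       # add back j's contribution
            beta[j] = soft(X[:, j] @ r / n, alpha) / scale[j]
            r -= X[:, j] * beta[j]
    return beta

def debiased_lasso_coord(X, y, j, alpha):
    """De-biased Lasso for coordinate j with a df-adjustment factor (sketch)."""
    n, p = X.shape
    beta = lasso_cd(X, y, alpha)
    s_hat = int((beta != 0).sum())                       # Lasso degrees of freedom
    others = [k for k in range(p) if k != j]
    gamma = lasso_cd(X[:, others], X[:, j], alpha)       # nodewise regression
    z = X[:, j] - X[:, others] @ gamma                   # score vector for x_j
    resid = y - X @ beta
    correction = (z @ resid) / (z @ X[:, j]) / (1.0 - s_hat / n)
    return beta[j] + correction
```

Without the $(1-\hat{s}/n)^{-1}$ factor this reduces to the classical de-biased Lasso; the adjustment matters when the selected model is not negligibly small relative to $n$.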

Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries

no code implementations 14 Nov 2017 Dong Xia, Ming Yuan, Cun-Hui Zhang

To fill this void, in this article we characterize the fundamental statistical limits of noisy tensor completion by establishing minimax optimal rates of convergence for estimating a $k$th order low rank tensor under the general $\ell_p$ ($1\le p\le 2$) norm; these rates suggest significant room for improvement over existing approaches.

Theory of the GMM Kernel

no code implementations 1 Aug 2016 Ping Li, Cun-Hui Zhang

We establish the theoretical limit of the GMM kernel and prove a consistency result, assuming that the data follow an elliptical distribution, a very general family of distributions that includes the multivariate $t$-distribution as a special case.

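The GMM (generalized min-max) kernel itself is simple to state: split each coordinate into its positive and negative parts, then take the ratio of coordinatewise minima to maxima. A minimal sketch:

```python
import numpy as np

def gmm_kernel(u, v):
    """Generalized min-max (GMM) kernel via the positive/negative split.

    Assumes at least one of u, v has a nonzero coordinate (else division by zero).
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    uu = np.concatenate([np.maximum(u, 0), np.maximum(-u, 0)])
    vv = np.concatenate([np.maximum(v, 0), np.maximum(-v, 0)])
    return np.minimum(uu, vv).sum() / np.maximum(uu, vv).sum()
```

For nonnegative data this reduces to the classical min-max kernel; the split is what extends it to general real-valued vectors.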

Incoherent Tensor Norms and Their Applications in Higher Order Tensor Completion

no code implementations 10 Jun 2016 Ming Yuan, Cun-Hui Zhang

In this paper, we investigate the sample size requirement for a general class of nuclear norm minimization methods for higher order tensor completion.

Compressed Sensing with Very Sparse Gaussian Random Projections

no code implementations 11 Aug 2014 Ping Li, Cun-Hui Zhang

We have developed two estimators: (i) the tie estimator, and (ii) the absolute minimum estimator.

On Tensor Completion via Nuclear Norm Minimization

no code implementations 7 May 2014 Ming Yuan, Cun-Hui Zhang

To establish our results, we develop a series of algebraic and probabilistic techniques, such as a characterization of the subdifferential of the tensor nuclear norm and concentration inequalities for tensor martingales, which may be of independent interest and could be useful in other tensor-related problems.

Matrix Completion

Sparse Recovery with Very Sparse Compressed Counting

no code implementations 31 Dec 2013 Ping Li, Cun-Hui Zhang, Tong Zhang

In this paper, we adopt very sparse Compressed Counting for nonnegative signal recovery.

Compressed Counting Meets Compressed Sensing

no code implementations 3 Oct 2013 Ping Li, Cun-Hui Zhang, Tong Zhang

In particular, as $p\to 0$ the required number of measurements is essentially $M=K\log N$, where $K$ is the number of nonzero coordinates of the signal.

Asymptotic normality and optimalities in estimation of large Gaussian graphical models

no code implementations 24 Sep 2013 Zhao Ren, Tingni Sun, Cun-Hui Zhang, Harrison H. Zhou

This paper considers a fundamental question: When is it possible to estimate low-dimensional parameters at parametric square-root rate in a large Gaussian graphical model?


Calibrated Elastic Regularization in Matrix Completion

no code implementations NeurIPS 2012 Tingni Sun, Cun-Hui Zhang

This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices.

Matrix Completion

One Permutation Hashing

no code implementations NeurIPS 2012 Ping Li, Art Owen, Cun-Hui Zhang

While minwise hashing is promising for large-scale learning on massive binary data, the preprocessing cost is prohibitive, as it requires applying (e.g.) $k=500$ permutations to the data.
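One-permutation hashing replaces the $k$ independent permutations with a single permutation whose range is split into $k$ bins, keeping the minimum permuted index within each bin. A minimal sketch, assuming for simplicity that $k$ divides the dimension (the function name is illustrative):

```python
import numpy as np

def one_permutation_hash(nonzero_idx, dim, k, rng):
    """One-permutation hashing sketch of a binary vector in {0,1}^dim.

    nonzero_idx : indices of the 1s in the vector.
    Returns k bin values; -1 marks an empty bin.
    """
    perm = rng.permutation(dim)                  # the single shared permutation
    pos = perm[np.asarray(nonzero_idx)]          # permuted positions of the 1s
    bins = np.full(k, -1, dtype=int)
    bin_size = dim // k                          # assumes k divides dim
    for p in np.sort(pos):
        b = p // bin_size
        if bins[b] == -1:                        # keep only the minimum per bin
            bins[b] = p % bin_size
    return bins
```

Two sets sketched with the same permutation (same `rng` seed) can then be compared bin by bin, ignoring jointly empty bins, to estimate their Jaccard similarity at a fraction of the preprocessing cost of $k$ independent permutations.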

Entropy Estimations Using Correlated Symmetric Stable Random Projections

no code implementations NeurIPS 2012 Ping Li, Cun-Hui Zhang

Methods for efficiently estimating the Shannon entropy of data streams have important applications in learning, data mining, and network anomaly detection (e.g., DDoS attacks).

Optimality of Graphlet Screening in High Dimensional Variable Selection

no code implementations 29 Apr 2012 Jiashun Jin, Cun-Hui Zhang, Qi Zhang

Compared to $m$-variate brute-force screening, which has a computational cost of $p^m$, GS only has a computational cost of $p$ (up to some multi-$\log(p)$ factors) in screening.

Variable Selection

Sparse Matrix Inversion with Scaled Lasso

no code implementations 13 Feb 2012 Tingni Sun, Cun-Hui Zhang

The penalty level of the scaled Lasso for each column is completely determined by data via convex minimization, without using cross-validation.
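The alternating scheme behind the scaled Lasso is short to sketch: fix $\sigma$, solve a Lasso with penalty level $\sigma\lambda_0$, update $\sigma$ as the residual root mean square, and repeat until the scale stabilizes. A self-contained sketch with a plain coordinate-descent Lasso as the inner solver (names, tolerances, and the stopping rule are illustrative):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + alpha*||b||_1."""
    n, p = X.shape
    beta, r = np.zeros(p), y.astype(float).copy()
    scale = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]                   # add back j's contribution
            beta[j] = soft(X[:, j] @ r / n, alpha) / scale[j]
            r -= X[:, j] * beta[j]
    return beta

def scaled_lasso(X, y, lam0=None, n_iter=20):
    """Alternate a Lasso fit with a noise-scale update (scaled Lasso sketch)."""
    n, p = X.shape
    if lam0 is None:
        lam0 = np.sqrt(2.0 * np.log(p) / n)          # universal penalty level
    sigma, beta = np.std(y), np.zeros(p)
    for _ in range(n_iter):
        beta = lasso_cd(X, y, sigma * lam0)          # penalty scales with sigma
        sigma_new = np.linalg.norm(y - X @ beta) / np.sqrt(n)
        if abs(sigma_new - sigma) < 1e-8:
            sigma = sigma_new
            break
        sigma = sigma_new
    return beta, sigma
```

The joint update is what makes the effective penalty $\sigma\lambda_0$ data-driven: no cross-validation is needed, only the fixed level $\lambda_0$, matching the abstract's claim.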

Nearly unbiased variable selection under minimax concave penalty

2 code implementations 25 Feb 2010 Cun-Hui Zhang

We propose MC+, a fast, continuous, nearly unbiased and accurate method of penalized variable selection in high-dimensional linear regression.

Statistics Theory 62J05, 62J07 (Primary), 62H12, 62H25 (Secondary)
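For reference, the penalty at the heart of MC+, the minimax concave penalty (MCP), and its univariate "firm" thresholding rule can be sketched as follows (a minimal sketch assuming $\gamma > 1$ and a standardized design for the threshold formula):

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty rho(t; lam, gamma), applied elementwise."""
    t = np.abs(np.asarray(t, dtype=float))
    quad = lam * t - t ** 2 / (2.0 * gamma)      # concave region, |t| <= gamma*lam
    flat = 0.5 * gamma * lam ** 2                # constant once |t| > gamma*lam
    return np.where(t <= gamma * lam, quad, flat)

def mcp_threshold(z, lam, gamma):
    """Univariate MCP solution ("firm" thresholding) for a standardized design."""
    z = np.asarray(z, dtype=float)
    az = np.abs(z)
    firm = np.sign(z) * np.maximum(az - lam, 0.0) / (1.0 - 1.0 / gamma)
    return np.where(az > gamma * lam, z, firm)   # no shrinkage beyond gamma*lam
```

Unlike soft thresholding, the rule leaves large coefficients untouched once $|z| > \gamma\lambda$, which is the source of the "nearly unbiased" property; as $\gamma\to\infty$ it recovers the Lasso, and as $\gamma\to 1^{+}$ it approaches hard thresholding.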
