Search Results for author: Yuling Yan

Found 13 papers, 0 papers with code

The Isotonic Mechanism for Exponential Family Estimation

no code implementations21 Apr 2023 Yuling Yan, Weijie J. Su, Jianqing Fan

Lastly, we show that the adjusted scores dramatically improve the accuracy of the original scores and achieve nearly minimax optimality for estimating the true scores, with statistical consistency, when the true scores have bounded total variation.
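The adjustment at the heart of the isotonic mechanism is an isotonic regression: raw review scores are projected onto the monotone order implied by the author's own ranking of their papers. A minimal pool-adjacent-violators (PAVA) sketch, assuming squared-error loss and scores already listed in the author's claimed weakest-to-strongest order (the function and setup are illustrative, not the paper's exponential-family formulation):

```python
def pava(y):
    """Least-squares isotonic regression: project y onto the set of
    non-decreasing sequences via pool-adjacent-violators."""
    blocks = []  # each block is [sum, count]; blocks merge on violations
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's mean exceeds the new block's mean
        # (compare means via cross-multiplication to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

# Raw scores listed from the author's claimed weakest to strongest paper:
print(pava([6.0, 4.0, 5.0]))  # -> [5.0, 5.0, 5.0]
```

The projection pools scores that violate the claimed ranking, so the adjusted scores are non-decreasing in the author's stated order.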

Minimax-Optimal Reward-Agnostic Exploration in Reinforcement Learning

no code implementations14 Apr 2023 Gen Li, Yuling Yan, Yuxin Chen, Jianqing Fan

This paper studies reward-agnostic exploration in reinforcement learning (RL) -- a scenario where the learner is unaware of the reward functions during the exploration stage -- and designs an algorithm that improves over the state of the art.

Offline RL reinforcement-learning +1

Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow

no code implementations4 Jan 2023 Yuling Yan, Kaizheng Wang, Philippe Rigollet

Gaussian mixture models form a flexible and expressive parametric family of distributions that has found use in a wide variety of applications.

The Efficacy of Pessimism in Asynchronous Q-Learning

no code implementations14 Mar 2022 Yuling Yan, Gen Li, Yuxin Chen, Jianqing Fan

This paper is concerned with the asynchronous form of Q-learning, which applies a stochastic approximation scheme to Markovian data samples.
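"Asynchronous" here means the updates are driven by a single Markovian trajectory, so each step updates only the one (state, action) entry just visited. A toy sketch on a small deterministic chain (the environment, step size, and episode count are illustrative assumptions, not the paper's setting):

```python
import random

# Deterministic 3-state chain: action 0 = stay, action 1 = move right.
# Reaching the last state yields reward 1 and ends the episode.
N_STATES, GAMMA, ALPHA = 3, 0.9, 0.5

def step(s, a):
    s_next = min(s + a, N_STATES - 1)
    reward = 1.0 if s_next == N_STATES - 1 and s != N_STATES - 1 else 0.0
    return s_next, reward, s_next == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)
for _ in range(500):                      # episodes
    s, done = 0, False
    while not done:
        a = rng.randrange(2)              # behavior policy: uniform exploration
        s_next, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[s_next])
        # Asynchronous update: only the visited (s, a) entry changes.
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next

print(round(Q[1][1], 3))  # -> 1.0 (moving right from state 1 earns the reward)
```

On this tiny chain the iterates converge to the optimal values (e.g., Q[0][1] approaches GAMMA * 1 = 0.9); the paper's concern is how fast such trajectory-driven updates converge in general, and how pessimism helps in the offline variant.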


Inference for Heteroskedastic PCA with Missing Data

no code implementations26 Jul 2021 Yuling Yan, Yuxin Chen, Jianqing Fan

Particularly worth highlighting is the inference procedure built on top of $\textsf{HeteroPCA}$, which is not only valid but also statistically efficient for broader scenarios (e.g., it covers a wider range of missing rates and signal-to-noise ratios).
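HeteroPCA copes with heteroskedastic noise by discarding the contaminated diagonal of the Gram matrix and iteratively re-imputing it from the current low-rank fit. A minimal numpy sketch under an idealized setup where the off-diagonal entries are exact and only the diagonal is corrupted (sizes, iteration count, and the toy input are illustrative assumptions):

```python
import numpy as np

def hetero_pca(G, r, iters=50):
    """Estimate the rank-r principal subspace of a Gram matrix whose
    diagonal is corrupted (e.g., by heteroskedastic noise)."""
    N = G.astype(float).copy()
    np.fill_diagonal(N, 0.0)                    # drop the unreliable diagonal
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(N)
        low_rank = U[:, :r] * s[:r] @ Vt[:r]    # rank-r approximation
        np.fill_diagonal(N, np.diag(low_rank))  # re-impute the diagonal only
    return U[:, :r]

# Rank-1 signal plus a heteroskedastic diagonal that plain PCA would misread:
n = 10
u = np.arange(1.0, n + 1.0)
u /= np.linalg.norm(u)
G = 5.0 * np.outer(u, u) + np.diag(np.linspace(0.5, 3.0, n))
u_hat = hetero_pca(G, r=1)[:, 0]
print(abs(u_hat @ u))  # close to 1: subspace recovered despite the bad diagonal
```

Because only the diagonal is perturbed here, the iteration effectively solves a rank-1 completion of the off-diagonal entries and recovers the true direction.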

Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model

no code implementations NeurIPS 2021 Bingyan Wang, Yuling Yan, Jianqing Fan

Our results show that for arbitrarily large-scale MDPs, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, hence the title of this paper.

Q-Learning reinforcement-learning +1

Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution under Random Designs

no code implementations4 Aug 2020 Yuxin Chen, Jianqing Fan, Bingyan Wang, Yuling Yan

We investigate the effectiveness of convex relaxation and nonconvex optimization in solving bilinear systems of equations under two different designs (i.e., a sort of random Fourier design and Gaussian design).

Efficient Clustering for Stretched Mixtures: Landscape and Optimality

no code implementations NeurIPS 2020 Kaizheng Wang, Yuling Yan, Mateo Díaz

This paper considers a canonical clustering problem where one receives unlabeled samples drawn from a balanced mixture of two elliptical distributions and aims for a classifier to estimate the labels.


Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data

no code implementations15 Jan 2020 Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

This paper delivers improved theoretical guarantees for the convex programming approach in low-rank matrix estimation, in the presence of (1) random noise, (2) gross sparse outliers, and (3) missing data.

Inference and Uncertainty Quantification for Noisy Matrix Completion

no code implementations10 Jun 2019 Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

As a byproduct, we obtain a sharp characterization of the estimation accuracy of our de-biased estimators, which, to the best of our knowledge, are the first tractable algorithms that provably achieve full statistical efficiency (including the preconstant).

Matrix Completion

Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization

no code implementations20 Feb 2019 Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan

This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently.

Low-Rank Matrix Completion
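As a concrete baseline for this setting, a simple spectral estimator rescales the observed entries by the inverse sampling rate and truncates the SVD; it recovers the matrix only coarsely, which is what the convex and nonconvex methods analyzed in the paper refine. A numpy sketch with an assumed rank-1 ground truth and noiseless, Bernoulli-sampled observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 0.7                      # matrix size and sampling rate (assumed)

# Rank-1 ground truth and a Bernoulli(p) observation mask.
u, v = rng.standard_normal(n), rng.standard_normal(n)
M = np.outer(u, v)
mask = rng.random((n, n)) < p

# Inverse-probability weighting makes the zero-filled matrix unbiased for M;
# rank-1 SVD truncation then denoises the sampling fluctuations.
M_obs = np.where(mask, M, 0.0) / p
U, s, Vt = np.linalg.svd(M_obs)
M_hat = s[0] * np.outer(U[:, 0], Vt[0])

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(rel_err)  # modest relative error; refined methods do much better
```

The inverse-probability weighting is the key step: without dividing by p, the zero-filled matrix estimates p * M and the leading singular value is biased downward.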

Motion Saliency Based Automatic Delineation of Glottis Contour in High-speed Digital Images

no code implementations9 Apr 2017 Xin Chen, Emma Marriott, Yuling Yan

In recent years, high-speed videoendoscopy (HSV) has significantly aided the diagnosis of voice pathologies and furthered the understanding of voice production.
