no code implementations • 29 Aug 2023 • TaeHee Lee, Jun S. Liu
By overcoming the limitations of traditional Gaussian process models, our proposed framework offers a robust and versatile tool for a wide array of applications.
no code implementations • 13 Oct 2020 • Xin Xing, Yu Gui, Chenguang Dai, Jun S. Liu
Deep neural networks (DNNs) have become increasingly popular and achieved outstanding performance in predictive tasks.
no code implementations • 15 Jul 2020 • Zhirui Hu, Zheng Tracy Ke, Jun S. Liu
The success of deep learning has inspired recent interest in applying neural networks to statistical inference.
no code implementations • ICLR 2020 • Xin Xing, Long Sha, Pengyu Hong, Zuofeng Shang, Jun S. Liu
Deep neural networks (DNNs) can be huge in size, requiring a considerable amount of energy and computational resources to operate, which limits their applications in numerous scenarios.
1 code implementation • 20 Feb 2020 • Chenguang Dai, Buyu Lin, Xin Xing, Jun S. Liu
Selecting relevant features associated with a given response variable is an important issue in many scientific fields.
no code implementations • 6 Nov 2019 • Xin Xing, Zuofeng Shang, Pang Du, Ping Ma, Wenxuan Zhong, Jun S. Liu
Under such a framework, the probability density comparison is equivalent to testing the presence/absence of interactions.
no code implementations • 27 Jul 2019 • Chenguang Dai, Jun S. Liu
The optimization formulation provides a new way to establish the convergence rate of the Wang-Landau algorithm, by exploiting the fact that, almost surely, the density estimates (on the logarithmic scale) remain in a compact set on which the objective function is strongly convex.
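For readers unfamiliar with the algorithm being analyzed, below is a minimal sketch of the classic Wang-Landau update on a toy discrete state space; it illustrates the density estimates referred to above, but the paper's optimization reformulation is not reproduced here, and all bin counts, sweep counts, and the flatness rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 50
energy = rng.integers(0, 10, size=n_states)  # toy energy levels (10 bins)
log_g = np.zeros(10)                         # running log density-of-states estimates
log_f = 1.0                                  # modification factor (log scale)
hist = np.zeros(10)

state = 0
for sweep in range(100_000):
    proposal = rng.integers(n_states)
    e_old, e_new = energy[state], energy[proposal]
    # Accept with probability min(1, g(e_old) / g(e_new)).
    if np.log(rng.random()) < log_g[e_old] - log_g[e_new]:
        state = proposal
    e = energy[state]
    log_g[e] += log_f                        # the Wang-Landau update
    hist[e] += 1
    # Halve the modification factor once the energy histogram is roughly flat.
    if hist.min() > 0.8 * hist.mean():
        log_f /= 2.0
        hist[:] = 0
```

Keeping the estimates on the logarithmic scale, as above, is what makes the compact-set argument in the paper natural to state.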
no code implementations • 28 May 2019 • Minsuk Shin, Young Lee, Jun S. Liu
Uncertainty quantification has been at the core of statistical machine learning, but its computational bottleneck has been a serious challenge for both Bayesians and frequentists.
no code implementations • 5 Oct 2018 • Xu Han, Hongsu Wang, Sanqian Zhang, Qunchao Fu, Jun S. Liu
In this paper, we develop a lower-than-character feature embedding, called radical embedding, and apply it with an LSTM model to sentence segmentation of pre-modern Chinese texts.
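A minimal sketch of the general idea, assuming PyTorch: each character is mapped to a radical index, embedded at the sub-character level, and fed to a bidirectional LSTM that emits per-position break/no-break logits. The vocabulary size, dimensions, and radical lookup are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RadicalLSTMSegmenter(nn.Module):
    def __init__(self, n_radicals=300, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_radicals, emb_dim)   # sub-character (radical) embedding
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)              # break / no-break per character

    def forward(self, radical_ids):                      # (batch, seq_len) of radical indices
        h, _ = self.lstm(self.embed(radical_ids))
        return self.out(h)                               # per-position segmentation logits

# Usage: map each character to its radical index, then tag positions.
model = RadicalLSTMSegmenter()
logits = model(torch.randint(0, 300, (1, 20)))           # one sequence of 20 characters
```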
2 code implementations • 5 Oct 2018 • Ruzhang Zhao, Pengyu Hong, Jun S. Liu
Relief-based algorithms have often been claimed to uncover feature interactions.
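For context, here is a minimal sketch of the classic Relief weight update (Kira and Rendell), the family of algorithms the paper examines; it is not the authors' method, and the L1 distance and iteration count are illustrative choices. It assumes each class has at least two samples.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        same = np.flatnonzero((y == y[i]) & (np.arange(n) != i))
        diff = np.flatnonzero(y != y[i])
        hit = X[same[np.abs(X[same] - X[i]).sum(axis=1).argmin()]]   # nearest hit
        miss = X[diff[np.abs(X[diff] - X[i]).sum(axis=1).argmin()]]  # nearest miss
        # Features that separate the classes gain weight; noisy ones lose it.
        w += np.abs(X[i] - miss) - np.abs(X[i] - hit)
    return w / n_iter
```

Because the update compares nearest neighbors rather than single features in isolation, weights can pick up interaction effects, which is the claim the paper scrutinizes.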
no code implementations • 25 Nov 2015 • Matey Neykov, Jun S. Liu, Tianxi Cai
In the present paper we analyze algorithms based on covariance screening and least squares with $L_1$ penalization (i.e., LASSO) and demonstrate that they can also enjoy optimal (up to a scalar) rescaled sample size in terms of support recovery, albeit under slightly different assumptions on $f$ and $\varepsilon$ compared to the SIR-based algorithms.
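A minimal sketch of what support recovery via $L_1$ penalization looks like in this setting, on a toy single-index-style model $y = f(X\boldsymbol{\beta}) + \varepsilon$; the link $f$, dimensions, and penalty level below are illustrative assumptions, not the paper's regime.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 500, 200, 5
beta = np.zeros(p)
beta[:s] = 1.0                                         # true support: first s coordinates
X = rng.standard_normal((n, p))
y = np.sign(X @ beta) + 0.1 * rng.standard_normal(n)   # f = sign, Gaussian noise

support = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)
print(sorted(support))                                 # ideally recovers indices 0..4
```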
no code implementations • 7 Nov 2015 • Matey Neykov, Qian Lin, Jun S. Liu
When $s=O(p^{1-\delta})$ for some $\delta>0$, we demonstrate that both procedures can succeed in recovering the support of $\boldsymbol{\beta}$ as long as the rescaled sample size $\kappa=\frac{n}{s\log(p-s)}$ is larger than a certain critical threshold.
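A small numerical illustration of the rescaled sample size, with made-up values of $n$, $p$, and $s$:

```python
import math

# Hypothetical numbers; support recovery succeeds when kappa exceeds
# a critical threshold, per the result stated above.
n, p, s = 2000, 10000, 20
kappa = n / (s * math.log(p - s))   # kappa = n / (s * log(p - s))
print(f"kappa = {kappa:.2f}")
```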
no code implementations • 30 Oct 2015 • Matey Neykov, Yang Ning, Jun S. Liu, Han Liu
Our main theoretical contribution is to establish a unified Z-estimation theory of confidence regions for high-dimensional problems.
1 code implementation • 8 Jun 2015 • Viktoriya Krakovna, Jiong Du, Jun S. Liu
In many practical applications, it is of interest to determine which features and feature interactions are relevant to the prediction task.