Search Results for author: Jian Hong

Found 7 papers, 0 papers with code

Rademacher upper bounds for cross-validation errors with an application to the lasso

no code implementations30 Jul 2020 Ning Xu, Timothy C. G. Fisher, Jian Hong

We establish a general upper bound for $K$-fold cross-validation ($K$-CV) errors that can be adapted to many $K$-CV-based estimators and learning algorithms.

Blocking Variable Selection
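The $K$-CV error that the paper's bound controls can be sketched in a few lines. This is a generic toy implementation with a placeholder mean predictor, not the authors' estimator; the function names `kfold_cv_error`, `fit_mean`, and `predict_mean` are illustrative:

```python
import random

def kfold_cv_error(X, y, fit, predict, K=5, seed=0):
    """Average K-fold cross-validation squared error.

    fit(X_train, y_train) -> model; predict(model, x) -> float.
    """
    n = len(y)
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::K] for i in range(K)]  # K roughly equal folds
    errs = []
    for k in range(K):
        held_out = set(folds[k])
        train = [i for i in idx if i not in held_out]
        model = fit([X[i] for i in train], [y[i] for i in train])
        se = [(predict(model, X[i]) - y[i]) ** 2 for i in folds[k]]
        errs.append(sum(se) / len(se))
    return sum(errs) / K  # quantity the Rademacher bound upper-bounds

# toy estimator: always predict the training mean (illustration only)
fit_mean = lambda X, y: sum(y) / len(y)
predict_mean = lambda m, x: m
```

In practice `fit`/`predict` would wrap the lasso or any other learning algorithm the bound is adapted to.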

Instrument variable detection with graph learning: an application to high dimensional GIS-census data for house pricing

no code implementations30 Jul 2020 Ning Xu, Timothy C. G. Fisher, Jian Hong

In this paper, we merge two well-known tools from machine learning and biostatistics, variable selection algorithms and probabilistic graphs, to estimate house prices and the corresponding causal structure using 2010 data on Sydney.

BIG-bench Machine Learning Econometrics +4
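As a rough illustration of the graph-learning half of that pipeline, one can connect variables whose pairwise correlation exceeds a threshold. This is only a toy stand-in for a probabilistic graph; real structure learning (e.g. constraint-based algorithms) relies on conditional independence tests, and `correlation_graph` and `tau` are assumed names:

```python
def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((z - mb) ** 2 for z in b)
    d = (va * vb) ** 0.5
    return cov / d if d else 0.0

def correlation_graph(cols, names, tau=0.5):
    """Undirected graph: edge between variables with |correlation| > tau."""
    edges = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if abs(corr(cols[i], cols[j])) > tau:
                edges.append((names[i], names[j]))
    return edges
```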

Accuracy and stability of solar variable selection comparison under complicated dependence structures

no code implementations30 Jul 2020 Ning Xu, Timothy C. G. Fisher, Jian Hong

In this paper we focus on the empirical variable-selection performance of subsample-ordered least angle regression (solar), a novel ultrahigh-dimensional redesign of lasso, on empirical data with complicated dependence structures and, hence, severe multicollinearity and grouping-effect issues.

Graph Learning regression +1
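The subsample-ordering idea behind solar can be caricatured as ranking variables by how often they survive a screen across random subsamples, which is what makes the selection robust to multicollinearity. The sketch below uses a simple correlation screen in place of least angle regression and is not the authors' algorithm; `subsample_selection_freq` and its parameters are illustrative:

```python
import random

def subsample_selection_freq(X, y, top_m=2, n_sub=50, frac=0.7, seed=0):
    """Fraction of random subsamples in which each variable ranks among
    the top-m by |correlation| with y (toy screen, not least angle
    regression)."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    m = max(1, int(frac * n))
    for _ in range(n_sub):
        idx = rng.sample(range(n), m)
        cols = [[X[i][j] for i in idx] for j in range(p)]
        ys = [y[i] for i in idx]
        cors = [abs(corr(col, ys)) for col in cols]
        for j in sorted(range(p), key=lambda j: -cors[j])[:top_m]:
            counts[j] += 1
    return [c / n_sub for c in counts]  # selection frequency per variable

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    d = (sum((x - ma) ** 2 for x in a) * sum((z - mb) ** 2 for z in b)) ** 0.5
    return cov / d if d else 0.0
```

Variables selected in nearly every subsample are kept; variables that appear only sporadically, e.g. because of grouping effects, are dropped.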

$\left( \beta, \varpi \right)$-stability for cross-validation and the choice of the number of folds

no code implementations20 May 2017 Ning Xu, Jian Hong, Timothy C. G. Fisher

The $\left( \beta, \varpi \right)$-stability mathematically connects the generalization ability and the stability of the cross-validated model via the Rademacher complexity.

Model Selection
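The Rademacher complexity appearing in that connection has a direct empirical version: draw random sign vectors and measure how well the function class correlates with them. A minimal Monte Carlo estimate for a finite class, assuming each function is given as its vector of predictions on the sample (`empirical_rademacher` is an illustrative name):

```python
import random

def empirical_rademacher(preds, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite class, where preds[f] is function f's prediction vector on
    the n sample points:

        R_hat = E_sigma[ max_f (1/n) sum_i sigma_i * preds[f][i] ]
    """
    rng = random.Random(seed)
    n = len(preds[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]  # Rademacher signs
        total += max(sum(s * v for s, v in zip(sigma, f)) / n for f in preds)
    return total / n_draws
```

A richer class fits random signs better and so has higher complexity, which is exactly the quantity the stability bound trades off against generalization.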

Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression

no code implementations12 Sep 2016 Ning Xu, Jian Hong, Timothy C. G. Fisher

We show that the error bounds may be used for tuning key estimation hyper-parameters, such as the number of folds $K$ in cross-validation.

regression

Model selection consistency from the perspective of generalization ability and VC theory with an application to Lasso

no code implementations1 Jun 2016 Ning Xu, Jian Hong, Timothy C. G. Fisher

In this paper, we study model selection from the perspective of generalization ability, under the framework of structural risk minimization (SRM) and Vapnik-Chervonenkis (VC) theory.

Model Selection
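The SRM principle the paper works under, minimizing empirical risk plus a capacity penalty over nested model classes, can be illustrated with a toy nested family of histogram regressors, where the bin count plays the role of model capacity. The linear penalty `c * b / n` is a crude placeholder for the VC-based term, and `bin_mse`/`srm_select` are illustrative names:

```python
def bin_mse(y, b):
    """Training MSE of a b-bin piecewise-constant fit to y (by index)."""
    n = len(y)
    err = 0.0
    for k in range(b):
        seg = y[k * n // b:(k + 1) * n // b]
        if not seg:
            continue
        mu = sum(seg) / len(seg)  # each bin predicts its own mean
        err += sum((v - mu) ** 2 for v in seg)
    return err / n

def srm_select(y, max_bins=8, c=1.0):
    """SRM-style choice: minimize empirical risk + c*b/n over nested
    classes indexed by bin count b (penalty is a toy stand-in for the
    VC bound)."""
    n = len(y)
    return min(range(1, max_bins + 1),
               key=lambda b: bin_mse(y, b) + c * b / n)
```

Larger classes always fit the training data at least as well, so without the penalty the selection would overfit; the capacity term is what makes the choice consistent in the SRM sense.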
