no code implementations • 17 Feb 2020 • Huijie Feng, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, Yang Ning
In this work, we derive a new regularized risk in which the regularizer adaptively encourages both the accuracy and the robustness of the smoothed counterpart while the base classifier is trained.
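The "smoothed counterpart" here refers to a smoothed classifier built from the base classifier, as in randomized smoothing: the smoothed classifier predicts the class most likely under Gaussian perturbations of the input. A minimal Monte Carlo sketch of such a smoothed prediction is below; the function name `smoothed_predict`, the toy base classifier, and all parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Monte Carlo estimate of the smoothed classifier's prediction:
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).

    This is a generic randomized-smoothing sketch, not the paper's method.
    """
    rng = np.random.default_rng(rng)
    # Draw n_samples Gaussian perturbations of the input x.
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    # Base classifier labels each noisy copy of x.
    preds = base_classifier(x + noise)
    # Return the majority-vote label.
    labels, counts = np.unique(preds, return_counts=True)
    return labels[np.argmax(counts)]

# Toy base classifier (assumed for illustration): sign of the first coordinate.
f = lambda X: (X[:, 0] > 0).astype(int)
print(smoothed_predict(f, np.array([0.5, -1.0]), sigma=0.1, rng=0))
```

The regularizer described in the abstract would act on the base classifier's training loss so that this majority-vote prediction is both accurate and stable under the injected noise.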
no code implementations • 26 May 2019 • Huijie Feng, Yang Ning, Jiwei Zhao
Statistically, we show that the finite-sample error bound for estimating $\theta$ in the $\ell_2$ norm is of order $(s\log d/n)^{\beta/(2\beta+1)}$, where $d$ is the dimension of $\theta$, $s$ is the sparsity level, $n$ is the sample size, and $\beta$ is the smoothness of the conditional density of $X$ given the response $Y$ and the covariates $Z$.