no code implementations • 13 Mar 2024 • Hong Hu, Yue M. Lu, Theodor Misiakiewicz
On the other hand, if $p = o(n)$, the number of random features $p$ is the limiting factor, and the random feature ridge regression (RFRR) test error matches the approximation error of the random feature model class (akin to taking $n = \infty$).
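For concreteness, here is a minimal numpy sketch of random feature ridge regression; the ReLU feature map, the linear teacher, and all parameter values below are assumptions made for illustration, not the setting analyzed in the paper.

```python
# Minimal RFRR sketch: fixed random ReLU features, ridge fit on top.
# Teacher model and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, p, ridge = 500, 50, 200, 1e-2   # samples, dimension, random features, penalty

# Synthetic data from a linear teacher (an assumption for this sketch).
X = rng.standard_normal((n, d)) / np.sqrt(d)
beta = rng.standard_normal(d)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random feature map phi(x) = relu(W x), with W drawn once and frozen.
W = rng.standard_normal((p, d)) / np.sqrt(d)
Phi = np.maximum(X @ W.T, 0.0)        # shape (n, p)

# Ridge regression in feature space: a = (Phi^T Phi + ridge * I)^{-1} Phi^T y.
a = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(p), Phi.T @ y)

# Test error on fresh samples from the same teacher.
X_te = rng.standard_normal((n, d)) / np.sqrt(d)
err = np.mean((np.maximum(X_te @ W.T, 0.0) @ a - X_te @ beta) ** 2)
print(f"RFRR test MSE: {err:.4f}")
```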
1 code implementation • 26 Dec 2022 • Cheng Shi, Liming Pan, Hong Hu, Ivan Dokmanić
Motivated by experimental observations of "transductive" double descent in key networks and datasets, we use analytical tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks on the contextual stochastic block model.
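As a rough illustration of this setup, the sketch below builds a contextual stochastic block model and fits a one-step graph convolution with a ridge readout in a transductive split; the edge probabilities, feature model, and readout are assumptions for illustration, not the paper's exact model.

```python
# Toy contextual stochastic block model (CSBM) + one-step graph convolution.
# All probabilities and model choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 50
labels = np.sign(rng.standard_normal(n))            # two balanced communities

# CSBM graph: edge probability depends on whether the labels agree.
p_in, p_out = 0.05, 0.01
agree = np.equal.outer(labels, labels)
A = (rng.random((n, n)) < np.where(agree, p_in, p_out)).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-loops

# Contextual features: class mean plus Gaussian noise.
mu = rng.standard_normal(d) / np.sqrt(d)
X = np.outer(labels, mu) + rng.standard_normal((n, d)) / np.sqrt(d)

# One graph-convolution step: average each node's neighbourhood (self-loop added).
deg = A.sum(1) + 1.0
H = ((A + np.eye(n)) / deg[:, None]) @ X

# Transductive split: ridge fit on observed nodes, evaluated on the rest.
obs = rng.random(n) < 0.5
w = np.linalg.solve(H[obs].T @ H[obs] + 1e-2 * np.eye(d), H[obs].T @ labels[obs])
acc = np.mean(np.sign(H[~obs] @ w) == labels[~obs])
print(f"held-out accuracy: {acc:.3f}")
```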
no code implementations • 30 May 2022 • Lechao Xiao, Hong Hu, Theodor Misiakiewicz, Yue M. Lu, Jeffrey Pennington
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes.
no code implementations • 13 May 2022 • Hong Hu, Yue M. Lu
The generalization performance of kernel ridge regression (KRR) exhibits a multi-phased pattern that crucially depends on the scaling relationship between the sample size $n$ and the underlying dimension $d$.
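To make the objects concrete, a minimal KRR sketch follows; the RBF kernel, bandwidth, and synthetic target are illustrative assumptions, not the scaling setup studied in the paper.

```python
# Minimal kernel ridge regression (KRR) sketch with an RBF kernel.
# Kernel choice, bandwidth, and data model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, ridge = 300, 20, 1e-3

def rbf_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

# Synthetic data whose target depends nonlinearly on the inputs.
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sin(X.sum(1)) + 0.1 * rng.standard_normal(n)

# KRR solves (K + ridge * n * I) alpha = y; predictions are K(x, X) alpha.
alpha = np.linalg.solve(rbf_kernel(X, X) + ridge * n * np.eye(n), y)

X_te = rng.standard_normal((n, d)) / np.sqrt(d)
pred = rbf_kernel(X_te, X) @ alpha
print(f"test MSE: {np.mean((pred - np.sin(X_te.sum(1)))**2):.4f}")
```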
no code implementations • 13 Mar 2020 • Hanbin Dai, Liangbo Zhou, Feng Zhang, Zhengyu Zhang, Hong Hu, Xiatian Zhu, Mao Ye
Taking these together, we formulate a novel Distribution-Aware coordinate Representation for Keypoint (DARK) method.
1 code implementation • 27 Mar 2019 • Hong Hu, Yue M. Lu
In sparse linear regression, the SLOPE estimator generalizes LASSO by assigning a different, magnitude-dependent regularization level to each coordinate of the estimate (a minimal sketch follows below).
Information Theory • Statistics Theory
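As a rough illustration of the SLOPE estimator, the sketch below runs proximal gradient descent with the prox of the sorted-L1 penalty (computed via pool-adjacent-violators); the linearly decaying penalty sequence and the synthetic design are assumptions made for illustration.

```python
# SLOPE via proximal gradient descent: illustrative sketch only.
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 penalty sum_i lam_i * |v|_(i), lam non-increasing:
    sort |v| in decreasing order, subtract lam, then enforce a non-increasing
    result with pool-adjacent-violators (PAVA), clipping at zero."""
    sign, u = np.sign(v), np.abs(v)
    order = np.argsort(-u)
    z = u[order] - lam
    starts, sums, counts = [], [], []
    for i, zi in enumerate(z):
        starts.append(i); sums.append(zi); counts.append(1)
        # Merge adjacent blocks while the non-increasing constraint is violated.
        while len(starts) > 1 and sums[-2] / counts[-2] <= sums[-1] / counts[-1]:
            s_last, c_last = sums.pop(), counts.pop()
            starts.pop()
            sums[-1] += s_last
            counts[-1] += c_last
    x = np.zeros_like(z)
    for s, sm, c in zip(starts, sums, counts):
        x[s:s + c] = max(sm / c, 0.0)
    out = np.empty_like(x)
    out[order] = x
    return sign * out

rng = np.random.default_rng(0)
n, p, k = 200, 400, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p); beta[:k] = 5.0                  # k-sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = np.linspace(1.0, 0.1, p)            # non-increasing penalty levels
step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz const. of the gradient
b = np.zeros(p)
for _ in range(500):
    b = prox_sorted_l1(b - step * X.T @ (X @ b - y), step * lam)
print("nonzero coordinates recovered:", int(np.sum(b != 0)))
```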
no code implementations • NeurIPS 2019 • Chuang Wang, Hong Hu, Yue M. Lu
We present a theoretical analysis of the training process for a single-layer GAN fed by high-dimensional input data.
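As a loose illustration of the model class, here is a toy alternating-SGD loop for a single-layer GAN with a linear generator and a logistic discriminator on spiked high-dimensional data; every modeling choice here is an assumption for illustration, not the dynamics analyzed in the paper.

```python
# Toy single-layer (linear) GAN trained by alternating SGD.
# Data model, losses, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, steps, lr = 200, 20000, 0.05
u = rng.standard_normal(d); u /= np.linalg.norm(u)   # planted data direction

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w_g = rng.standard_normal(d) / np.sqrt(d)            # generator direction
w_d = rng.standard_normal(d) / np.sqrt(d)            # discriminator weights

for _ in range(steps):
    x_real = u * rng.standard_normal() + rng.standard_normal(d) / np.sqrt(d)
    x_fake = w_g * rng.standard_normal() + rng.standard_normal(d) / np.sqrt(d)
    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w_d += lr * ((1 - sigmoid(w_d @ x_real)) * x_real
                 - sigmoid(w_d @ x_fake) * x_fake)
    # Generator ascends log D(fake) (the non-saturating loss).
    z = rng.standard_normal()
    x_fake = w_g * z + rng.standard_normal(d) / np.sqrt(d)
    w_g += lr * (1 - sigmoid(w_d @ x_fake)) * z * w_d

print("overlap |<w_g, u>| / |w_g|:", abs(w_g @ u) / np.linalg.norm(w_g))
```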