Simple and Almost Assumption-Free Out-of-Sample Bound for Random Feature Mapping

Random feature mapping (RFM) is a popular method for speeding up kernel methods at the cost of a small loss in accuracy. We study kernel ridge regression with random feature mapping (RFM-KRR) and establish novel out-of-sample error upper and lower bounds. While out-of-sample bounds for RFM-KRR have been established by prior work, our theory is of interest for two reasons. On the one hand, our results rest on weak, checkable assumptions, whereas existing theories rely on various uncheckable assumptions, making it unclear whether their bounds reflect the nature of RFM-KRR or are merely consequences of strong assumptions. On the other hand, our analysis relies entirely on elementary linear algebra and is therefore easy to read and verify. Finally, our experiments lend empirical support to the theory.
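To make the setting concrete, below is a minimal sketch of RFM-KRR using random Fourier features for the Gaussian kernel. This is a generic illustration of the method the abstract refers to, not the paper's own implementation; all function names, the feature dimension `D`, and the hyperparameters are illustrative assumptions. Instead of solving an n-by-n kernel system, ridge regression is performed in a D-dimensional random feature space, which is the source of the speedup.

```python
import numpy as np

def random_fourier_features(X, W, b):
    # Feature map z(x) = sqrt(2/D) * cos(W x + b), whose inner products
    # approximate the Gaussian kernel: z(x)·z(x') ≈ k(x, x').
    return np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)

def rfm_krr_fit(X, y, D=200, gamma=1.0, lam=1e-3, seed=0):
    # Sample frequencies from the kernel's spectral density: for
    # k(x, x') = exp(-gamma * ||x - x'||^2), w ~ N(0, 2*gamma*I).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, X.shape[1]))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Z = random_fourier_features(X, W, b)
    # Ridge regression in feature space: solve a D x D system,
    # O(n D^2) work instead of the O(n^3) of exact KRR.
    alpha = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
    return W, b, alpha

def rfm_krr_predict(X, W, b, alpha):
    return random_fourier_features(X, W, b) @ alpha

# Toy regression: recover sin(x) from noisy samples.
rng = np.random.default_rng(1)
X_train = rng.uniform(-3, 3, size=(300, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=300)
W, b, alpha = rfm_krr_fit(X_train, y_train)
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
y_pred = rfm_krr_predict(X_test, W, b, alpha)
mse = np.mean((y_pred - np.sin(X_test[:, 0])) ** 2)
```

The out-of-sample error studied in the paper is, in this notation, the gap between the test-time risk of the learned predictor and that of its exact-kernel counterpart.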
