
Stability of the Stochastic Gradient Method for an Approximated Large Scale Kernel Machine

In this paper we measured the stability of the stochastic gradient method (SGM) for learning an approximated Fourier primal support vector machine. The stability of the algorithm is assessed through its generalization error, measured as the absolute difference between the test and the training error. Our problem is to learn an approximated kernel function using random Fourier features for a binary classification problem in an online convex optimization setting. For a convex, Lipschitz-continuous, and smooth loss function, the stochastic gradient method is stable given a reasonable number of iterations. We showed that, with high probability, SGM generalizes well for an approximated kernel under the given assumptions. We empirically verified the theoretical findings for different parameters using several data sets.
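The setup described in the abstract can be sketched concretely. The following minimal Python sketch is not the authors' implementation: it assumes an RBF kernel approximated by random Fourier features, a hinge loss for the primal SVM, a Pegasos-style 1/(lambda*t) step size, and synthetic Gaussian data, all of which are illustrative choices rather than details taken from the paper. It then reports the train/test gap that the stability analysis bounds.

import numpy as np

def random_fourier_features(X, n_features=200, gamma=0.5, seed=0):
    """Map X of shape (n, d) to z(X) with z(x).z(y) ~ exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def sgm_hinge(Z, y, lam=1e-3, n_iter=20000, seed=0):
    """One-sample-at-a-time SGM on the L2-regularized hinge loss; y in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(Z.shape[1])
    for t in range(1, n_iter + 1):
        i = rng.integers(len(y))
        eta = 1.0 / (lam * t)            # decaying step size; an illustrative choice
        if y[i] * (Z[i] @ w) < 1.0:      # hinge is active: subgradient includes -y_i z_i
            w -= eta * (lam * w - y[i] * Z[i])
        else:                            # only the regularizer contributes
            w -= eta * lam * w
    return w

# Toy usage on two Gaussian blobs: the quantity of interest is the
# generalization gap |test error - train error| from the abstract.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 5)), rng.normal(1.0, 1.0, (200, 5))])
y = np.concatenate([-np.ones(200), np.ones(200)])
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]
Z = random_fourier_features(X)
w = sgm_hinge(Z[:300], y[:300])
train_err = np.mean(np.sign(Z[:300] @ w) != y[:300])
test_err = np.mean(np.sign(Z[300:] @ w) != y[300:])
print(f"train error {train_err:.3f}  test error {test_err:.3f}  gap {abs(test_err - train_err):.3f}")

On this toy problem the gap should be small, consistent with the stability argument: for a convex, Lipschitz, smooth loss, the SGM iterates are insensitive to swapping a single training example, which bounds the difference between the test and training errors.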
