
ORCCA: Optimal Randomized Canonical Correlation Analysis

The random features approach has been widely used for kernel approximation in large-scale machine learning. A number of recent studies have explored data-dependent sampling of features, modifying the stochastic oracle from which random features are drawn. While the techniques proposed in this line of work improve the approximation, their suitability is often verified only on a single learning task. In this paper, we propose a task-specific scoring rule for selecting random features, which can be employed for different applications with minor adjustments. We restrict our attention to Canonical Correlation Analysis (CCA) and provide a novel, principled guide for finding the score function that maximizes the canonical correlations. We prove that this method, called ORCCA, can outperform (in expectation) the corresponding Kernel CCA with a default kernel. Numerical experiments verify that ORCCA is significantly superior to other approximation techniques on the CCA task.
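As context for the abstract, the sketch below illustrates the general pipeline it describes: map both views through random Fourier features (the standard kernel approximation), optionally keep the highest-scoring features under some data-dependent score, and run linear CCA on the resulting feature maps. It is a minimal illustration only; the `toy_score` function is a hypothetical placeholder and does not reproduce ORCCA's task-specific scoring rule, which is defined in the paper.

```python
# Minimal sketch: random Fourier features + linear CCA.
# `toy_score` is a hypothetical, data-dependent score used only for illustration;
# it is NOT the ORCCA scoring rule from the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, dx, dy, D = 500, 10, 8, 300        # samples, view dimensions, kept random features

X = rng.standard_normal((n, dx))
Y = X[:, :dy] + 0.5 * rng.standard_normal((n, dy))   # correlated second view

def rff(Z, W, b):
    """Random Fourier features approximating a Gaussian (RBF) kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(Z @ W + b)

# Oversample candidate features from the default Gaussian spectral measure ...
Wx = rng.standard_normal((dx, 2 * D)); bx = rng.uniform(0, 2 * np.pi, 2 * D)
Wy = rng.standard_normal((dy, 2 * D)); by = rng.uniform(0, 2 * np.pi, 2 * D)

# ... then keep the D features with the highest (placeholder) data-dependent score.
def toy_score(Phi):
    return np.linalg.norm(Phi - Phi.mean(axis=0), axis=0)

Phix, Phiy = rff(X, Wx, bx), rff(Y, Wy, by)
ix = np.argsort(toy_score(Phix))[-D:]
iy = np.argsort(toy_score(Phiy))[-D:]

# Linear CCA on the selected random-feature maps.
cca = CCA(n_components=2)
cca.fit(Phix[:, ix], Phiy[:, iy])
U, V = cca.transform(Phix[:, ix], Phiy[:, iy])
print("estimated canonical correlations:",
      [float(np.corrcoef(U[:, k], V[:, k])[0, 1]) for k in range(2)])
```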
