Exponential Error Convergence in Data Classification with Optimized Random Features: Acceleration by Quantum Machine Learning

16 Jun 2021 · Hayata Yamasaki, Sho Sonoda

Classification is a common task in machine learning. Random features (RFs) are a central technique for scalable learning algorithms based on kernel methods, and the more recently proposed optimized random features, sampled depending on the model and the data distribution, can provably minimize the required number of features. However, existing research on classification using optimized RFs has suffered from the computational hardness of sampling each optimized RF; moreover, it has failed to achieve the exponentially fast error convergence that other state-of-the-art kernel methods attain under a low-noise condition. To overcome these slowdowns, we construct a classification algorithm with optimized RFs accelerated by means of quantum machine learning (QML) and study its runtime to clarify its overall advantage. We prove that our algorithm achieves exponential error convergence under the low-noise condition even with optimized RFs; at the same time, it exploits the significant reduction in the number of features without the computational hardness, owing to QML. These results reveal a promising application of QML: accelerating a leading kernel-based classification algorithm while preserving its wide applicability and exponential error-convergence speed.
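To make the baseline concrete, the sketch below shows standard (data-independent) random Fourier features for the Gaussian kernel (Rahimi & Recht style), followed by a ridge-regularized linear classifier on the feature map. This is only the classical RF technique the abstract builds on, not the paper's method: the paper replaces the data-independent spectral sampling below with optimized, data-dependent feature sampling accelerated by QML. All names and parameters (sigma, n_features, lam, the toy data) are illustrative assumptions.

```python
import numpy as np


class RandomFourierFeatures:
    """Standard random Fourier features approximating the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).

    Baseline sketch only: optimized RFs (as in the paper) would sample the
    frequencies W from a data-optimized density instead of the kernel's
    data-independent spectral density used here."""

    def __init__(self, n_features=300, sigma=1.0, seed=0):
        self.n_features = n_features
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def fit(self, X):
        d = X.shape[1]
        # Frequencies from the Gaussian kernel's spectral density,
        # plus uniform random phases.
        self.W = self.rng.normal(scale=1.0 / self.sigma, size=(d, self.n_features))
        self.b = self.rng.uniform(0.0, 2.0 * np.pi, size=self.n_features)
        return self

    def transform(self, X):
        # Feature map phi(x) with E[phi(x) . phi(y)] ~ k(x, y).
        return np.sqrt(2.0 / self.n_features) * np.cos(X @ self.W + self.b)


# Toy usage: ridge-regularized least squares on the feature map as a
# linear classifier (labels in {-1, +1}).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 5))
y_train = np.sign(X_train[:, 0] * X_train[:, 1])

rff = RandomFourierFeatures().fit(X_train)
Phi = rff.transform(X_train)
lam = 1e-3  # ridge regularization strength (illustrative)
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y_train)
pred = np.sign(rff.transform(X_train) @ w)
print("train accuracy:", (pred == y_train).mean())
```

The number of features n_features controls the kernel-approximation quality; the paper's point is that optimized RFs provably minimize this number, at the cost of a sampling step that is classically hard and that the QML subroutine accelerates.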
