Growing Adaptive Multi-hyperplane Machines

ICML 2020  ·  Nemanja Djuric, Zhuang Wang, Slobodan Vucetic

The Adaptive Multi-hyperplane Machine (AMM) is an online algorithm for learning the Multi-hyperplane Machine (MM), a classification model that allows multiple hyperplanes per class. AMM is based on Stochastic Gradient Descent (SGD), with training time comparable to a linear Support Vector Machine (SVM) and significantly higher accuracy. On the other hand, empirical results indicate a large accuracy gap between AMM and non-linear SVMs. In this paper we show that this performance gap is not due to limited representability of the MM model, as it can represent arbitrary concepts. We establish a connection between AMM and Learning Vector Quantization (LVQ), and, motivated by Growing LVQ, introduce a novel Growing AMM (GAMM) algorithm that injects duplicated hyperplanes into the MM model during SGD training. We provide theoretical results showing that GAMM has favorable convergence properties, and analyze the generalization bound of MM models. Experiments indicate that GAMM achieves significantly improved accuracy on non-linear problems with only slightly slower training compared to AMM. On some tasks GAMM is even more accurate than non-linear SVMs and other popular classifiers such as Neural Networks and Random Forests.
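To make the MM model concrete, here is a minimal sketch of its prediction rule under the formulation described above: each class keeps several weight vectors (hyperplanes), a class's score is the maximum inner product over its hyperplanes, and the predicted label is the highest-scoring class. All names and the toy weights below are illustrative, not taken from the paper's reference implementation.

```python
def dot(w, x):
    """Inner product of two equal-length vectors."""
    return sum(wi * xi for wi, xi in zip(w, x))

def mm_predict(weights, x):
    """Predict the class of x with an MM model.

    weights: dict mapping class label -> list of hyperplanes
    (weight vectors); the class score is the max inner product
    over that class's hyperplanes.
    """
    scores = {c: max(dot(w, x) for w in planes)
              for c, planes in weights.items()}
    return max(scores, key=scores.get)

# Toy model (illustrative only): two classes, two hyperplanes each,
# over 2-D inputs.
weights = {
    0: [[1.0, 0.0], [0.5, -1.0]],
    1: [[-1.0, 0.0], [0.0, 1.0]],
}
```

With multiple hyperplanes per class, the induced decision regions are intersections and unions of half-spaces, which is what lets the MM model capture non-linear concepts that a single hyperplane per class cannot.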


