A Support Vector Machine (SVM) is a non-parametric supervised learning model. For non-linear classification and regression, it uses the kernel trick to map inputs into a high-dimensional feature space. An SVM constructs a hyperplane, or set of hyperplanes, in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. In the linearly separable case, the decision function is determined by the few samples lying on the margin boundaries, called "support vectors".
Source: scikit-learn
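As a minimal sketch of the ideas above, the following example fits an RBF-kernel SVM with scikit-learn's `SVC` on a toy XOR-style dataset (the dataset and hyperparameter values here are illustrative assumptions, not from the source); the labels are not linearly separable in the input space, so the kernel trick is what makes the separation possible:

```python
import numpy as np
from sklearn.svm import SVC

# Toy dataset (assumed for illustration): XOR-style labels,
# which are NOT linearly separable in the 2-D input space.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)

# RBF kernel implicitly maps inputs to a high-dimensional feature
# space where a separating hyperplane can be found.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# The support vectors are the training points that lie on or
# inside the margin and determine the decision function.
print("number of support vectors:", clf.support_vectors_.shape[0])
print("training accuracy: %.2f" % clf.score(X, y))
```

Only the support vectors influence the fitted decision function, which is why SVMs can remain compact even on larger training sets.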
Task | Papers | Share |
---|---|---|
Classification | 86 | 13.56% |
BIG-bench Machine Learning | 55 | 8.68% |
General Classification | 43 | 6.78% |
Electroencephalogram (EEG) | 22 | 3.47% |
Anomaly Detection | 16 | 2.52% |
Emotion Recognition | 15 | 2.37% |
Sentiment Analysis | 14 | 2.21% |
Image Classification | 13 | 2.05% |
Dimensionality Reduction | 10 | 1.58% |