no code implementations • 9 Aug 2022 • Jacques Wainer
This paper proposes a Bayesian model for comparing multiple algorithms across multiple data sets, using any performance metric.
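A minimal sketch of the general idea of Bayesian algorithm comparison, not the model from this paper: a simple Beta-Binomial analysis of how often one algorithm beats another across data sets, with made-up scores, illustrating reasoning about a posterior probability rather than a p-value.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical per-dataset scores for two algorithms on the same metric.
scores_a = np.array([0.91, 0.85, 0.78, 0.88, 0.93, 0.81, 0.90, 0.84])
scores_b = np.array([0.89, 0.86, 0.75, 0.85, 0.92, 0.80, 0.88, 0.86])

wins = int(np.sum(scores_a > scores_b))
n = len(scores_a)

# Beta(1, 1) prior on p = P(A beats B on a new data set); posterior is Beta.
posterior = beta(1 + wins, 1 + n - wins)
print("P(p > 0.5 | data) =", 1 - posterior.cdf(0.5))
```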
no code implementations • 26 Aug 2020 • Jacques Wainer, Pablo Fonseca
In general, hyperparameter selection is a non-convex optimization problem, and many algorithms have been proposed to solve it, among them grid search, random search, Bayesian optimization, simulated annealing, particle swarm optimization, Nelder-Mead, and others.
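A minimal sketch of one of the simpler strategies listed above, random search, applied to the two main RBF-SVM hyperparameters with scikit-learn; the dataset, search ranges, and budget are illustrative only.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Sample C and gamma log-uniformly, a common choice for scale parameters.
param_distributions = {
    "C": loguniform(1e-2, 1e3),
    "gamma": loguniform(1e-4, 1e1),
}

search = RandomizedSearchCV(
    SVC(kernel="rbf"),
    param_distributions=param_distributions,
    n_iter=50,   # number of random configurations to evaluate
    cv=5,        # 5-fold cross-validation for each configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```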
no code implementations • 16 Oct 2018 • Jacques Wainer
The paper also examines a selection of newer algorithms within the categories of specialized algorithms, oversampling, and ensemble methods.
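The oversampling category mentioned above is, in its simplest form, just replication of minority-class samples until the classes are balanced; a minimal, library-free sketch of that baseline (not any of the newer algorithms the paper surveys) is below.

```python
import numpy as np

def random_oversample(X, y, random_state=0):
    """Duplicate minority-class rows so every class matches the largest one."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        extra = rng.choice(idx, size=target - count, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.vstack(X_parts), np.concatenate(y_parts)

# Toy imbalanced set: 6 samples of class 0, 2 of class 1.
X = np.arange(16, dtype=float).reshape(8, 2)
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [6 6]
```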
1 code implementation • 25 Sep 2018 • Jacques Wainer, Gavin Cawley
The usual approach is to apply a nested cross-validation procedure: hyperparameter selection is performed in the inner cross-validation, while the outer cross-validation computes an unbiased estimate of the expected accuracy of the algorithm with cross-validation-based hyperparameter tuning.
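A minimal sketch of that nested procedure with scikit-learn: GridSearchCV performs the inner loop (hyperparameter selection) and cross_val_score wraps it in the outer loop, estimating the accuracy of "SVM with tuned hyperparameters". The grid and fold counts are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner loop: select C and gamma by 5-fold cross-validation.
inner = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5,
)

# Outer loop: 5-fold cross-validation of the whole tune-then-train pipeline.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean(), outer_scores.std())
```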
no code implementations • 13 Jun 2016 • Pedro Ribeiro Mendes Júnior, Terrance E. Boult, Jacques Wainer, Anderson Rocha
In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown.
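A minimal sketch of the open-set idea described above: a closed-set classifier is extended with a confidence threshold, and samples whose top predicted probability falls below it are labelled unknown. The model and threshold are illustrative; the open-set classifiers studied in the paper are more sophisticated than this.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def predict_open_set(model, X, threshold=0.7, unknown_label=-1):
    """Predict a known class, or unknown_label when the top probability is low."""
    proba = model.predict_proba(X)
    preds = model.classes_[np.argmax(proba, axis=1)]
    return np.where(proba.max(axis=1) >= threshold, preds, unknown_label)

# Train on two of the three iris classes, then classify all three: samples of
# the held-out class should ideally be rejected as unknown (-1).
X, y = load_iris(return_X_y=True)
known = y != 2
model = LogisticRegression(max_iter=1000).fit(X[known], y[known])
print(predict_open_set(model, X, threshold=0.9))
```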
no code implementations • 2 Jun 2016 • Jacques Wainer
We tested 14 very different classification algorithms (random forest; gradient boosting machines; SVM with linear, polynomial, and RBF kernels; 1-hidden-layer neural nets; extreme learning machines; k-nearest neighbors and a bagging of kNN; naive Bayes; learning vector quantization; elastic-net logistic regression; sparse linear discriminant analysis; and a boosting of linear classifiers) on 115 real-life binary datasets.
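A minimal sketch of this kind of comparison, restricted to three of the fourteen algorithms and a single dataset for brevity; the study itself covered 115 datasets and tuned each algorithm's hyperparameters.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "random forest": RandomForestClassifier(random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "k-nearest neighbors": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

# 10-fold cross-validated accuracy for each classifier on the same dataset.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```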