no code implementations • ICML 2020 • Ferdinando Cicalese, Francisco Sergio de Freitas Filho, Eduardo Laber, Marco Molinaro
We focus on the model of Machine Teaching with a black box learner introduced in [Dasgupta et al., ICML 2019], where the teaching is done interactively without having any knowledge of the Learner's algorithm and class of hypotheses, apart from the fact that it contains the target hypothesis $h^*$.
1 code implementation • 4 Feb 2022 • Sergio Filho, Eduardo Laber, Pedro Lazera, Marco Molinaro
Consider a scenario in which we have a huge labeled dataset ${\cal D}$ and a limited time to train some given learner using ${\cal D}$.
1 code implementation • 29 Dec 2021 • Eduardo Laber, Lucas Murtinho, Felipe Oliveira
A number of recent works have employed decision trees for the construction of explainable partitions that aim to minimize the $k$-means cost function.
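The $k$-means cost that such explainable partitions aim to minimize can be computed directly once a decision tree assigns points to leaves. A minimal sketch (the depth-1 tree, feature index, and threshold below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def kmeans_cost(points, labels):
    """k-means cost: sum of squared distances of each point to its cluster centroid."""
    cost = 0.0
    for c in np.unique(labels):
        cluster = points[labels == c]
        cost += ((cluster - cluster.mean(axis=0)) ** 2).sum()
    return cost

def tree_labels(points, feature, threshold):
    """An 'explainable' partition induced by a single axis-aligned cut
    (a depth-1 decision tree); each leaf is one cluster."""
    return (points[:, feature] <= threshold).astype(int)

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels = tree_labels(points, feature=0, threshold=2.5)
cost = kmeans_cost(points, labels)
```

The cost of the tree-induced partition can then be compared against the cost of an unconstrained (non-explainable) $k$-means clustering to measure the price of explainability.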
no code implementations • 5 Jan 2021 • Eduardo Laber, Lucas Murtinho
For the $k$-means and $k$-medians problems our upper bounds improve those obtained by [Moshkovitz et al.]
1 code implementation • 1 Dec 2019 • Matheus Werner, Eduardo Laber
This distance proved to be quite effective, obtaining state-of-the-art error rates for classification tasks, but it is also impractical for large collections/documents due to its computational complexity.
Ranked #1 on Document Classification on Classic
no code implementations • ICML 2018 • Eduardo Laber, Marco Molinaro, Felipe Mello Pereira
In practice, decision-tree inducers use heuristics for finding splits with small impurity when they consider nominal attributes with a large number of distinct values.
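For a nominal attribute with $k$ distinct values, there are exponentially many ways to split the values into two branches, which is why inducers fall back on heuristics. A minimal sketch of the quantity being minimized, the weighted Gini impurity of a candidate binary split (function names are illustrative):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a multiset of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(values, labels, left_set):
    """Weighted Gini impurity of the binary split that sends the nominal
    values in `left_set` to the left branch and all others to the right."""
    left = [y for v, y in zip(values, labels) if v in left_set]
    right = [y for v, y in zip(values, labels) if v not in left_set]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# A perfectly separating split has impurity 0.
imp = split_impurity(['a', 'a', 'b', 'b'], [0, 0, 1, 1], left_set={'a'})
```

An exhaustive search would evaluate `split_impurity` over all $2^{k-1}-1$ nontrivial subsets of values; heuristics examine far fewer candidates.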
no code implementations • 11 Sep 2013 • Ferdinando Cicalese, Eduardo Laber, Aline Medeiros Saettler
In several applications of automatic diagnosis and active learning, a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function.
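The evaluation problem described above can be sketched with a brute-force check of whether the queried values already pin down the function (a toy illustration of the problem, not the paper's algorithm; all names are hypothetical):

```python
from itertools import product

def possible_outputs(f, n, known):
    """All values f can take over completions of the unread variables."""
    outs = set()
    for bits in product((0, 1), repeat=n):
        if all(bits[j] == v for j, v in known.items()):
            outs.add(f(bits))
    return outs

def adaptive_evaluate(f, n, query):
    """Query variables one at a time; stop as soon as the values read
    uniquely determine f. Returns (value, number of queries used)."""
    known = {}
    for i in range(n):
        known[i] = query(i)
        outs = possible_outputs(f, n, known)
        if len(outs) == 1:
            return outs.pop(), len(known)

# Example: OR of 3 variables on the assignment (1, 0, 0) is determined
# after reading the first variable alone.
f = lambda bits: int(any(bits))
value, queries = adaptive_evaluate(f, 3, lambda i: (1, 0, 0)[i])
```

The algorithmic question studied in this line of work is choosing the query order so as to minimize the (worst-case or expected) number of variables read.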