Such applications demand prediction models with low storage and computational cost that do not significantly compromise accuracy.
Taken together, our work represents a new way to efficiently learn state-of-the-art, task-independent representations in complex networks.
It provides native Python implementations of popular multi-label classification methods, alongside a novel framework for label space partitioning.
Further analysis of experimental results demonstrates that the proposed methods not only capture the correlations between labels, but also select the most informative words automatically when predicting different labels.
These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks.
We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities.
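As a sketch of how such an activation can produce exactly sparse probabilities, the following is a minimal NumPy implementation of the standard sparsemax formulation (a Euclidean projection onto the probability simplex, computed in closed form after sorting); the function name and interface here are illustrative, not the authors' released code.

```python
import numpy as np

def sparsemax(z):
    """Map a score vector z to a sparse probability distribution.

    Computes the Euclidean projection of z onto the probability
    simplex: p = argmin ||p - z||^2 subject to p >= 0, sum(p) = 1.
    Unlike softmax, coordinates far below the threshold tau get
    probability exactly zero.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in decreasing order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    # Support size: largest k with 1 + k * z_(k) > sum of top-k scores
    support = 1 + k * z_sorted > cumsum
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z  # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax([2.0, 1.0, 0.1]))  # one score dominates -> sparse output
print(sparsemax([1.0, 1.0]))       # ties share mass: [0.5, 0.5]
```

The projection keeps only scores above a data-dependent threshold `tau`, so the output sums to one but can assign exact zeros, which is the property that distinguishes sparsemax from softmax.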
The field of medical diagnostics contains a wealth of challenges that closely resemble classical machine learning problems; practical constraints, however, complicate the naive translation of these endpoints into classical architectures.
In this work, we present DiSMEC, a large-scale distributed framework for learning one-versus-rest linear classifiers, coupled with explicit capacity control to limit model size.
Extreme multi-label classification (XMLC) is the problem of tagging an instance with a small subset of relevant labels chosen from an extremely large pool of possible labels.