164 papers with code • 7 benchmarks • 20 datasets
Multi-Label Classification is the supervised learning problem where an instance may be associated with multiple labels. This extends single-label classification (i.e., multi-class or binary classification), in which each instance is associated with exactly one class label.
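As a minimal illustration of the setup (a sketch using scikit-learn's generic tools, not any specific method listed on this page), the target is a binary indicator matrix rather than a single class per row; a simple baseline is binary relevance, which trains one binary classifier per label:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

# Toy multi-label dataset: each row of y is a binary indicator
# vector, so one instance can carry several labels at once.
X, y = make_multilabel_classification(
    n_samples=100, n_features=10, n_classes=4, random_state=0
)

# Binary relevance baseline: fit one binary classifier per label.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, y)

preds = clf.predict(X[:2])
print(preds.shape)  # (2, 4): one 0/1 prediction per label
```

Binary relevance ignores correlations between labels; many of the methods referenced on this page exist precisely to model those dependencies.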
In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy while retaining their GPU training and inference efficiency.
Ranked #4 on Fine-Grained Image Classification on Oxford 102 Flowers (using extra training data)
In this work we present Ludwig, a flexible, extensible, and easy-to-use toolbox which allows users to train deep learning models and use them for obtaining predictions without writing code.
Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
Ranked #3 on Malware Detection on Android Malware Dataset
In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair.
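The label-space construction behind this conversion can be sketched roughly as follows (a hypothetical illustration, not the paper's actual pipeline): each (aspect, sentiment) pair becomes one label, so a sentence mentioning several aspects yields a multi-label target vector.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical example: treat each (aspect, sentiment) tuple as a
# single label; a sentence covering two aspects is then multi-label.
sentences = [
    [("food", "positive"), ("service", "negative")],
    [("price", "positive")],
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(sentences)
print(mlb.classes_)  # each class is one (aspect, sentiment) pair
print(Y)             # binary indicator matrix, one column per pair
```

A multi-label classifier trained on `Y` then predicts aspect-sentiment pairs jointly rather than in two separate stages.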
Such applications demand prediction models with small storage and computational complexity that do not compromise significantly on accuracy.
It provides native Python implementations of popular multi-label classification methods alongside a novel framework for label space partitioning and division.
These algorithms are not directly applicable to large-scale learning problems since they scale poorly with the dimensionality of the gradients and the number of tasks.
Ranked #1 on Multi-Task Learning on CelebA
The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate any naive translation of these endpoints into classical architectures.