Imbalanced Classification
32 papers with code • 0 benchmarks • 2 datasets
Learning a classifier from class-imbalanced data, i.e., data in which some classes are far rarer than others.
Benchmarks
These leaderboards are used to track progress in imbalanced classification; no benchmarks are currently listed for this task.
Most implemented papers
MoleculeNet: A Benchmark for Molecular Machine Learning
However, algorithmic progress has been limited by the lack of a standard benchmark for comparing the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it hard to gauge their quality.
Deep Reinforcement Learning for Imbalanced Classification
The agent ultimately learns an optimal classification policy on imbalanced data under the guidance of a task-specific reward function and a beneficial learning environment.
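The heart of such an approach is the reward design. Below is a minimal sketch, assuming the common scheme in which feedback on the abundant class is scaled down relative to the rare class; the function name `reward` and the scaling constant `lam` are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

# Imbalance-aware reward: full-strength feedback on the rare (minority) class,
# damped feedback on the abundant (majority) class.
def reward(y_true: int, y_pred: int, minority_label: int, lam: float) -> float:
    correct = (y_true == y_pred)
    if y_true == minority_label:
        return 1.0 if correct else -1.0   # rare class: reward/penalty of 1
    return lam if correct else -lam       # abundant class: scaled by lam

# Example: with a 1:100 imbalance ratio, lam = 0.01, so one minority sample
# contributes as much reward signal as one hundred majority samples.
print(reward(y_true=1, y_pred=1, minority_label=1, lam=0.01))  # 1.0
print(reward(y_true=0, y_pred=1, minority_label=1, lam=0.01))  # -0.01
```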
MNIST-MIX: A Multi-language Handwritten Digit Recognition Dataset
In this letter, we contribute a multi-language handwritten digit recognition dataset named MNIST-MIX, which is the largest dataset of the same type in terms of both languages and data samples.
Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
We take a dynamic view of the training data and provide a principled analysis of model bias and variance as the training data fluctuates: existing long-tail classifiers invariably increase the model variance, and the head-tail bias gap remains large because the tail suffers more, and larger, confusion with hard negatives.
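To make the expert idea concrete, here is a minimal PyTorch sketch of a shared backbone feeding several independent classifier heads whose logits are averaged; the `MultiExpert` class, its dimensions, and the plain averaging are assumptions for illustration, and the paper's dynamic routing and diversity objectives are omitted.

```python
import torch
import torch.nn as nn

class MultiExpert(nn.Module):
    def __init__(self, in_dim: int, num_classes: int, num_experts: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.experts = nn.ModuleList(
            [nn.Linear(128, num_classes) for _ in range(num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        # Average the per-expert logits; the paper additionally routes "hard"
        # samples through extra experts at inference time.
        return torch.stack([e(h) for e in self.experts]).mean(dim=0)

logits = MultiExpert(in_dim=16, num_classes=10)(torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 10])
```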
MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler
This makes MESA generally applicable to most existing learning models, and the meta-sampler can be efficiently transferred to new tasks.
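A minimal sketch of that outer ensemble loop, assuming the learned meta-sampler is replaced by random balanced under-sampling as a placeholder (the helper names `ensemble_with_sampler` and `predict_proba` are hypothetical); in MESA the subset selection is itself meta-learned, which is what lets it transfer across tasks and base models.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def ensemble_with_sampler(X, y, base_model, n_members=10, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    members = []
    for _ in range(n_members):
        # Placeholder sampler: draw a balanced subset of the majority class.
        sampled_neg = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, sampled_neg])
        members.append(clone(base_model).fit(X[idx], y[idx]))
    return members

def predict_proba(members, X):
    # Average minority-class probabilities over the ensemble members.
    return np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)

X = np.random.randn(1000, 5)
y = (np.random.rand(1000) < 0.05).astype(int)  # ~5% minority class
members = ensemble_with_sampler(X, y, DecisionTreeClassifier(max_depth=3))
print(predict_proba(members, X[:3]))
```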
A Large-Scale Database for Graph Representation Learning
With the rapid emergence of graph representation learning, the construction of new large-scale datasets is necessary to distinguish model capabilities and accurately assess the strengths and weaknesses of each technique.
Well-classified Examples are Underestimated in Classification with Deep Neural Networks
The conventional wisdom in training deep classification models is to focus on badly classified examples and ignore well-classified examples that are far from the decision boundary.
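As an illustration of rewarding, rather than ignoring, well-classified examples, here is a sketch of a cross-entropy variant with an additive bonus that grows with the predicted probability of the true class; the linear bonus `alpha * p_true` is an illustrative stand-in, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def encouraged_cross_entropy(logits, targets, alpha=0.5):
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Probability assigned to the true class for each example.
    p_true = logits.softmax(dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    # Subtracting a bonus that grows with p_true keeps gradient signal alive
    # for examples far from the decision boundary instead of letting it vanish.
    return (ce - alpha * p_true).mean()

logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
loss = encouraged_cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```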
Box Drawings for Learning with Imbalanced Data
The vast majority of real-world classification problems are imbalanced, meaning there are far fewer data from the class of interest (the positive class) than from other classes.
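A minimal sketch of the underlying idea of classifying with an axis-parallel box fit around the rare positive class; the one-box quantile fit below is purely illustrative, whereas the paper learns compact sets of boxes with exact and approximate optimization.

```python
import numpy as np

class BoxClassifier:
    def fit(self, X_pos: np.ndarray, q: float = 0.05):
        # Trim extreme positives so the box stays tight and interpretable.
        self.lo = np.quantile(X_pos, q, axis=0)
        self.hi = np.quantile(X_pos, 1 - q, axis=0)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        inside = np.all((X >= self.lo) & (X <= self.hi), axis=1)
        return inside.astype(int)  # 1 = predicted positive

X_pos = np.random.randn(50, 2) + 3.0           # rare class: shifted cluster
X_all = np.vstack([np.random.randn(500, 2), X_pos])
print(BoxClassifier().fit(X_pos).predict(X_all).mean())
```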
Boosting with Lexicographic Programming: Addressing Class Imbalance without Cost Tuning
We then demonstrate how this insight can be used to attain a good compromise between the rare and abundant classes without having to resort to cost set tuning, which has long been the norm for imbalanced classification.
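For contrast, here is a sketch of the cost-set tuning that the paper sets out to avoid: grid-searching the misclassification cost of the rare class; the cost grid and F1 scoring are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# A ~95/5 imbalanced binary problem.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

# Conventional cost tuning: search over the rare class's misclassification cost.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"class_weight": [{0: 1, 1: c} for c in (1, 5, 10, 50, 100)]},
    scoring="f1",  # accuracy would be misleading under class imbalance
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```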
CUSBoost: Cluster-based Under-sampling with Boosting for Imbalanced Classification
We evaluated the performance of the CUSBoost algorithm against state-of-the-art ensemble methods such as AdaBoost, RUSBoost, and SMOTEBoost on 13 imbalanced binary and multi-class datasets with various imbalance ratios.
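A minimal sketch of the cluster-based under-sampling idea combined with AdaBoost, assuming a binary task; the cluster count, per-cluster quota, and one-shot (rather than per-round) re-sampling are simplifications for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier

def cluster_undersample(X_maj, n_keep, n_clusters=5, seed=0):
    # Cluster the majority class, then sample from every cluster so the
    # retained subset preserves the majority class's structure.
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(X_maj)
    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        quota = max(1, int(round(n_keep * len(idx) / len(X_maj))))
        keep.append(rng.choice(idx, size=min(quota, len(idx)), replace=False))
    return np.concatenate(keep)

X = np.random.randn(1000, 4)
y = (np.random.rand(1000) < 0.1).astype(int)   # ~10% minority class
maj, mino = np.where(y == 0)[0], np.where(y == 1)[0]
keep = maj[cluster_undersample(X[maj], n_keep=len(mino))]
idx = np.concatenate([keep, mino])             # balanced training subset
clf = AdaBoostClassifier(n_estimators=100).fit(X[idx], y[idx])
print(clf.score(X, y))
```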