Imbalanced Classification
62 papers with code • 0 benchmarks • 7 datasets
Learning a classifier from class-imbalanced data.
Benchmarks
These leaderboards are used to track progress in imbalanced classification.
Datasets
Most implemented papers
CUSBoost: Cluster-based Under-sampling with Boosting for Imbalanced Classification
We evaluated the performance of the CUSBoost algorithm against state-of-the-art ensemble learning methods such as AdaBoost, RUSBoost, and SMOTEBoost on 13 imbalanced binary and multi-class datasets with various imbalance ratios.
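A minimal sketch of CUSBoost's data-level idea, as I read the title: cluster the majority class, then undersample each cluster so the retained majority examples cover its full distribution before boosting. The helper below is illustrative, not the authors' code; k-means and the sampling sizes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_maj, n_clusters=5, per_cluster=10, seed=0):
    """Return indices of majority-class samples kept after cluster-based undersampling."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_maj)
    keep = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        take = min(per_cluster, idx.size)
        # draw without replacement inside each cluster
        keep.append(rng.choice(idx, size=take, replace=False))
    return np.concatenate(keep)

rng = np.random.default_rng(1)
X_maj = rng.normal(size=(500, 2))               # stand-in majority class
idx = cluster_undersample(X_maj, n_clusters=5, per_cluster=10)
X_balanced_maj = X_maj[idx]                     # 50 representative majority samples
```

The reduced majority set would then be merged with the minority class and fed to a boosting loop (e.g. AdaBoost) at each round.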
Imbalance-XGBoost: Leveraging Weighted and Focal Losses for Binary Label-Imbalanced Classification with XGBoost
The paper presents Imbalance-XGBoost, a Python package that combines the powerful XGBoost software with weighted and focal losses to tackle binary label-imbalanced classification tasks.
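One of the two losses the abstract names is a weighted cross-entropy. Below is a numpy sketch of the gradient/Hessian pair such a loss contributes through XGBoost's custom-objective interface; the names and the `alpha` up-weighting of the positive (minority) class are illustrative, not the package's actual API.

```python
import numpy as np

def weighted_logloss(alpha):
    """Weighted binary cross-entropy as an XGBoost-style (grad, hess) objective."""
    def obj(preds, y):                       # preds are raw margins
        p = 1.0 / (1.0 + np.exp(-preds))     # sigmoid
        w = np.where(y == 1, alpha, 1.0)     # per-sample class weight
        grad = w * (p - y)                   # d(loss)/d(margin)
        hess = w * p * (1.0 - p)             # d^2(loss)/d(margin)^2
        return grad, hess
    return obj

obj = weighted_logloss(alpha=4.0)
grad, hess = obj(np.zeros(4), np.array([1, 0, 1, 0]))
# at margin 0 (p = 0.5), positives get 4x the gradient magnitude of negatives
```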
Self-paced Ensemble for Highly Imbalanced Massive Data Classification
To tackle this problem, we conduct deep investigations into the nature of class imbalance, which reveals that not only the disproportion between classes, but also other difficulties embedded in the nature of data, especially, noises and class overlapping, prevent us from learning effective classifiers.
Self-supervised Label Augmentation via Input Transformations
Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment the original labels via self-supervision of input transformations.
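A minimal sketch of what a joint label space over original and self-supervised labels can look like: with C original classes and M input transformations (e.g. four rotations), each (class, transform) pair maps to one of C*M joint labels. The indexing scheme here is an assumption, not the paper's code.

```python
import numpy as np

def joint_labels(y, t, n_transforms=4):
    """Map (original label y, transform index t) to a single joint label."""
    return y * n_transforms + t

y = np.array([0, 1, 2])          # original class labels
t = np.array([0, 3, 1])          # rotation index applied to each input
jl = joint_labels(y, t)          # -> [0, 7, 9]
```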
M2m: Imbalanced Classification via Major-to-minor Translation
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.
Class-Weighted Classification: Trade-offs and Robust Approaches
We define a robust risk that minimizes risk over a set of weightings and show excess risk bounds for this problem.
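A toy illustration of the robust-risk idea (my reading, not the paper's notation): when the weightings range over the full probability simplex, the worst-case weighted risk of a classifier reduces to its maximum per-class risk, so a trivial majority-class predictor that looks accurate is maximally bad under the robust criterion.

```python
import numpy as np

def per_class_risks(y_true, y_pred):
    """Error rate of y_pred restricted to each true class."""
    classes = np.unique(y_true)
    return np.array([np.mean(y_pred[y_true == c] != c) for c in classes])

def robust_risk(y_true, y_pred):
    # max over convex weightings w of sum_c w_c * risk_c  ==  max_c risk_c
    return per_class_risks(y_true, y_pred).max()

y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)          # always predicts the majority class
risks = per_class_risks(y_true, y_pred)    # [0.0, 1.0]: perfect on class 0, useless on class 1
# 90% accuracy overall, yet the robust risk is 1.0
```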
Hashing-Based Undersampling Ensemble for Imbalanced Pattern Classification Problems
Samples in the majority class are divided into many subspaces by a hashing method.
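A sketch of the hashing step, using random-projection (sign) hashing as a stand-in for the paper's hashing method: each majority-class sample receives a short binary code from the signs of a few random projections, and the codes define the subspaces that are then undersampled.

```python
import numpy as np

def hash_buckets(X, n_bits=3, seed=0):
    """Assign each sample a bucket id in [0, 2**n_bits) via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_bits))
    bits = (X @ planes > 0).astype(int)           # sign pattern per sample
    return bits @ (1 << np.arange(n_bits))        # binary code -> bucket id

X_maj = np.random.default_rng(2).normal(size=(200, 5))
buckets = hash_buckets(X_maj)                     # bucket ids in [0, 8)
```

Drawing an equal number of samples from each bucket would then yield a majority subset that preserves coverage of the feature space.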
Meta-Learning for One-Class Classification with Few Examples using Order-Equivariant Network
This paper presents a meta-learning framework for few-shot One-Class Classification (OCC) at test time, a setting where labeled examples are available only for the positive class and no supervision is given for the negative class.
Follow the bisector: a simple method for multi-objective optimization
This descent direction is based on the normalized gradients of the individual losses.
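A minimal sketch of a bisector direction built from normalized gradients, which is my reading of the abstract (not the authors' exact update rule): normalize each loss gradient, then descend along their sum, so that no objective dominates the step purely through its gradient's magnitude.

```python
import numpy as np

def bisector_direction(grads, eps=1e-12):
    """Unit vector along the sum of the normalized per-loss gradients."""
    unit = [g / (np.linalg.norm(g) + eps) for g in grads]
    d = np.sum(unit, axis=0)
    return d / (np.linalg.norm(d) + eps)

g1 = np.array([10.0, 0.0])        # large-magnitude gradient of loss 1
g2 = np.array([0.0, 0.1])         # small-magnitude gradient of loss 2
d = bisector_direction([g1, g2])  # 45-degree bisector, magnitudes ignored
```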
Variational Disentanglement for Rare Event Modeling
The increasing availability and abundance of healthcare data, combined with current advances in machine learning methods, have created renewed opportunities to improve clinical decision support systems.