Deep Generative Mixture Model for Robust Imbalance Classification

Discovering hidden patterns in imbalanced data is a critical issue in many real-world applications. Existing classification methods usually suffer from limited data, especially for minority classes, which results in unstable predictions and low performance. In this paper, a deep generative classifier is proposed to mitigate this issue via both model perturbation and data perturbation. Specifically, the proposed generative classifier is derived from a deep latent variable model involving two variables. The first variable captures the essential information of the original data, denoted as latent codes, which are represented by a probability distribution rather than a single fixed value. The learnt distribution encodes model uncertainty and implements model perturbation, thus leading to stable predictions. The second variable is a prior on the latent codes that restricts them to lie on the components of a Gaussian mixture model. As a confounder affecting the generative processes of the data (features/labels), this latent variable is expected to capture the discriminative latent distribution and implement data perturbation. Extensive experiments have been conducted on widely used real-world imbalanced image datasets. Experimental results demonstrate the superiority of the proposed model over popular imbalanced classification baselines on the imbalanced classification task.
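The abstract does not include an implementation, but the architecture it describes (an encoder that outputs a latent distribution, a learnable Gaussian-mixture prior over the latent codes, and a decoder/classifier pair that share those codes) can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch based only on the abstract; the class, method, and parameter names (e.g. `GMMLatentClassifier`, `n_components`) are illustrative assumptions, not the authors' code.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class GMMLatentClassifier(nn.Module):
    """Hypothetical sketch: a latent-variable classifier whose latent codes are
    distributions (model perturbation) constrained by a learnable Gaussian
    mixture prior (data perturbation). Names and sizes are illustrative."""

    def __init__(self, in_dim, latent_dim, n_components, n_classes):
        super().__init__()
        # Encoder maps an input x to the parameters of q(z|x): the latent code
        # is a Gaussian distribution rather than a single fixed vector.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, latent_dim)
        self.logvar_head = nn.Linear(256, latent_dim)
        # Learnable Gaussian-mixture prior p(z): component means,
        # log-variances, and mixture logits.
        self.prior_mu = nn.Parameter(torch.randn(n_components, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_components, latent_dim))
        self.prior_logits = nn.Parameter(torch.zeros(n_components))
        # Decoder reconstructs x from z; classifier predicts the label from z.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterised sampling: each forward pass perturbs the latent code.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), self.classifier(z), mu, logvar, z

    def gmm_log_prior(self, z):
        # log p(z) under the mixture prior, used to pull latent codes toward
        # the mixture components during training.
        z = z.unsqueeze(1)                               # (B, 1, D)
        mu = self.prior_mu.unsqueeze(0)                  # (1, K, D)
        logvar = self.prior_logvar.unsqueeze(0)          # (1, K, D)
        log_comp = (-0.5 * ((z - mu) ** 2 / logvar.exp()
                            + logvar + math.log(2 * math.pi))).sum(-1)
        log_weights = F.log_softmax(self.prior_logits, dim=0)
        return torch.logsumexp(log_weights + log_comp, dim=1)  # (B,)
```

Under this reading, training would combine a reconstruction loss on the decoder output, a classification loss on the predicted labels, and a regularisation term that matches the encoder's latent distribution q(z|x) against the Gaussian-mixture prior (e.g. via `gmm_log_prior`); the exact objective used in the paper is not specified in the abstract.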
