Superclass-Conditional Gaussian Mixture Model for Learning Fine-Grained Embeddings

Learning fine-grained embeddings is essential for extending the generalizability of models pretrained on "coarsely" annotated labels (e.g., animals). This is crucial in fields such as medicine, where fine-grained labeling (e.g., breeds) requires strong domain expertise and is therefore prohibitively expensive, yet predicting fine-grained labels is desirable. This dilemma necessitates adapting a "coarsely" pretrained model to new tasks given only a few unseen "finer-grained" training labels. However, pretraining with only coarse supervision tends to suppress intra-class variation, which is indispensable for cross-granularity adaptation. In this paper, we develop a training framework built on a novel superclass-conditional Gaussian mixture model (SCGM). SCGM imitates the generative process of samples from a hierarchy of classes through latent-variable modeling of the superclass-subclass relationships. The framework is agnostic to the choice of encoder and adds only a few distribution-related parameters, making it efficient and flexible across domains. The model parameters are learned end-to-end by maximum-likelihood estimation via an Expectation-Maximization algorithm in a principled manner. Extensive experimental results on benchmark datasets and a real-life medical dataset demonstrate the effectiveness of our method.
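To make the generative formulation concrete, below is a minimal NumPy sketch (not the authors' code) of one EM iteration for a superclass-conditional Gaussian mixture over fixed embeddings: the E-step computes subclass responsibilities restricted to each sample's observed superclass, and the M-step re-estimates the subclass means and mixing weights by maximum likelihood. The isotropic shared variance, the fixed number of subclasses K per superclass, and all names (em_step, log_gauss, sigma2) are our assumptions, not the paper's.

```python
# Minimal sketch (assumptions labeled above): one EM iteration for a
# superclass-conditional Gaussian mixture over fixed embeddings.
import numpy as np

def log_gauss(x, mu, sigma2):
    """Log-density of isotropic Gaussians N(mu_k, sigma2 * I) at rows of x.
    x: (N, d), mu: (K, d) -> returns (N, K)."""
    d = x.shape[1]
    sq = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # squared distances (N, K)
    return -0.5 * (sq / sigma2 + d * np.log(2 * np.pi * sigma2))

def em_step(z, y, mu, pi, sigma2):
    """One EM iteration.
    z:  (N, d) embeddings;      y:  (N,) observed superclass labels
    mu: (C, K, d) subclass means; pi: (C, K) mixing weights per superclass
    """
    new_mu, new_pi = np.zeros_like(mu), np.zeros_like(pi)
    for c in range(mu.shape[0]):
        zc = z[y == c]  # embeddings whose superclass is c
        # E-step: responsibilities over the K subclasses of superclass c only
        log_r = np.log(pi[c])[None, :] + log_gauss(zc, mu[c], sigma2)
        log_r -= log_r.max(1, keepdims=True)  # subtract max for numerical stability
        r = np.exp(log_r)
        r /= r.sum(1, keepdims=True)          # (Nc, K), rows sum to 1
        # M-step: closed-form maximum-likelihood updates of means and weights
        nk = r.sum(0) + 1e-8                  # effective count per subclass
        new_mu[c] = (r.T @ zc) / nk[:, None]
        new_pi[c] = nk / nk.sum()
    return new_mu, new_pi
```

In the full framework described by the abstract, such EM updates would be interleaved with gradient updates of the encoder producing z, so the distribution parameters and embeddings are learned end-to-end; this sketch shows only the mixture side of that loop.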
