Zero-Shot Logit Adjustment

25 Apr 2022  ·  Dubing Chen, Yuming Shen, Haofeng Zhang, Philip H. S. Torr ·

Semantic-descriptor-based Generalized Zero-Shot Learning (GZSL) poses challenges in recognizing novel classes in the test phase. The development of generative models enables current GZSL techniques to probe further into the semantic-visual link, culminating in a two-stage form that includes a generator and a classifier. However, existing generation-based methods focus on enhancing the generator's effect while neglecting the improvement of the classifier. In this paper, we first analyze two properties of the generated pseudo unseen samples: bias and homogeneity. Then, we perform variational Bayesian inference to back-derive the evaluation metrics, which reflect the balance of the seen and unseen classes. As a consequence of our derivation, the aforementioned two properties are incorporated into the classifier training as seen-unseen priors via logit adjustment. The Zero-Shot Logit Adjustment further puts semantic-based classifiers into effect in generation-based GZSL. Our experiments demonstrate that the proposed technique achieves state-of-the-art performance when combined with a basic generator, and that it can improve various generative Zero-Shot Learning frameworks. Our code is available at https://github.com/cdb342/IJCAI-2022-ZLA.
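The core idea of training a classifier with logit adjustment can be sketched as follows. This is a generic, minimal illustration in NumPy, not the paper's exact ZLA formulation: the function name `logit_adjusted_loss`, the scaling factor `tau`, and the toy prior vector are all illustrative assumptions. In the paper's setting, the per-class prior would be derived from the seen-unseen bias and homogeneity of the generated pseudo samples rather than from raw class frequencies.

```python
import numpy as np

def logit_adjusted_loss(logits, labels, class_priors, tau=1.0):
    """Cross-entropy with logit adjustment (illustrative sketch).

    Adds tau * log(prior_c) to each class logit before the softmax,
    so that classes with inflated priors (e.g. seen classes, or
    over-represented pseudo unseen samples) must score a larger
    margin to win, rebalancing the classifier at training time.
    """
    adjusted = logits + tau * np.log(class_priors)        # (N, C)
    # Numerically stable log-softmax over the adjusted logits.
    shifted = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the true labels.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With `tau = 0` this reduces to plain cross-entropy; increasing `tau` penalizes high-prior classes more strongly, which is the mechanism the abstract refers to when it folds the seen-unseen priors into classifier training.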


Results from the Paper


 Ranked #1 on Generalized Zero-Shot Learning on AwA2 (Accuracy Unseen metric)

| Task                           | Dataset                          | Model     | Metric Name     | Metric Value | Global Rank |
|--------------------------------|----------------------------------|-----------|-----------------|--------------|-------------|
| Generalized Zero-Shot Learning | aPY                              | WGAN+ZLAP | H               | 46           | # 1         |
| Generalized Zero-Shot Learning | AwA2                             | WGAN+ZLAP | Accuracy Unseen | 65.4         | # 1         |
| Generalized Zero-Shot Learning | AwA2                             | WGAN+ZLAP | Accuracy Seen   | 82.2         | # 1         |
| Generalized Zero-Shot Learning | AwA2                             | WGAN+ZLAP | H               | 72.8         | # 1         |
| Generalized Zero-Shot Learning | Caltech-UCSD Birds 200 - 2011    | WGAN+ZLAP | H               | 68.7         | # 1         |
| Generalized Zero-Shot Learning | SUN Attribute                    | WGAN+ZLAP | H               | 43.2         | # 1         |
