AVGZSLNet: Audio-Visual Generalized Zero-Shot Learning by Reconstructing Label Features from Multi-Modal Embeddings

27 May 2020 · Pratik Mazumder, Pravendra Singh, Kranti Kumar Parida, Vinay P. Namboodiri

In this paper, we propose a novel approach for generalized zero-shot learning in a multi-modal setting, where novel classes of audio/video appear during testing that are not seen during training. We use the semantic relatedness of text embeddings for zero-shot learning by aligning audio and video embeddings with the corresponding class-label text feature space. Our approach combines a cross-modal decoder with a composite triplet loss. The cross-modal decoder enforces the constraint that class-label text features can be reconstructed from the audio and video embeddings of data points, which pulls the audio and video embeddings closer to the class-label text embedding. The composite triplet loss operates on the audio, video, and text embeddings: it pulls embeddings from the same class together and pushes embeddings from different classes apart in the multi-modal setting, improving performance on the multi-modal zero-shot learning task. Importantly, our multi-modal zero-shot learning approach works even if a modality is missing at test time. We evaluate our approach on generalized zero-shot classification and retrieval, and show that it outperforms other models both when a single modality is present and when multiple modalities are present. We validate our approach through comparisons with previous approaches and various ablations.
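As a rough illustration of the two training signals described in the abstract, the sketch below (PyTorch-style, not the authors' released implementation) shows a cross-modal decoder that reconstructs class-label text embeddings from audio/video embeddings, and a composite triplet loss over audio, video, and text embeddings. The layer sizes, margin, anchor/positive/negative pairings, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of the two losses described above (illustrative assumptions,
# not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalDecoder(nn.Module):
    """Decodes an audio or video embedding back into the class-label
    text-embedding space."""

    def __init__(self, emb_dim=300, hidden_dim=512):  # 300-d text features assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, emb_dim),
        )

    def forward(self, x):
        return self.net(x)


def reconstruction_loss(decoder, audio_emb, video_emb, text_emb):
    """Cross-modal decoder constraint: class-label text features should be
    recoverable from both the audio and the video embeddings."""
    return F.mse_loss(decoder(audio_emb), text_emb) + F.mse_loss(decoder(video_emb), text_emb)


def composite_triplet_loss(audio_emb, video_emb, text_emb, labels, margin=1.0):
    """Triplet losses across modality pairs: same-class embeddings are pulled
    together, different-class embeddings are pushed apart. Positives are the
    index-aligned embedding from the other modality (same class); negatives are
    drawn by shuffling the batch and keeping only different-class pairs."""
    perm = torch.randperm(labels.size(0), device=labels.device)
    valid_neg = (labels != labels[perm]).float()
    loss = torch.zeros((), device=labels.device)
    for anchor, positive in [(audio_emb, text_emb), (video_emb, text_emb),
                             (audio_emb, video_emb), (video_emb, audio_emb)]:
        negative = positive[perm]
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        loss = loss + (valid_neg * F.relu(d_pos - d_neg + margin)).mean()
    return loss


if __name__ == "__main__":
    # Toy usage with random embeddings assumed to lie in a shared 300-d space.
    B, D = 8, 300
    audio_emb, video_emb, text_emb = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
    labels = torch.randint(0, 4, (B,))
    decoder = CrossModalDecoder(emb_dim=D)
    total = reconstruction_loss(decoder, audio_emb, video_emb, text_emb) \
            + composite_triplet_loss(audio_emb, video_emb, text_emb, labels)
    print(float(total))
```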


Datasets

ActivityNet-GZSL · UCF-GZSL · VGGSound-GZSL
Task | Dataset | Model | Metric Name | Metric Value | Global Rank
GZSL Video Classification | ActivityNet-GZSL (main) | AVGZSLNet | HM | 6.44 | #6
GZSL Video Classification | ActivityNet-GZSL (main) | AVGZSLNet | ZSL | 5.40 | #7
GZSL Video Classification | UCF-GZSL (main) | AVGZSLNet | HM | 18.05 | #6
GZSL Video Classification | UCF-GZSL (main) | AVGZSLNet | ZSL | 13.65 | #6
GZSL Video Classification | VGGSound-GZSL (main) | AVGZSLNet | HM | 5.83 | #6
GZSL Video Classification | VGGSound-GZSL (main) | AVGZSLNet | ZSL | 5.28 | #5
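
For reference, HM in the table above follows the standard generalized zero-shot convention of reporting the harmonic mean of the seen- and unseen-class accuracies, while ZSL reports unseen-class accuracy alone. A minimal helper for the harmonic mean is sketched below (the function name is ours, not from the benchmark code).

```python
def harmonic_mean(seen_acc: float, unseen_acc: float) -> float:
    """Harmonic mean of seen- and unseen-class accuracies (the GZSL 'HM' metric)."""
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)
```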

Methods