Generalized Zero-Shot Learning

55 papers with code • 12 benchmarks • 10 datasets

In generalized zero-shot learning (GZSL), the set of classes is split into seen and unseen classes. Training relies on the semantic features of both the seen and unseen classes, but on the visual representations of only the seen classes; at test time, the model must classify visual representations drawn from both the seen and unseen classes.
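The seen/unseen data split described above can be sketched as follows. This is a minimal illustration with hypothetical dimensions and random placeholder data, not any particular method: semantic features (e.g. attribute vectors) exist for every class, while training-time visual features exist only for the seen classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 classes, the first 7 seen, the rest unseen.
n_classes, n_seen, feat_dim, sem_dim = 10, 7, 64, 16
seen = np.arange(n_seen)                # classes 0..6
unseen = np.arange(n_seen, n_classes)   # classes 7..9

# Semantic features are available for ALL classes, seen and unseen.
semantics = rng.normal(size=(n_classes, sem_dim))

# Visual representations are available at training time only for seen classes.
train_labels = rng.choice(seen, size=200)
train_visual = rng.normal(size=(200, feat_dim))

# At test time, images may come from ANY class, seen or unseen.
test_labels = rng.choice(n_classes, size=50)
test_visual = rng.normal(size=(50, feat_dim))

assert set(train_labels) <= set(seen)   # training never sees unseen-class images
assert semantics.shape[0] == n_classes  # but semantics cover every class
```

The asymmetry in the asserts is the crux of GZSL: the only bridge from seen to unseen classes is the shared semantic space.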

Most implemented papers

Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance

ramprs/neuron-importance-zsl ECCV 2018

Our approach, which we call Neuron Importance-Aware Weight Transfer (NIWT), learns to map domain knowledge about novel "unseen" classes onto this dictionary of learned concepts and then optimizes for network parameters that can effectively combine these concepts - essentially learning classifiers by discovering and composing learned semantic concepts in deep networks.

Generative Dual Adversarial Network for Generalized Zero-shot Learning

stevehuanghe/GDAN CVPR 2019

Most previous models try to learn a fixed one-directional mapping between visual and semantic space, while some recently proposed generative methods try to generate image features for unseen classes so that the zero-shot learning problem becomes a traditional fully-supervised classification problem.

Unifying Unsupervised Domain Adaptation and Zero-Shot Visual Recognition

hellowangqian/domain-adaptation-capls 25 Mar 2019

Unsupervised domain adaptation aims to transfer knowledge from a source domain to a target domain so that the target domain data can be recognized without any explicit labelling information for this domain.

Leveraging the Invariant Side of Generative Zero-Shot Learning

lijin118/LisGAN CVPR 2019

In this paper, we take advantage of generative adversarial networks (GANs) and propose a novel method, named leveraging invariant side GAN (LisGAN), which can directly generate unseen features from random noise conditioned on the semantic descriptions.
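The generative recipe shared by LisGAN and the other feature-synthesis methods on this page can be sketched in a few lines. This is not the LisGAN architecture: the "generator" below is an untrained random linear map standing in for a GAN generator, and all names and dimensions are illustrative. The point is the interface: noise plus a semantic vector in, a synthetic visual feature out, after which any ordinary supervised classifier can be trained on the synthesized unseen-class features.

```python
import numpy as np

rng = np.random.default_rng(1)
noise_dim, sem_dim, feat_dim = 8, 16, 64
n_unseen = 3

# Hypothetical semantic descriptions for the unseen classes.
unseen_semantics = rng.normal(size=(n_unseen, sem_dim))

# Stand-in "generator": a random linear map from [noise; semantics] to
# visual-feature space. A real model would train this on seen-class data.
W = rng.normal(size=(noise_dim + sem_dim, feat_dim)) * 0.1

def generate(sem, n_samples):
    z = rng.normal(size=(n_samples, noise_dim))          # random noise
    cond = np.hstack([z, np.tile(sem, (n_samples, 1))])  # condition on semantics
    return cond @ W

# Synthesize features for each unseen class, then fit any supervised
# classifier on them (here: class means for a nearest-mean classifier).
synthetic = {c: generate(unseen_semantics[c], 100) for c in range(n_unseen)}
class_means = np.stack([synthetic[c].mean(axis=0) for c in range(n_unseen)])

def classify(x):
    return int(np.argmin(np.linalg.norm(class_means - x, axis=1)))
```

Once features exist for every class, the zero-shot problem reduces to conventional fully-supervised classification, which is exactly the reduction the abstracts above describe.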

Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders

edgarschnfld/CADA-VAE-PyTorch CVPR 2019

Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space.

Zero-shot Word Sense Disambiguation using Sense Definition Embeddings

malllabiisc/EWISE ACL 2019

To overcome this challenge, we propose Extended WSD Incorporating Sense Embeddings (EWISE), a supervised model to perform WSD by predicting over a continuous sense embedding space as opposed to a discrete label space.
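Predicting over a continuous embedding space rather than a discrete label space can be sketched as follows. All embeddings here are random placeholders, not EWISE's trained representations; the sketch only shows why the continuous formulation enables zero-shot prediction: any sense with a definition embedding is a valid output, even if it never appeared in training.

```python
import numpy as np

rng = np.random.default_rng(2)
emb_dim, n_senses = 32, 5

# Hypothetical sense-definition embeddings (one row per candidate sense),
# normalized so dot products are cosine similarities.
sense_embs = rng.normal(size=(n_senses, emb_dim))
sense_embs /= np.linalg.norm(sense_embs, axis=1, keepdims=True)

def predict_sense(predicted_emb):
    """Map the model's continuous output to the nearest sense definition."""
    predicted_emb = predicted_emb / np.linalg.norm(predicted_emb)
    return int(np.argmax(sense_embs @ predicted_emb))
```

A discrete softmax over a fixed label set could never assign an unseen sense; nearest-neighbor lookup in definition space can, as long as the new sense's definition is embedded into the same space.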

A Meta-Learning Framework for Generalized Zero-Shot Learning

vkverma01/meta-gzsl 10 Sep 2019

Our proposed model yields significant improvements on standard ZSL as well as the more challenging GZSL setting.

Alleviating Feature Confusion for Generative Zero-shot Learning

lijin118/AFC-GAN 17 Sep 2019

An inevitable issue of such a paradigm is that the synthesized unseen features are biased toward seen-class references and incapable of reflecting the novelty and diversity of real unseen instances.

Transductive Zero-Shot Learning for 3D Point Cloud Classification

ali-chr/Transductive_ZSL_3D_Point_Cloud 16 Dec 2019

This paper extends, for the first time, transductive Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) approaches to the domain of 3D point cloud classification.

Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification

akshitac8/tfvaegan ECCV 2020

We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification.