Generalized Zero-Shot Learning
55 papers with code • 12 benchmarks • 10 datasets
In generalized zero-shot learning (GZSL), the set of classes is split into seen and unseen classes. Training relies on the semantic features of both seen and unseen classes but on the visual representations of the seen classes only, while testing must recognize visual representations of both seen and unseen classes.
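Because seen classes dominate training, GZSL results are commonly reported as per-class accuracy on seen classes, per-class accuracy on unseen classes, and their harmonic mean, which penalizes models biased toward seen classes. A minimal sketch of that metric (the exact protocol details vary by benchmark):

```python
import numpy as np

def harmonic_mean_accuracy(y_true, y_pred, seen_classes, unseen_classes):
    """Per-class average accuracy on seen and unseen classes, plus their harmonic mean."""
    def per_class_acc(classes):
        # Average the accuracy of each class that actually appears in y_true.
        accs = [np.mean(y_pred[y_true == c] == c) for c in classes if np.any(y_true == c)]
        return float(np.mean(accs))

    acc_seen = per_class_acc(seen_classes)
    acc_unseen = per_class_acc(unseen_classes)
    denom = acc_seen + acc_unseen
    h = 2 * acc_seen * acc_unseen / denom if denom > 0 else 0.0
    return acc_seen, acc_unseen, h
```

A classifier that ignores unseen classes scores zero unseen accuracy and therefore a harmonic mean of zero, regardless of its seen accuracy.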
Most implemented papers
Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance
Our approach, which we call Neuron Importance-Aware Weight Transfer (NIWT), learns to map domain knowledge about novel "unseen" classes onto this dictionary of learned concepts and then optimizes for network parameters that can effectively combine these concepts - essentially learning classifiers by discovering and composing learned semantic concepts in deep networks.
Generative Dual Adversarial Network for Generalized Zero-shot Learning
Most previous models try to learn a fixed one-directional mapping between visual and semantic space, while some recently proposed generative methods try to generate image features for unseen classes so that the zero-shot learning problem becomes a traditional fully-supervised classification problem.
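The generative paradigm mentioned here can be sketched end to end: synthesize features for unseen classes from their semantic descriptions, then train an ordinary classifier over seen (real) plus unseen (synthetic) features. The generator below is a toy stand-in (attribute vector plus Gaussian noise) for the GAN or VAE these papers actually train, and the nearest-centroid classifier is an illustrative choice, not any specific paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_features(attributes, labels, n_per_class, noise=0.1):
    """Toy conditional generator: a synthetic feature for class c is its
    attribute vector plus Gaussian noise (a real method trains a GAN/VAE)."""
    feats, ys = [], []
    for c, a in zip(labels, attributes):
        feats.append(a + noise * rng.standard_normal((n_per_class, len(a))))
        ys.append(np.full(n_per_class, c))
    return np.vstack(feats), np.concatenate(ys)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Fully-supervised stand-in classifier over real + synthetic features."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = ((X_test[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]
```

Once synthetic unseen features exist, seen and unseen classes are treated identically at classification time, which is exactly how these methods turn GZSL into standard supervised learning.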
Unifying Unsupervised Domain Adaptation and Zero-Shot Visual Recognition
Unsupervised domain adaptation aims to transfer knowledge from a source domain to a target domain so that the target domain data can be recognized without any explicit labelling information for this domain.
Leveraging the Invariant Side of Generative Zero-Shot Learning
In this paper, we take advantage of generative adversarial networks (GANs) and propose a novel method, named leveraging invariant side GAN (LisGAN), which directly generates unseen-class features from random noise conditioned on the semantic descriptions.
Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders
Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space.
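The cross-modal mapping these approaches rely on can be illustrated in its simplest form: learn a map from image features into the class-embedding space, then label a test image by its nearest class embedding. A least-squares linear map is used here purely as a sketch (the paper itself uses aligned variational autoencoders, not this):

```python
import numpy as np

def fit_visual_to_semantic(X, S):
    """Least-squares linear map W from image features X (n x d) to class
    embeddings S (n x k), where row i of S is the embedding of sample i's class."""
    W, *_ = np.linalg.lstsq(X, S, rcond=None)
    return W

def predict(X, W, class_embeddings):
    proj = X @ W                                            # map images into semantic space
    dists = ((proj[:, None] - class_embeddings[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)                             # index of nearest class embedding
```

Because classification only compares against embeddings, unseen classes are handled by simply adding their embeddings to `class_embeddings` at test time; the well-known drawback in GZSL is that projections of unseen-class images tend to fall near seen-class embeddings.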
Zero-shot Word Sense Disambiguation using Sense Definition Embeddings
To overcome this challenge, we propose Extended WSD Incorporating Sense Embeddings (EWISE), a supervised model to perform WSD by predicting over a continuous sense embedding space as opposed to a discrete label space.
A Meta-Learning Framework for Generalized Zero-Shot Learning
Our proposed model yields significant improvements in the standard ZSL setting as well as the more challenging GZSL setting.
Alleviating Feature Confusion for Generative Zero-shot Learning
An inevitable issue of such a paradigm is that the synthesized unseen features are biased toward seen-class references and fail to reflect the novelty and diversity of real unseen instances.
Transductive Zero-Shot Learning for 3D Point Cloud Classification
This paper extends, for the first time, transductive Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) approaches to the domain of 3D point cloud classification.
Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification
We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification.