Zero-Shot Learning

561 papers with code • 18 benchmarks • 29 datasets

Zero-shot learning (ZSL) is the task of recognizing classes that were never seen during training: no labeled examples of the target classes are available during the supervised learning phase.

Earlier work in zero-shot learning used attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from image feature space to semantic space. Other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning.
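The mapping-based approach can be sketched on toy data. Everything below (class names, attribute vectors, the simulated image features) is illustrative, not drawn from any benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-class attribute vectors: (striped, feline, large). "zebra" is
# unseen during training but its attributes are known.
attributes = {
    "cat":   np.array([0.0, 1.0, 0.0]),
    "tiger": np.array([1.0, 1.0, 1.0]),
    "horse": np.array([0.0, 0.0, 1.0]),
    "zebra": np.array([1.0, 0.0, 1.0]),  # unseen: a new attribute combination
}

# Hidden map from attribute space to a 5-dim "image feature" space,
# used only to simulate data for this sketch.
M = rng.normal(size=(3, 5))

def sample_features(cls, n):
    return attributes[cls] @ M + 0.1 * rng.normal(size=(n, 5))

# Training data comes from the seen classes only.
seen = ["cat", "tiger", "horse"]
X = np.vstack([sample_features(c, 50) for c in seen])
S = np.vstack([np.tile(attributes[c], (50, 1)) for c in seen])

# Learn a linear map from image features to semantic space (ridge regression).
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ S)

def predict(x):
    """Project a feature vector into attribute space, return the class
    whose attribute vector is most similar (cosine similarity)."""
    s = x @ W
    return max(attributes, key=lambda c: s @ attributes[c]
               / (np.linalg.norm(s) * np.linalg.norm(attributes[c])))

test_img = sample_features("zebra", 1)[0]
print(predict(test_img))  # a zebra is recognized with no zebra training data
```

The unseen class is recoverable here because its attribute vector is a new combination of attributes that each vary among the seen classes; a dimension that is constant across all seen classes could not be predicted by the learned map.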

Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.

(Image credit: Prototypical Networks for Few-Shot Learning in PyTorch)


Most implemented papers

Flamingo: a Visual Language Model for Few-Shot Learning

mlfoundations/open_flamingo DeepMind 2022

Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research.

Improving zero-shot learning by mitigating the hubness problem

facebookresearch/MUSE 20 Dec 2014

The zero-shot paradigm exploits vector-based word representations, extracted from text corpora with unsupervised methods, to learn general mapping functions from other feature spaces onto word space, where the words associated with the nearest neighbours of the mapped vectors are used as their linguistic labels.
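The hubness problem this paper targets arises because, in high-dimensional spaces, a few target vectors ("hubs") end up as the nearest neighbour of many different mapped queries. A minimal sketch of a rank-based correction in the spirit of this line of work (function names and the tie-breaking constant are my own):

```python
import numpy as np

def nn_labels(mapped, targets):
    """Conventional nearest neighbour over cosine similarity (assuming
    rows are L2-normalized): prone to hub targets."""
    sims = mapped @ targets.T
    return sims.argmax(axis=1)

def gc_labels(mapped, targets):
    """Globally corrected retrieval: score each target by the *rank* of
    the query among all mapped vectors competing for that target.
    Hubs are near many queries, so being merely 'close' to a hub is
    penalized relative to being that hub's single best match."""
    sims = mapped @ targets.T                        # (n_queries, n_targets)
    # Double argsort per column: rank of each query for each target,
    # 0 = the query most similar to that target.
    ranks = (-sims).argsort(axis=0).argsort(axis=0)
    # Break rank ties with the raw similarity.
    return (ranks - 1e-6 * sims).argmin(axis=1)
```

On clean, unambiguous data the two retrieval rules agree; they diverge exactly when one target vector dominates many queries' neighbourhoods.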

Learning a Deep Embedding Model for Zero-Shot Learning

lzrobots/DeepEmbeddingModel_ZSL CVPR 2017

In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space.

Semantic Autoencoder for Zero-Shot Learning

mvp18/Popular-ZSL-Algorithms CVPR 2017

We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes.
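A projection with this kind of reconstruction constraint admits a closed-form solution via a Sylvester equation. A minimal sketch assuming SciPy and the objective ||X - W'S||^2 + lam * ||WX - S||^2 (the function name is my own; X holds image features column-wise, S the matching semantic vectors):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sae_projection(X, S, lam=0.2):
    """Semantic-autoencoder-style projection (sketch).

    X : (d, N) image features, S : (k, N) semantic vectors.
    Minimizing ||X - W.T @ S||^2 + lam * ||W @ X - S||^2 over W (k, d)
    and setting the gradient to zero yields the Sylvester equation
        (S S^T) W + W (lam X X^T) = (1 + lam) S X^T,
    solved here in closed form.
    """
    A = S @ S.T
    B = lam * (X @ X.T)
    Q = (1.0 + lam) * (S @ X.T)
    return solve_sylvester(A, B, Q)
```

W maps features into semantic space while W.T must reconstruct the features back, which is the extra constraint the excerpt above credits for better generalisation to unseen classes.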

Feature Generating Networks for Zero-Shot Learning

akku1506/Feature-Generating-Networks-for-ZSL CVPR 2018

Suffering from the extreme training-data imbalance between seen and unseen classes, most existing state-of-the-art approaches fail to achieve satisfactory results on the challenging generalized zero-shot learning task.

Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs

JudyYe/zero-shot-gcn CVPR 2018

Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing a visual category).

Rethinking Knowledge Graph Propagation for Zero-Shot Learning

cyvius96/adgpm CVPR 2019

Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning.
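The building block these graph-based ZSL methods share is a graph-convolution step that propagates semantic embeddings between related classes. A single Kipf-and-Welling-style layer can be sketched as follows (the class graph and dimensions are illustrative, not from either paper):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: average each node's features with its
    neighbours' (self-loops added), then apply a linear map and ReLU.
    A : (n, n) adjacency, H : (n, f) node features, W : (f, f2) weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Illustrative 3-node class graph: "zebra" linked to "horse" and "tiger",
# so the unseen zebra node inherits information from its seen neighbours.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))  # e.g. word embeddings
W = np.random.default_rng(1).normal(size=(4, 4))
H_next = gcn_layer(A, H, W)
```

Stacking such layers lets an unseen class's representation be regressed toward the visual classifiers of its graph neighbours, which is the propagation behaviour the paper above rethinks (arguing, roughly, that too many layers over-smooth the signal).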

Polarity Loss for Zero-shot Object Detection

KennithLi/Awesome-Zero-Shot-Object-Detection 22 Nov 2018

This setting gives rise to the need for correct alignment between visual and semantic concepts, so that the unseen objects can be identified using only their semantic attributes.

75 Languages, 1 Model: Parsing Universal Dependencies Universally

hyperparticle/udify IJCNLP 2019

We present UDify, a multilingual multi-task model capable of accurately predicting universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for all 124 Universal Dependencies treebanks across 75 languages.

Deep Learning Models for Multilingual Hate Speech Detection

punyajoy/DE-LIMIT 14 Apr 2020

Hate speech detection is a challenging problem with most of the datasets available in only one language: English.