Zero-Shot Learning

564 papers with code • 18 benchmarks • 29 datasets

Zero-shot learning (ZSL) is a model's ability to recognize classes it never saw during training: no labeled examples of those classes are available during supervised learning.

Earlier work in zero-shot learning uses attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from image feature space to semantic space, while other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning.
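
As a concrete illustration of the embedding-based approach, here is a minimal sketch that learns a linear map from image-feature space to attribute space and classifies unseen classes by nearest neighbor. The features, attribute vectors, and dimensions are all synthetic placeholders, not drawn from any particular paper or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 seen classes, 3 unseen classes, 6-dim attribute
# vectors (e.g. "has stripes", "has wings", ...), 64-dim image features.
n_feat, n_attr = 64, 6
attrs_seen = rng.random((5, n_attr))    # attribute vectors of seen classes
attrs_unseen = rng.random((3, n_attr))  # attribute vectors of unseen classes

# Synthetic "image features" and labels for seen-class training examples.
y_seen = rng.integers(0, 5, size=200)
X_seen = rng.normal(size=(200, n_feat))

# Learn a linear map from feature space to attribute space by regressing
# each training feature onto its class's attribute vector (least squares).
W, *_ = np.linalg.lstsq(X_seen, attrs_seen[y_seen], rcond=None)

# Zero-shot prediction: map a test feature into attribute space and pick
# the *unseen* class whose attribute vector is nearest (cosine similarity).
def predict_unseen(x):
    a = x @ W
    sims = (attrs_unseen @ a) / (
        np.linalg.norm(attrs_unseen, axis=1) * np.linalg.norm(a) + 1e-8
    )
    return int(np.argmax(sims))

print(predict_unseen(rng.normal(size=n_feat)))
```

On the NLP side, the same principle shows up as, for example, inference-based zero-shot text classification, where candidate label names play the role of the semantic class descriptions.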

Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.

(Image credit: Prototypical Networks for Few shot Learning in PyTorch)

Most implemented papers

Event Extraction by Answering (Almost) Natural Questions

xinyadu/eeqa EMNLP 2020

The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments.
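
A hedged sketch of this QA-style formulation: extract event arguments by posing natural-language questions against the sentence with an off-the-shelf extractive QA model. The questions and the model checkpoint below are illustrative, not the paper's exact templates or trained model:

```python
from transformers import pipeline

# Frame argument extraction as extractive question answering.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

sentence = "Orders went out today to deploy 17,000 US Army soldiers in the Gulf."
for role, question in [
    ("agent", "Who is deployed?"),
    ("destination", "Where are they deployed to?"),
]:
    answer = qa(question=question, context=sentence)
    print(role, "->", answer["answer"], f"(score={answer['score']:.2f})")
```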

Transforming task representations to perform novel tasks

lampinen/HoMM 8 May 2020

We demonstrate the effectiveness of this framework across a wide variety of tasks and computational paradigms, ranging from regression to image classification and reinforcement learning.

Zero-Shot Learning with Common Sense Knowledge Graphs

BatsResearch/zsl-kg 18 Jun 2020

Zero-shot learning relies on semantic class representations such as hand-engineered attributes or learned embeddings to predict classes without any labeled examples.
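
A toy sketch of the graph-derived-representation idea: build a class representation by aggregating neighbor embeddings in a small common-sense graph, then score an image feature against it. The hand-rolled one-step mean aggregation stands in for the paper's learned graph neural network, and the graph, embeddings, and feature are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy common-sense graph over classes/concepts (edges are illustrative).
edges = {
    "zebra": ["horse", "stripes"],
    "horse": ["zebra", "mammal"],
    "stripes": ["zebra"],
    "mammal": ["horse"],
}
emb = {node: rng.normal(size=16) for node in edges}  # initial node embeddings

# One message-passing step: each class representation becomes the mean of
# its own embedding and its neighbors' embeddings.
class_repr = {
    node: np.mean([emb[node]] + [emb[n] for n in edges[node]], axis=0)
    for node in edges
}

# Score an image feature (random here) against graph-derived representations
# of classes that had no training images, e.g. "zebra".
x = rng.normal(size=16)
scores = {c: float(x @ v) for c, v in class_repr.items()}
print(max(scores, key=scores.get))
```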

Class Normalization for (Continual)? Generalized Zero-Shot Learning

universome/czsl 19 Jun 2020

Normalization techniques have proved to be a crucial ingredient of successful training in a traditional supervised learning regime.
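
To see why normalization matters in this setting, here is a small stand-in example (not the paper's exact normalization scheme): without rescaling, classes whose embeddings happen to have large norms dominate the logits, which is especially harmful when unseen-class embeddings come from a different source than seen-class ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class embeddings (e.g. projected attribute vectors) with
# uneven norms, plus one image feature; shapes are illustrative.
class_emb = rng.normal(size=(10, 64)) * rng.random((10, 1)) * 5
x = rng.normal(size=64)

# Without normalization, large-norm class embeddings dominate the logits.
raw_logits = class_emb @ x

# A simple stand-in for class normalization: rescale every class embedding
# to unit norm so no class is favored merely by embedding scale.
normed = class_emb / np.linalg.norm(class_emb, axis=1, keepdims=True)
norm_logits = normed @ x

print(np.argmax(raw_logits), np.argmax(norm_logits))
```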

AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data

stanford-oval/genie-toolkit EMNLP 2020

To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset.

Contrastive Embedding for Generalized Zero-Shot Learning

Hanzy1996/CE-GZSL CVPR 2021

To tackle this issue, we propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework.
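
A schematic sketch of the generate-then-embed pipeline this describes: a generator maps class attributes plus noise to synthetic features for unseen classes, so one classifier can be fit over seen and unseen labels together. The untrained random "generator", the nearest-centroid classifier, and all shapes are illustrative stand-ins, not the paper's architecture or contrastive objective:

```python
import numpy as np

rng = np.random.default_rng(0)

n_attr, n_feat = 6, 32
attrs = rng.random((8, n_attr))            # 8 classes: 0-5 seen, 6-7 unseen
G = rng.normal(size=(n_attr + 4, n_feat))  # stand-in "generator" weights

def generate(class_id, n):
    z = rng.normal(size=(n, 4))                      # noise
    a = np.repeat(attrs[class_id][None], n, axis=0)  # class attributes
    return np.concatenate([a, z], axis=1) @ G        # fake features

# Real features for seen classes (synthetic here) + generated unseen ones.
X = [rng.normal(size=(20, n_feat)) + c for c in range(6)]
y = [np.full(20, c) for c in range(6)]
for c in (6, 7):
    X.append(generate(c, 20))
    y.append(np.full(20, c))
X, y = np.concatenate(X), np.concatenate(y)

# Nearest-centroid classifier over all 8 classes (seen + unseen).
centroids = np.stack([X[y == c].mean(axis=0) for c in range(8)])
pred = np.argmin(np.linalg.norm(X[:5] - centroids[:, None], axis=2), axis=0)
print(pred)
```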

Visually Grounded Reasoning across Languages and Cultures

e-bug/volta EMNLP 2021

The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.

Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm

sense-gvt/declip ICLR 2022

Recently, large-scale Contrastive Language-Image Pre-training (CLIP) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks.
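
For reference, this is what the CLIP-style zero-shot recognition recipe looks like in practice, using OpenAI's public CLIP checkpoint via Hugging Face Transformers (the paper's own DeCLIP weights are not assumed here):

```python
import requests, torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint and its paired preprocessor.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # two cats
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Score the image against free-text label prompts: zero-shot classification.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```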

YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone

coqui-ai/TTS 4 Dec 2021

YourTTS brings the power of a multilingual approach to the task of zero-shot multi-speaker TTS.
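
A short usage sketch with the YourTTS checkpoint shipped in the coqui-ai/TTS library, conditioning on a reference clip of a speaker unseen at training time. The file paths are placeholders, and the model name follows the library's registry at the time of writing:

```python
from TTS.api import TTS

# Zero-shot multi-speaker synthesis: clone an unseen speaker's voice from
# a short reference recording.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")
tts.tts_to_file(
    text="Zero-shot voice cloning from a few seconds of reference audio.",
    speaker_wav="reference_speaker.wav",  # placeholder: unseen speaker clip
    language="en",
    file_path="output.wav",
)
```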

ZeroGen: Efficient Zero-shot Learning via Dataset Generation

HKUNLP/zerogen 16 Feb 2022

Interest in dataset generation has grown recently due to the superior generative capacity of large pre-trained language models (PLMs).
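
A hedged sketch of prompt-based dataset generation in this spirit: ask a pre-trained LM to synthesize labeled examples, which a small task model could then be trained on. The prompts and the GPT-2 checkpoint are illustrative, not the paper's exact configuration:

```python
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

# Generate synthetic labeled examples for a sentiment task via prompting.
dataset = []
for label in ["positive", "negative"]:
    prompt = f'The movie review in {label} sentiment is: "'
    outputs = generator(
        prompt, max_new_tokens=30, do_sample=True, num_return_sequences=3
    )
    for out in outputs:
        # Keep only the generated continuation, up to the closing quote.
        text = out["generated_text"][len(prompt):].split('"')[0].strip()
        dataset.append({"text": text, "label": label})

for ex in dataset:
    print(ex)
```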