Zero-Shot Learning

565 papers with code • 18 benchmarks • 29 datasets

Zero-shot learning (ZSL) is a model's ability to recognize classes it never saw during training: no labeled examples of these classes are available during supervised learning.

Earlier work in zero-shot learning used attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from the image feature space to a semantic space, while other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without any fine-tuning.
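To make the NLP case concrete, here is a minimal sketch of zero-shot text classification with the Hugging Face `transformers` pipeline, which reduces classification to textual entailment using a pre-trained NLI model; the checkpoint and candidate labels below are illustrative choices, not prescriptions:

```python
# Minimal sketch: zero-shot text classification via an off-the-shelf NLI model.
# Requires `pip install transformers torch`; the checkpoint is one common option.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The team clinched the championship with a last-minute goal.",
    candidate_labels=["sports", "politics", "technology"],  # classes unseen in training
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label and its score
```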

Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.

(Image credit: Prototypical Networks for Few-shot Learning in PyTorch)

Further readings:

Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs

jmiemirza/meta-prompting • 18 Mar 2024

Prompt ensembling of Large Language Model (LLM) generated category-specific prompts has emerged as an effective method to enhance the zero-shot recognition ability of Vision-Language Models (VLMs).
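As a rough illustration of prompt ensembling (not the paper's method), the sketch below averages CLIP text embeddings over several hand-written templates per class; the paper instead ensembles LLM-generated, category-specific prompts:

```python
# Sketch of prompt ensembling for CLIP zero-shot recognition: text embeddings
# of several prompts per class are averaged into one classifier weight.
# Templates are hand-written stand-ins for LLM-generated prompts.
# Requires `pip install transformers torch pillow`.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["cat", "dog", "zebra"]
templates = ["a photo of a {}.", "a close-up photo of a {}.", "a drawing of a {}."]

with torch.no_grad():
    weights = []
    for name in classes:
        prompts = [t.format(name) for t in templates]
        inputs = processor(text=prompts, return_tensors="pt", padding=True)
        emb = model.get_text_features(**inputs)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        weights.append(emb.mean(dim=0))          # ensemble: average prompt embeddings
    W = torch.stack(weights)
    W = W / W.norm(dim=-1, keepdim=True)

    image = Image.open("example.jpg")            # any test image
    img_inputs = processor(images=image, return_tensors="pt")
    img_emb = model.get_image_features(**img_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)

    probs = (100.0 * img_emb @ W.T).softmax(dim=-1)
print(dict(zip(classes, probs[0].tolist())))
```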


CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning

YukunLi99/CoLeCLIP • 15 Mar 2024

Large pre-trained VLMs like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in continual learning (CL), but they focus on closed-set CL within a single-domain dataset.


OpenGraph: Open-Vocabulary Hierarchical 3D Graph Representation in Large-Scale Outdoor Environments

bit-dyn/opengraph • 14 Mar 2024

In this work, we propose OpenGraph, the first open-vocabulary hierarchical graph representation designed for large-scale outdoor environments.


MolBind: Multimodal Alignment of Language, Molecules, and Proteins

tengxiao1/molbind • 13 Mar 2024

Recent advancements in biology and chemistry have leveraged multi-modal learning, integrating molecules and their natural language descriptions to enhance drug discovery.


Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation

tl-uestc/unimos • 11 Mar 2024

In this work, we introduce a Unified Modality Separation (UniMoS) framework for unsupervised domain adaptation.


Personalized LoRA for Human-Centered Text Understanding

yoyo-yun/plora • 10 Mar 2024

Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered text understanding (HCTU) is challenging, since user tokens number in the millions in most personalized applications and lack concrete explicit semantics.
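For context, a minimal LoRA layer looks roughly like the following PyTorch sketch; it shows the generic low-rank adapter the paper personalizes per user, and is not the paper's code:

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank update
# W + (alpha/r) * B @ A. Illustrative only; not the paper's personalized variant.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A and B train
```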


X-Shot: A Unified System to Handle Frequent, Few-shot and Zero-shot Learning Simultaneously in Classification

xhz0809/x-shot • 6 Mar 2024

In recent years, few-shot and zero-shot learning, which learn to predict labels with limited annotated instances, have garnered significant attention.


On the use of Silver Standard Data for Zero-shot Classification Tasks in Information Extraction

wjw136/clean_lave • 28 Feb 2024

Recent zero-shot classification methods convert the task into other NLP tasks (e.g., textual entailment) and use off-the-shelf models for those tasks to perform inference directly on the test data, without requiring large amounts of IE annotation data.
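The entailment conversion works roughly as sketched below, assuming an off-the-shelf MNLI checkpoint and an illustrative hypothesis template:

```python
# Sketch of the entailment conversion: each candidate label becomes a
# hypothesis, and an NLI model scores how well the input text entails it.
# Checkpoint and template are illustrative choices.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
ent_idx = next(i for i, l in model.config.id2label.items() if l.lower() == "entailment")

premise = "Steve Jobs co-founded Apple in 1976."
labels = ["founder of", "employee of", "born in"]
hypotheses = [f"The text expresses the relation '{l}'." for l in labels]

with torch.no_grad():
    inputs = tokenizer([premise] * len(hypotheses), hypotheses,
                       return_tensors="pt", padding=True)
    scores = model(**inputs).logits.softmax(dim=-1)[:, ent_idx]
print(dict(zip(labels, scores.tolist())))  # entailment probability per label
```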


CARZero: Cross-Attention Alignment for Radiology Zero-Shot Classification

laihaoran/carzero • 27 Feb 2024

Advances in zero-shot learning in the medical domain have been driven by models pre-trained on large-scale image-text pairs, with a focus on image-text alignment.
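For reference, such image-text alignment pre-training typically optimizes a CLIP-style contrastive objective; a minimal sketch (not CARZero's cross-attention variant) follows:

```python
# Minimal sketch of the image-text alignment objective (CLIP-style symmetric
# InfoNCE) that such pre-training relies on.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature   # pairwise similarities
    targets = torch.arange(len(logits))          # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```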


Can GNN be Good Adapter for LLMs?

zjunet/graphadapter • 20 Feb 2024

In terms of efficiency, the GNN adapter introduces only a few trainable parameters and can be trained with low computation costs.
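As a purely hypothetical sketch of the adapter idea, the snippet below applies a single trainable graph-convolution layer over frozen LLM node embeddings; names and shapes are illustrative, not the paper's GraphAdapter implementation:

```python
# Hypothetical GNN adapter sketch: one graph-convolution layer whose small
# projection matrix is the only trainable component; the LLM embeddings it
# consumes stay frozen. Not the paper's code.
import torch
import torch.nn as nn

class GNNAdapter(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)    # the only trainable parameters

    def forward(self, node_h, adj):
        # mean aggregation over neighbors, then a residual update
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        agg = (adj @ node_h) / deg
        return node_h + torch.relu(self.proj(agg))

h = torch.randn(5, 768)                          # e.g., frozen LLM embeddings of 5 graph nodes
adj = (torch.rand(5, 5) > 0.5).float()           # toy adjacency matrix
out = GNNAdapter(768)(h, adj)
```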
