(Image credit: Prototypical Networks for Few-shot Learning in PyTorch)
You can read blog posts such as this one to get a high-level understanding.
The zero-shot paradigm exploits vector-based word representations, extracted from text corpora with unsupervised methods, to learn general mapping functions from other feature spaces onto word space; the words associated with the nearest neighbours of the mapped vectors are then used as their linguistic labels.
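As a hedged sketch of this idea, the snippet below learns a toy linear map from an assumed 2-d image-feature space into an assumed 4-d word-vector space via least squares on two seen classes, then labels an unseen query by the nearest word vector. All vectors and class names here are made-up illustrations, not data from any real model.

```python
import numpy as np

# Toy word vectors (assumed 4-dim) for three class labels.
word_vecs = {
    "cat":   np.array([1.0, 0.0, 0.0, 0.0]),
    "dog":   np.array([0.0, 1.0, 0.0, 0.0]),
    "horse": np.array([0.0, 0.0, 1.0, 0.0]),  # unseen at training time
}

def nearest_label(mapped_vec, word_vecs):
    """Return the label whose word vector is the nearest neighbour
    (by cosine similarity) of the mapped image feature."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(word_vecs, key=lambda w: cos(mapped_vec, word_vecs[w]))

# Learn a linear map W from image-feature space onto word space via
# least squares, using only the "seen" classes (cat, dog).
X_seen = np.array([[0.9, 0.1],   # toy image feature for a cat
                   [0.1, 0.9]])  # toy image feature for a dog
Y_seen = np.stack([word_vecs["cat"], word_vecs["dog"]])
W, *_ = np.linalg.lstsq(X_seen, Y_seen, rcond=None)

query = np.array([0.95, 0.05])   # a new image feature, cat-like
print(nearest_label(query @ W, word_vecs))  # -> cat
```

With real word embeddings (e.g. word2vec or GloVe), unseen classes such as "horse" become reachable simply because their label has a vector in word space.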
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
However, applying GPT-3 to Chinese NLP tasks is still challenging: the training corpus of GPT-3 is primarily English, and its parameters are not publicly available.
Once trained, an RN (Relation Network) can classify images of new classes by computing relation scores between query images and the few examples of each new class, without further updating the network.
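A minimal sketch of the relation-score idea, under loose assumptions: the query and class-example embeddings are concatenated and passed through a relation module that outputs a similarity in (0, 1). In the paper the relation module is a small CNN trained end-to-end; here it is replaced by a fixed sigmoid of the inner product of the two halves, purely for illustration.

```python
import numpy as np

def relation_score(query_emb, class_emb, g):
    """Relation Network idea: score = relation module g applied to the
    concatenated (query, class-example) embedding pair."""
    return g(np.concatenate([query_emb, class_emb]))

def g(z):
    # Toy stand-in for the learned relation module: sigmoid of the inner
    # product of the two concatenated halves (the real g is a trained CNN).
    q, c = z[:2], z[2:]
    return 1.0 / (1.0 + np.exp(-(q @ c)))

query = np.array([0.2, 0.8])
class_examples = {"a": np.array([0.1, 0.9]),   # one example per new class
                  "b": np.array([0.9, 0.1])}
scores = {c: relation_score(query, e, g) for c, e in class_examples.items()}
print(max(scores, key=scores.get))  # -> a (no network update needed)
```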
Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (each representing a visual category).
Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided.
In other cases, the semantic embedding space is established by an independent natural language processing task, and the transformation of images into that space is learned in a second stage.
State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information.
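One common form of such a joint embedding is a bilinear compatibility score F(x, y) = xᵀWy between an image feature x and class side information y (e.g. an attribute vector); at test time the predicted class is the one whose side information scores highest. The sketch below uses a toy untrained W and made-up attribute vectors, just to show the mechanics.

```python
import numpy as np

def compatibility(x, y, W):
    """Bilinear joint-embedding score F(x, y) = x^T W y."""
    return x @ W @ y

W = np.eye(2)                                 # toy (untrained) bilinear map
x = np.array([0.9, 0.1])                      # image feature
side_info = {"zebra": np.array([1.0, 0.0]),   # toy per-class attribute vectors
             "whale": np.array([0.0, 1.0])}

# Predict the class whose side information is most compatible with x.
pred = max(side_info, key=lambda c: compatibility(x, side_info[c], W))
print(pred)  # -> zebra
```

In practice W is learned from seen classes, and the same scoring rule transfers to unseen classes because only their side information is needed.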
Graph convolutional neural networks have recently shown great potential for the task of zero-shot learning.
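To make the graph-convolution step concrete, here is a minimal one-layer propagation sketch: node features are the semantic embeddings of the categories, and each layer mixes every node with its KG neighbours through a normalized adjacency matrix. The graph, embeddings, normalization choice (row-normalized A + I), and weight matrix below are all toy assumptions.

```python
import numpy as np

# Toy knowledge graph over 3 visual categories (symmetric adjacency).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                         # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)     # row-normalize

X = np.array([[1.0, 0.0],                     # semantic embedding per node
              [0.0, 1.0],
              [0.5, 0.5]])
W = np.eye(2)                                 # toy (untrained) layer weights

H = np.maximum(A_hat @ X @ W, 0.0)            # propagate + ReLU
print(H.shape)                                # -> (3, 2)
```

Stacking such layers lets information flow from seen to unseen categories along KG edges, which is what makes the zero-shot transfer possible.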