Few-Shot Text Classification

42 papers with code • 8 benchmarks • 4 datasets

Few-shot text classification predicts the semantic label of a given text from only a handful of supporting instances.

Most implemented papers

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference

timoschick/pet 21 Jan 2020

Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019).
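
A minimal sketch of the cloze-style idea, assuming the Hugging Face transformers library and RoBERTa; the pattern ("All in all, it was ___.") and the label words are illustrative choices, not the paper's released configuration.

```python
# Sketch: cloze-style few-shot classification with a masked LM.
# Pattern and label words below are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Verbalizer: one label word per class (leading space matches RoBERTa's BPE).
word_ids = {"positive": tok.encode(" great", add_special_tokens=False)[0],
            "negative": tok.encode(" terrible", add_special_tokens=False)[0]}

def classify(review: str) -> str:
    # Rephrase the input as a cloze question that describes the task.
    enc = tok(f"{review} All in all, it was {tok.mask_token}.", return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_pos[0]]        # scores at the mask position
    return max(word_ids, key=lambda label: logits[word_ids[label]].item())

print(classify("The plot was gripping and the acting superb."))
```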

Induction Networks for Few-Shot Text Classification

zhongyuchen/few-shot-text-classification IJCNLP 2019

Therefore, we should be able to learn a general representation of each class in the support set and then compare it to new queries.
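
The paper's induction module aggregates support examples with dynamic routing; the sketch below illustrates only the general support/query idea with a simpler prototype-style average, using toy vectors in place of encoded sentences.

```python
# Sketch: build one vector per class from its support examples, then assign a
# query to the class whose vector is most similar (cosine). Toy embeddings.
import numpy as np

def class_vectors(support_emb: np.ndarray, support_labels: list) -> dict:
    """Average the support embeddings of each class into a single class vector."""
    return {c: support_emb[[i for i, y in enumerate(support_labels) if y == c]].mean(axis=0)
            for c in set(support_labels)}

def predict(query_emb: np.ndarray, class_vecs: dict):
    cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(class_vecs, key=lambda c: cos(query_emb, class_vecs[c]))

# Toy 2-way 2-shot episode with random stand-ins for sentence embeddings.
rng = np.random.default_rng(0)
support, labels = rng.normal(size=(4, 8)), ["A", "A", "B", "B"]
print(predict(rng.normal(size=8), class_vectors(support, labels)))
```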

Few-shot Text Classification with Distributional Signatures

YujiaBao/Distributional-Signatures ICLR 2020

In this paper, we explore meta-learning for few-shot text classification.

Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification

timoschick/pet COLING 2020

A recent approach for few-shot text classification is to convert textual inputs to cloze questions that contain some form of task description, process them with a pretrained language model and map the predicted words to labels.
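
A rough sketch of the label-word search idea, not PETAL's actual procedure: rank candidate label words by how strongly a masked LM predicts them on a few labeled examples and keep the best word per class. The candidate pool, pattern, and examples are assumptions.

```python
# Sketch: score candidate label words with a masked LM on a few labeled
# examples and keep the best-scoring word per class.
from collections import defaultdict
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

candidates = [" great", " good", " bad", " terrible"]          # assumed candidate pool
cand_ids = [tok.encode(w, add_special_tokens=False)[0] for w in candidates]
examples = [("A wonderful, heartfelt film.", "positive"),      # tiny labeled set
            ("A dull and pointless sequel.", "negative")]

totals = defaultdict(float)                                    # (label, word) -> summed logit
for text, label in examples:
    enc = tok(f"{text} It was {tok.mask_token}.", return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_pos[0]]
    for word, idx in zip(candidates, cand_ids):
        totals[(label, word)] += logits[idx].item()

print({label: max(candidates, key=lambda w: totals[(label, w)]).strip()
       for label in {y for _, y in examples}})
```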

Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification

thunlp/knowledgeableprompttuning ACL 2022

Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.
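
A minimal sketch of the expanded-verbalizer idea: each class maps to several related label words (KPT derives these from external knowledge bases; the hand-picked word sets, pattern, and model below are assumptions), and a class's score is the average masked-LM probability of its words.

```python
# Sketch: a verbalizer with several label words per class; a class's score is
# the average masked-LM probability of its words at the mask position.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hand-picked word sets for illustration; KPT expands these from knowledge bases.
verbalizer = {"sports":   [" sports", " football", " basketball"],
              "politics": [" politics", " government", " election"]}
word_ids = {c: [tok.encode(w, add_special_tokens=False)[0] for w in ws]
            for c, ws in verbalizer.items()}

def classify(headline: str) -> str:
    enc = tok(f"This article is about {tok.mask_token}. {headline}", return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        probs = mlm(**enc).logits[0, mask_pos[0]].softmax(dim=-1)
    return max(word_ids, key=lambda c: sum(probs[i].item() for i in word_ids[c]) / len(word_ids[c]))

print(classify("The committee passed the new budget bill on Tuesday."))
```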

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

r-three/t-few 11 May 2022

In-context learning (ICL) incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
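
A back-of-the-envelope illustration of that cost argument, with made-up numbers: tokens processed per prediction grow with the number of in-context demonstrations, while a fine-tuned model processes only the query.

```python
# Illustration only: tokens processed with in-context learning vs. a
# fine-tuned model. All numbers are made-up assumptions.
tokens_per_example = 60     # average length of one example
shots = 32                  # demonstrations included in every ICL prompt
queries = 10_000            # predictions to make

icl = queries * (shots + 1) * tokens_per_example    # demonstrations + query, every time
finetuned = queries * tokens_per_example            # query only

print(f"ICL:        {icl:,} tokens")        # 19,800,000
print(f"Fine-tuned: {finetuned:,} tokens")  #    600,000
```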

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning

zjunlp/promptkg 29 May 2022

Specifically, vanilla prompt learning may resort to rote memorization of atypical instances during fully-supervised training, or overfit to shallow patterns in the low-shot setting.
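
A minimal sketch of the retrieval-augmented prompting idea, not the paper's RetroPrompt implementation: retrieve the most similar labeled training instances for each query and prepend them as demonstrations, so specifics are looked up rather than memorized. The embedding function below is a stand-in for a real sentence encoder.

```python
# Sketch: retrieve the nearest labeled training instances for a query and
# prepend them to the prompt as demonstrations. `embed` is a placeholder.
import numpy as np

train = [("Great pacing and a clever twist.", "positive"),
         ("I wanted my two hours back.", "negative"),
         ("A warm, funny and moving story.", "positive")]

def embed(text: str) -> np.ndarray:
    # Placeholder: use a real sentence encoder (e.g. a PLM's pooled output) in practice.
    return np.random.default_rng(abs(hash(text)) % (2**32)).normal(size=16)

def build_prompt(query: str, k: int = 2) -> str:
    q = embed(query)
    scored = sorted(((float(q @ embed(t)) /
                      (np.linalg.norm(q) * np.linalg.norm(embed(t)) + 1e-9), t, y)
                     for t, y in train), reverse=True)
    demos = [f"Review: {t}\nSentiment: {y}" for _, t, y in scored[:k]]
    return "\n\n".join(demos + [f"Review: {query}\nSentiment:"])

print(build_prompt("A clever, fast-moving thriller."))
```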

Few-Shot Text Classification with Pre-Trained Word Embeddings and a Human in the Loop

katbailey/few-shot-text-classification 5 Apr 2018

Our work aims to make it possible to classify an entire corpus of unlabeled documents using a human-in-the-loop approach, where the content owner manually classifies just one or two documents per category and the rest can be automatically classified.
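
A minimal sketch of that workflow, not the repository's code: documents are embedded by averaging pre-trained word vectors, the human labels one seed document per category, and the rest of the corpus is assigned to the nearest seed by cosine similarity. The toy vectors stand in for real embeddings such as GloVe.

```python
# Sketch: average pre-trained word vectors per document; the human labels one
# seed document per category; remaining documents go to the nearest seed.
import numpy as np

# Toy stand-ins for pre-trained embeddings such as GloVe.
vocab = "refund invoice payment bug crash error".split()
word_vectors = {w: np.random.default_rng(i).normal(size=50) for i, w in enumerate(vocab)}

def doc_vector(text: str) -> np.ndarray:
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

# One manually labeled document per category (the human-in-the-loop step).
seeds = {"billing": doc_vector("please refund the duplicate invoice payment"),
         "technical": doc_vector("the app hits a bug and reports an error")}

def classify(text: str) -> str:
    v = doc_vector(text)
    cos = lambda a, b: float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(seeds, key=lambda c: cos(v, seeds[c]))

print(classify("it keeps showing an error message"))
```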

A Neural Few-Shot Text Classification Reality Check

tdopierre/FewShotText EACL 2021

Additionally, some models used in Computer Vision are yet to be tested in NLP applications.