Few-Shot Text Classification

42 papers with code • 8 benchmarks • 4 datasets

Few-shot Text Classification predicts the semantic label of a given text from only a handful of supporting instances.

Most implemented papers

Few-Shot Text Classification with Triplet Networks, Data Augmentation, and Curriculum Learning

jasonwei20/triplet-loss NAACL 2021

Few-shot text classification is a fundamental NLP task in which a model aims to classify text into a large number of categories, given only a few training examples per category.
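
The triplet objective named in the title is easy to sketch: embed an anchor text, a same-class positive, and a different-class negative, then train the encoder so the anchor sits closer to the positive than to the negative. Below is a minimal PyTorch sketch; the toy mean-pooled encoder and all sizes are placeholders, not the paper's actual BERT-based model.

```python
import torch
import torch.nn as nn

class ToyTextEncoder(nn.Module):
    """Mean-pooled embedding encoder; a stand-in for a pretrained LM."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids):           # token_ids: (batch, seq_len)
        return self.emb(token_ids)          # -> (batch, dim)

encoder = ToyTextEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Token-id batches: anchor and positive share a class, negative differs.
anchor   = torch.randint(0, 1000, (8, 16))
positive = torch.randint(0, 1000, (8, 16))
negative = torch.randint(0, 1000, (8, 16))

loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()  # at test time, classify by distance to class centroids
```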

Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference

timoschick/pet EACL 2021

Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019).
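
The cloze reformulation behind PET (pattern-exploiting training) can be sketched as a pattern that wraps the input around a mask token, plus a verbalizer that maps the filled-in word back to a label. The sketch below assumes a standard masked LM via HuggingFace transformers; the pattern and verbalizer words are illustrative, not the paper's exact templates.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Verbalizer: mask-fill word -> class label (illustrative choices).
verbalizer = {"great": "positive", "terrible": "negative"}

text = "The plot was gripping from start to finish."
pattern = f"{text} It was [MASK]."   # the pattern is the "task description"

# Restrict predictions to verbalizer words and take the highest-scoring one.
preds = fill_mask(pattern, targets=list(verbalizer))
label = verbalizer[preds[0]["token_str"].strip()]
print(label)  # "positive" here, though the output is model-dependent
```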

Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System

congyingxia/IncrementalFSTC NAACL 2021

Two major challenges exist in this new task: (i) For the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) For the performance, the system should perform well on new classes without much loss on preceding classes.
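
A hedged sketch of what constraint (i) permits: keep one mean-embedding prototype per class and, each round, add prototypes for the new classes from their few-shot examples alone, never touching earlier rounds' data. This nearest-prototype baseline is a simplification for illustration, not the paper's proposed system; the hash-based stand-in encoder is likewise hypothetical.

```python
import torch

prototypes = {}  # class name -> mean-embedding prototype

def add_round(encoder, new_class_examples):
    """Absorb one round's new classes from their few-shot examples only."""
    for cls, examples in new_class_examples.items():
        prototypes[cls] = torch.stack([encoder(x) for x in examples]).mean(0)
        # prototypes from preceding rounds are never revisited or retrained

def classify(encoder, text):
    emb = encoder(text)
    return min(prototypes, key=lambda c: torch.dist(emb, prototypes[c]))

# Toy usage with a deterministic stand-in encoder (hypothetical).
def enc(text):
    g = torch.Generator().manual_seed(hash(text) % (2**31))
    return torch.randn(16, generator=g)

add_round(enc, {"billing": ["refund my card", "the charge failed"]})
add_round(enc, {"shipping": ["where is my parcel", "late delivery"]})
print(classify(enc, "where is my parcel"))  # -> "shipping" (exact match here)
```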

Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text Classification

hccngu/MLADA Findings (ACL) 2021

Meta-learning has emerged as a trending technique to tackle few-shot text classification and achieved state-of-the-art performance.

Distinct Label Representations for Few-Shot Text Classification

21335732529sky/difference_extractor ACL 2021

Few-shot text classification aims to classify inputs whose labels have only a few examples each.

Noisy Channel Language Model Prompting for Few-Shot Text Classification

shmsw25/Channel-LM-Prompting ACL 2022

We introduce a noisy channel approach for language model prompting in few-shot text classification.
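
The channel idea inverts the usual direction of scoring: instead of asking the LM for P(label | text) directly, score P(text | label) by measuring how likely the input is when conditioned on each verbalized label, and take the argmax. A minimal sketch with GPT-2 follows; the label prompts are illustrative, not the paper's exact templates.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def channel_score(label_prompt, text):
    """Sum of log P(text tokens | label_prompt and preceding text)."""
    prompt_ids = tok(label_prompt, return_tensors="pt").input_ids
    text_ids = tok(" " + text, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, text_ids], dim=1)
    with torch.no_grad():
        logits = lm(input_ids).logits
    logprobs = logits[:, :-1].log_softmax(-1)     # position i predicts i+1
    targets = input_ids[:, 1:]
    token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, prompt_ids.size(1) - 1:].sum().item()  # text span only

text = "The acting was wooden and the plot made no sense."
prompts = {"positive": "This review is positive:",
           "negative": "This review is negative:"}
print(max(prompts, key=lambda y: channel_score(prompts[y], text)))
```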

ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection

inimah/protoinfomax Findings (EMNLP) 2021

The ability to detect Out-of-Domain (OOD) inputs has been a critical requirement in many real-world NLP applications.
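
The prototypical-network backbone that ProtoInfoMax builds on is compact enough to sketch: average each class's support embeddings into a prototype, classify queries by nearest prototype, and flag a query as out-of-domain when even its nearest prototype is farther than a threshold. The distance-threshold OOD rule and all sizes below are illustrative simplifications, not the paper's mutual-information objective.

```python
import torch

def prototypes(support_emb, labels):
    """support_emb: (n, d); labels: (n,) ints in [0, C). Returns (C, d)."""
    C = int(labels.max()) + 1
    return torch.stack([support_emb[labels == c].mean(0) for c in range(C)])

def predict(query_emb, protos, ood_threshold=10.0):
    dists = torch.cdist(query_emb, protos)   # (num_queries, C)
    min_dist, pred = dists.min(dim=1)
    pred[min_dist > ood_threshold] = -1      # -1 marks out-of-domain
    return pred

# Toy usage with random embeddings standing in for encoder outputs.
sup, lab = torch.randn(12, 64), torch.arange(12) % 3
print(predict(torch.randn(4, 64), prototypes(sup, lab)))
```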

RAFT: A Real-World Few-Shot Text Classification Benchmark

oughtinc/raft-baselines 28 Sep 2021

Will models soon solve classification tasks that have so far been reserved for human research assistants?

Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER

ink-usc/fewner ACL 2022

We also find that good demonstrations can save many labeled examples, and that consistency in demonstrations contributes to better performance.
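
Demonstration-based learning itself is mostly input construction: prepend a few labeled examples to each input so the model can pattern-match against them. A minimal sketch follows; the template, separator, and entity annotations are illustrative, not the repo's exact format.

```python
def with_demonstrations(text, demos):
    """demos: (sentence, annotation) pairs shown before the real input."""
    parts = [f"{s} Entities: {a}" for s, a in demos]
    return " [SEP] ".join(parts + [f"{text} Entities:"])

demos = [("Obama visited Paris.", "Obama (PER), Paris (LOC)")]
print(with_demonstrations("Apple opened a store in Berlin.", demos))
# -> "Obama visited Paris. Entities: Obama (PER), Paris (LOC) [SEP]
#     Apple opened a store in Berlin. Entities:"
```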

Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation

jixuan-wang/grad2task NeurIPS 2021

Large pretrained language models (LMs) like BERT have improved performance in many disparate natural language processing (NLP) tasks.