Few-Shot Text Classification

42 papers with code • 8 benchmarks • 4 datasets

Few-shot Text Classification predicts the semantic label of a given text from only a handful of supporting instances.
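As a general illustration of the task (not tied to any particular paper below), few-shot classification is often reduced to nearest-centroid classification over text embeddings: average the embeddings of each class's few support instances into a prototype, then assign a query to the most similar prototype. A minimal sketch using toy bag-of-words embeddings — the embedding scheme and all names here are illustrative assumptions, not a standard implementation:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding: lowercased token counts (illustrative only;
    real systems would use a pretrained sentence encoder)."""
    return Counter(text.lower().split())

def centroid(texts):
    """Average the bag-of-words vectors of a class's support instances."""
    total = Counter()
    for t in texts:
        total.update(embed(t))
    return {w: c / len(texts) for w, c in total.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(query, support):
    """support: dict mapping each label to a few example texts."""
    protos = {label: centroid(texts) for label, texts in support.items()}
    q = embed(query)
    return max(protos, key=lambda label: cosine(q, protos[label]))

support = {
    "sports": ["the team won the match", "a great goal in the game"],
    "finance": ["the stock market fell today", "shares rose after earnings"],
}
print(classify("the game ended with a late goal", support))  # → sports
```

The same prototype idea underlies several of the metric-learning approaches listed on this page; only the encoder and the distance function change.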

Latest papers with no code

Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks

no code yet • 22 May 2023

However, most mixup methods do not account for the varying degree of learning difficulty across different stages of training, and they generate new samples with one-hot labels, resulting in model overconfidence.
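For context, standard mixup interpolates both inputs and labels with the same coefficient, so it naturally produces soft labels — exactly what the one-hot variants criticized above discard. A minimal sketch on plain feature vectors (names and the toy data are illustrative assumptions):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Mix two training examples: features and one-hot labels are
    interpolated with the same coefficient lam, yielding a soft label
    rather than a hard one-hot target."""
    # A Beta(alpha, alpha) draw is the conventional choice for lam.
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Two toy examples with one-hot labels for a 2-class problem.
x_mixed, y_mixed = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
print(y_mixed)  # a soft label such as [0.73, 0.27]; components sum to 1
```

Training on such soft targets penalizes overconfident predictions, which is the failure mode the excerpt describes for one-hot mixup.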

Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models

no code yet • 2 May 2023

Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.

Boosting Few-Shot Text Classification via Distribution Estimation

no code yet • 26 Mar 2023

Distribution estimation has been demonstrated as one of the most effective approaches to few-shot image classification, as low-level patterns and underlying representations can be easily transferred across different tasks in the computer vision domain.

Mask-guided BERT for Few Shot Text Classification

no code yet • 21 Feb 2023

The main challenge of FSL is the difficulty of training robust models on small numbers of samples, which frequently leads to overfitting.

Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration

no code yet • 5 Dec 2022

However, the performance of in-context learning is susceptible to the choice of prompt format, the selection of training examples, and their ordering.
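One natural shape for nearest-neighbor calibration — a sketch of the general idea only, not necessarily this paper's exact formulation — is to interpolate the language model's in-context prediction with a label distribution read off the k nearest training examples in embedding space. All names, the blending weight, and the toy data below are assumptions:

```python
import math

def knn_label_distribution(query_emb, anchors, k, num_classes):
    """anchors: list of (embedding, label) pairs. Returns the label
    histogram of the k nearest anchors under Euclidean distance."""
    nearest = sorted(anchors, key=lambda it: math.dist(query_emb, it[0]))[:k]
    hist = [0.0] * num_classes
    for _, label in nearest:
        hist[label] += 1.0 / k
    return hist

def calibrate(lm_probs, query_emb, anchors, weight=0.5, k=3):
    """Blend the LM's in-context prediction with the kNN distribution."""
    knn = knn_label_distribution(query_emb, anchors, k, len(lm_probs))
    return [weight * p + (1 - weight) * q for p, q in zip(lm_probs, knn)]

anchors = [([0.0, 0.0], 0), ([0.1, 0.0], 0), ([1.0, 1.0], 1), ([0.9, 1.1], 1)]
# The LM leans toward class 1, but the query sits among class-0 anchors,
# so the calibrated distribution shifts back toward class 0.
print(calibrate([0.2, 0.8], [0.05, 0.02], anchors, k=2))
```

The appeal of such a scheme is that the kNN term depends only on the training examples themselves, not on the prompt format or example ordering that in-context learning is sensitive to.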

Understanding BLOOM: An empirical study on diverse NLP tasks

no code yet • 27 Nov 2022

We view the landscape of large language models (LLMs) through the lens of the recently released BLOOM model to understand the performance of BLOOM and other decoder-only LLMs compared to BERT-style encoder-only models.

Disentangling Task Relations for Few-shot Text Classification via Self-Supervised Hierarchical Task Clustering

no code yet • 16 Nov 2022

However, most prior works assume that all the tasks are sampled from a single data source, which cannot adapt to real-world scenarios where tasks are heterogeneous and lie in different distributions.

STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification

no code yet • 29 Oct 2022

The effectiveness of prompt learning has been demonstrated in different pre-trained language models.

Discriminative Language Model as Semantic Consistency Scorer for Prompt-based Few-Shot Text Classification

no code yet • 23 Oct 2022

This paper proposes a novel prompt-based finetuning method (called DLM-SCS) for few-shot text classification by utilizing the discriminative language model ELECTRA that is pretrained to distinguish whether a token is original or generated.

Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks

no code yet • 22 Oct 2022

Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP).
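Prototypical networks, which the title refers to, classify a query by its distance to per-class prototypes (mean support embeddings); a variance-aware variant can additionally scale each distance by the class's spread, trusting tightly clustered classes more. A toy sketch — the specific weighting scheme and all names are illustrative assumptions, not the paper's method:

```python
import math

def prototype_and_variance(embs):
    """Mean embedding and scalar variance of a class's support set."""
    n, dim = len(embs), len(embs[0])
    mean = [sum(e[d] for e in embs) / n for d in range(dim)]
    var = sum(math.dist(e, mean) ** 2 for e in embs) / n
    return mean, var

def predict(query, support, eps=1e-6):
    """support: label -> list of support embeddings.
    Score = squared distance to the prototype, divided by the class
    variance, so compact classes dominate ambiguous queries."""
    best, best_score = None, float("inf")
    for label, embs in support.items():
        proto, var = prototype_and_variance(embs)
        score = math.dist(query, proto) ** 2 / (var + eps)
        if score < best_score:
            best, best_score = label, score
    return best

# Toy 2-D "report embeddings" for two pathology classes.
support = {
    "pneumonia": [[1.0, 0.1], [0.9, 0.0]],
    "fracture": [[0.0, 1.0], [0.1, 0.9]],
}
print(predict([0.8, 0.2], support))  # → pneumonia
```

In the episodic meta-learning setting this routine would be run per episode, with the encoder producing the embeddings trained end to end through the distance computation.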