Few-Shot Text Classification
42 papers with code • 8 benchmarks • 4 datasets
Few-shot Text Classification predicts the semantic label of a given text from only a handful of supporting instances.
Latest papers with no code
Self-Evolution Learning for Mixup: Enhance Data Augmentation on Few-Shot Text Classification Tasks
However, most mixup methods do not consider the varying degree of learning difficulty across different stages of training, and they generate new samples with one-hot labels, resulting in model overconfidence.
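For context, the sketch below shows vanilla mixup of the kind the excerpt critiques: inputs and labels are interpolated with the same coefficient, so the mixed label is naturally soft rather than one-hot. All names here are illustrative, not the paper's implementation.

```python
import numpy as np

def mixup(x1, y1_onehot, x2, y2_onehot, alpha=0.2, rng=np.random.default_rng()):
    """Vanilla mixup: a convex combination of two examples and their labels.

    x1, x2: feature vectors (e.g. sentence embeddings)
    y1_onehot, y2_onehot: one-hot label vectors
    """
    lam = rng.beta(alpha, alpha)                        # mixing coefficient ~ Beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2                 # interpolated input
    y_mix = lam * y1_onehot + (1.0 - lam) * y2_onehot   # soft label, not one-hot
    return x_mix, y_mix

# Example: two 4-dim "sentence embeddings" from different classes
x_a, y_a = np.array([0.1, 0.3, 0.2, 0.5]), np.array([1.0, 0.0])
x_b, y_b = np.array([0.4, 0.1, 0.6, 0.2]), np.array([0.0, 1.0])
x_new, y_new = mixup(x_a, y_a, x_b, y_b)
print(x_new, y_new)  # y_new is a soft label such as [0.73, 0.27]
```

Training on the mixed input while keeping a hard one-hot target (instead of `y_mix`) is exactly the shortcut that produces the overconfidence the paper addresses.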
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models
Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.
Boosting Few-Shot Text Classification via Distribution Estimation
Distribution estimation has been demonstrated as one of the most effective approaches to few-shot image classification, as low-level patterns and underlying representations can be easily transferred across different tasks in the computer vision domain.
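As a minimal sketch of the distribution-estimation idea (not the paper's exact method): fit a Gaussian per class from the few support embeddings, regularize the covariance, then sample synthetic features to train a simple classifier. Function and variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_and_sample(support_feats, support_labels, n_samples=50, eps=1e-3,
                        rng=np.random.default_rng(0)):
    """Fit a Gaussian N(mu_c, Sigma_c) per class from the support features,
    then draw synthetic features to augment the tiny support set."""
    feats, labels = [], []
    for c in np.unique(support_labels):
        x_c = support_feats[support_labels == c]
        mu = x_c.mean(axis=0)
        # Shrink the covariance toward the identity: with a handful of
        # shots the empirical covariance alone is singular.
        sigma = np.cov(x_c, rowvar=False) + eps * np.eye(x_c.shape[1])
        sampled = rng.multivariate_normal(mu, sigma, size=n_samples)
        feats.append(np.vstack([x_c, sampled]))
        labels.append(np.full(len(x_c) + n_samples, c))
    return np.vstack(feats), np.concatenate(labels)

# 5-shot, 2-way toy example with 8-dim encoder features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(2, 1, (5, 8))])
y = np.array([0] * 5 + [1] * 5)
X_aug, y_aug = estimate_and_sample(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```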
Mask-guided BERT for Few Shot Text Classification
The main challenge of few-shot learning (FSL) is the difficulty of training robust models on small numbers of samples, which frequently leads to overfitting.
Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration
However, the performance of in-context learning is sensitive to the choice of prompt format, the selection of training examples, and the ordering of those examples.
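One simple way to realize a nearest-neighbor calibration (a sketch of the general idea, not the paper's exact formulation; all names are illustrative) is to blend the language model's in-context probabilities with a kNN label distribution computed over embeddings of the available training examples:

```python
import numpy as np

def knn_calibrate(lm_probs, query_emb, train_embs, train_labels,
                  n_classes, k=8, lam=0.5):
    """Blend LM in-context probabilities with a kNN label distribution."""
    # Cosine similarity between the query and every training example
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9)
    topk = np.argsort(-sims)[:k]                       # indices of k nearest neighbors
    knn_probs = np.bincount(train_labels[topk], minlength=n_classes) / k
    return lam * lm_probs + (1.0 - lam) * knn_probs    # calibrated distribution

rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 16))
train_labels = rng.integers(0, 2, size=100)
lm_probs = np.array([0.9, 0.1])                        # possibly miscalibrated LM output
query = rng.normal(size=16)
print(knn_calibrate(lm_probs, query, train_embs, train_labels, n_classes=2))
```

The kNN term anchors the prediction to the training data's local label statistics, which dampens the prompt- and ordering-induced variance the excerpt describes.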
Understanding BLOOM: An empirical study on diverse NLP tasks
We view the landscape of large language models (LLMs) through the lens of the recently released BLOOM model to understand the performance of BLOOM and other decoder-only LLMs compared to BERT-style encoder-only models.
Disentangling Task Relations for Few-shot Text Classification via Self-Supervised Hierarchical Task Clustering
However, most prior works assume that all the tasks are sampled from a single data source, which cannot adapt to real-world scenarios where tasks are heterogeneous and lie in different distributions.
STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification
The effectiveness of prompt learning has been demonstrated in different pre-trained language models.
Discriminative Language Model as Semantic Consistency Scorer for Prompt-based Few-Shot Text Classification
This paper proposes a novel prompt-based fine-tuning method (called DLM-SCS) for few-shot text classification that exploits the discriminative language model ELECTRA, which is pretrained to distinguish whether a token is original or generated.
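The core mechanism can be sketched as follows (a minimal illustration of using ELECTRA's discriminator as a consistency scorer, not the authors' exact scoring function): fill the prompt with each candidate label word and pick the label whose filled prompt ELECTRA judges most "original". The template and label words below are assumptions for the example.

```python
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

tok = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
model.eval()

def consistency_score(text, template, label_word):
    """Fill the prompt with a candidate label word and return the mean
    probability that ELECTRA judges the tokens original (not replaced).
    Higher = the filled prompt is more semantically consistent."""
    prompt = template.format(text=text, label=label_word)
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)  # positive logit = "replaced"
    p_original = torch.sigmoid(-logits)             # per-token originality
    return p_original.mean().item()

text = "The movie was a complete waste of time."
template = "{text} It was {label}."
scores = {w: consistency_score(text, template, w) for w in ["great", "terrible"]}
print(max(scores, key=scores.get))  # expected: 'terrible'
```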
Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP).
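For reference, a prototypical network classifies a query by its distance to per-class prototypes (mean support embeddings). The variance-aware scaling below is one illustrative way to incorporate per-class variance, an assumption for this sketch rather than the paper's exact formulation.

```python
import numpy as np

def prototypical_predict(support_feats, support_labels, query_feats, eps=1e-6):
    """Classify queries by variance-scaled distance to class prototypes."""
    classes = np.unique(support_labels)
    protos, variances = [], []
    for c in classes:
        x_c = support_feats[support_labels == c]
        protos.append(x_c.mean(axis=0))       # class prototype
        variances.append(x_c.var(axis=0) + eps)
    protos, variances = np.stack(protos), np.stack(variances)
    # Squared distance of each query to each prototype, scaled per dimension
    # by the class's feature variance (plain ProtoNets omit this scaling).
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2 / variances).sum(-1)
    return classes[np.argmin(d, axis=1)]

# 5-shot, 2-way toy episode with 8-dim encoder features
rng = np.random.default_rng(2)
support = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(3, 1, (5, 8))])
labels = np.array([0] * 5 + [1] * 5)
queries = rng.normal(3, 1, (4, 8))
print(prototypical_predict(support, labels, queries))  # mostly class 1
```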