Few-Shot NLI
6 papers with code • 3 benchmarks • 2 datasets
Most implemented papers
Zero-Shot Cross-Lingual Transfer with Meta Learning
We show that this challenging setup can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial to the first.
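As a rough illustration of this idea, the toy sketch below lets a small selector decide which source-language instances a task model trains on, and rewards the selector when the resulting update lowers the loss on a target-language batch. This is not the paper's implementation: the synthetic data, linear models, and REINFORCE-style update are all assumptions made for the sake of a compact example.

```python
# Toy sketch of meta-learned instance selection (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_classes = 16, 3
task_model = nn.Linear(dim, n_classes)           # stand-in for the NLI model
selector = nn.Linear(dim, 1)                     # scores each source instance
task_opt = torch.optim.SGD(task_model.parameters(), lr=0.1)
sel_opt = torch.optim.SGD(selector.parameters(), lr=0.01)

for step in range(200):
    # Synthetic source-language batch and target-language (meta) batch.
    xs, ys = torch.randn(32, dim), torch.randint(0, n_classes, (32,))
    xt, yt = torch.randn(16, dim), torch.randint(0, n_classes, (16,))

    # Selector samples a keep/drop decision per source instance.
    probs = torch.sigmoid(selector(xs).squeeze(-1))
    keep = torch.bernoulli(probs).detach()

    # Target-language loss before and after training on the kept instances.
    before = F.cross_entropy(task_model(xt), yt).item()
    loss = (keep * F.cross_entropy(task_model(xs), ys, reduction="none")).mean()
    task_opt.zero_grad(); loss.backward(); task_opt.step()
    reward = before - F.cross_entropy(task_model(xt), yt).item()

    # REINFORCE-style update: raise the probability of selections that helped.
    log_prob = (keep * torch.log(probs + 1e-8)
                + (1 - keep) * torch.log(1 - probs + 1e-8)).sum()
    sel_opt.zero_grad(); (-reward * log_prob).backward(); sel_opt.step()
```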
Language Models for Lexical Inference in Context
Lexical inference in context (LIiC) is the task of recognizing textual entailment between two very similar sentences, i.e., sentences that only differ in one expression.
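For readers unfamiliar with the setup, the snippet below frames LIiC as pairwise entailment scoring with an off-the-shelf NLI model. The model name and sentence pair are illustrative assumptions, not the paper's pattern-based approach.

```python
# Minimal LIiC-style entailment check with a pre-trained NLI model (assumed setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"                # assumption: any off-the-shelf NLI model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# Two sentences that differ in a single expression ("purchased" vs. "acquired").
premise = "The firm purchased a small startup last year."
hypothesis = "The firm acquired a small startup last year."

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)

pred = model.config.id2label[int(probs.argmax())]
print(pred, probs.tolist())                      # label predicted by the NLI model
```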
Continuous Entailment Patterns for Lexical Inference in Context
If we allow for tokens outside the PLM's vocabulary, patterns can be adapted more flexibly to a PLM's idiosyncrasies.
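The sketch below shows what such out-of-vocabulary "continuous" pattern tokens can look like in code: a few trainable embedding vectors inserted between premise and hypothesis while the PLM itself stays frozen. The model name, pattern length, and training details are assumptions, not the paper's configuration.

```python
# Hypothetical sketch of continuous pattern tokens outside the PLM's vocabulary.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-base"                      # assumption: any encoder PLM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
embed = model.get_input_embeddings()

n_pattern = 4                                    # number of continuous pattern tokens
pattern = nn.Parameter(torch.randn(n_pattern, embed.embedding_dim) * 0.02)

def build_inputs(premise: str, hypothesis: str):
    """Embed '<s> premise [continuous pattern] hypothesis </s>'."""
    p_ids = tok(premise, add_special_tokens=False, return_tensors="pt").input_ids
    h_ids = tok(hypothesis, add_special_tokens=False, return_tensors="pt").input_ids
    bos = torch.tensor([[tok.cls_token_id]])
    eos = torch.tensor([[tok.sep_token_id]])
    parts = [embed(bos), embed(p_ids), pattern.unsqueeze(0), embed(h_ids), embed(eos)]
    inputs_embeds = torch.cat(parts, dim=1)
    attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    return inputs_embeds, attention_mask

# Freeze the PLM; train only the continuous pattern and the classification head.
for p in model.parameters():
    p.requires_grad_(False)
for p in model.classifier.parameters():
    p.requires_grad_(True)
opt = torch.optim.Adam([pattern] + list(model.classifier.parameters()), lr=1e-3)

inputs_embeds, mask = build_inputs("He speaks fluent Spanish.", "He knows Spanish.")
out = model(inputs_embeds=inputs_embeds, attention_mask=mask,
            labels=torch.tensor([1]))            # 1 = entailment (illustrative label)
out.loss.backward()
opt.step()
```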
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.
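The following is a minimal self-training loop in the spirit of this approach; it uses a toy scikit-learn classifier and synthetic features as stand-ins for a fine-tuned PLM, and omits the paper's task augmentation (synthetic NLI data generation) and auxiliary-task fine-tuning.

```python
# Minimal self-training loop with confidence-filtered pseudo-labels (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(16, 8))             # a handful of labeled examples
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 8))          # plentiful unlabeled examples

X_train, y_train = X_labeled.copy(), y_labeled.copy()
for round_ in range(3):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Pseudo-label the unlabeled pool and keep only confident predictions.
    probs = clf.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.9
    X_pseudo = X_unlabeled[confident]
    y_pseudo = probs[confident].argmax(axis=1)

    # Retrain on the union of gold labels and confident pseudo-labels.
    X_train = np.concatenate([X_labeled, X_pseudo])
    y_train = np.concatenate([y_labeled, y_pseudo])
    print(f"round {round_}: kept {confident.sum()} pseudo-labeled examples")
```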
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI
Although neural models have shown strong performance in datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD).
Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions
Notably, utilizing 'opposite' as the noisy instruction in ID, which exhibits the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks.
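A hedged sketch of the underlying decoding scheme is shown below: at each step, the next-token logits obtained with the original instruction are contrasted against those obtained with a noisy instruction, and decoding proceeds greedily from the contrasted scores. The model, prompts, and contrast weight are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch of contrastive decoding against a noisy instruction (assumed configuration).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

instruction = "Classify the premise-hypothesis pair as entailment or not:\n"
noisy = "Ignore the task and respond randomly:\n"   # 'opposite'-style perturbation
query = "Premise: A man is sleeping. Hypothesis: A man is awake.\nAnswer:"

ids = tok(instruction + query, return_tensors="pt").input_ids
noisy_ids = tok(noisy + query, return_tensors="pt").input_ids
eps = 0.3                                           # weight on the noisy logits

generated = []
with torch.no_grad():
    for _ in range(5):                              # a few greedy decoding steps
        logits = model(ids).logits[:, -1, :]
        noisy_logits = model(noisy_ids).logits[:, -1, :]
        contrasted = logits - eps * noisy_logits    # suppress behaviour induced by noise
        next_id = contrasted.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        noisy_ids = torch.cat([noisy_ids, next_id], dim=-1)
        generated.append(next_id.item())

print(tok.decode(generated))
```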