Natural Language Inference (Zero-Shot)

3 papers with code • 1 benchmark • 1 dataset

Zero-shot natural language inference is the task of predicting whether a hypothesis is entailed by, contradicted by, or neutral with respect to a premise, using a model that has received no supervised training on the target NLI dataset.
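
A minimal sketch of scoring a premise/hypothesis pair with an off-the-shelf NLI-tuned checkpoint, under the assumption that any MNLI-style classifier can be used; the roberta-large-mnli model and the example sentences are illustrative choices, not taken from this page:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; any NLI-tuned classifier works similarly.
name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer encodes the pair as a single sequence for the classifier.
inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The model's own label map names the predicted relation
# (contradiction / neutral / entailment).
pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```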

Papers

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

kaistai/cot-collection 23 May 2023

Furthermore, we show that instruction tuning with CoT Collection allows LMs to possess stronger few-shot learning capabilities on 4 domain-specific tasks, resulting in an improvement of +2.24% (Flan-T5 3B) and +2.37% (Flan-T5 11B), even outperforming ChatGPT utilizing demonstrations until the max length by a +13.98% margin.
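
A hedged sketch of how a chain-of-thought fine-tuning example is typically serialized: the target concatenates a rationale with the final answer, so the model learns to generate its reasoning before answering. The field names, separator phrasing, and example content are illustrative assumptions, not the CoT Collection's actual schema:

```python
def to_cot_example(instruction: str, question: str,
                   rationale: str, answer: str) -> dict:
    # Source is what the model conditions on; target is what it must generate.
    source = f"{instruction}\n{question}"
    target = f"{rationale} Therefore, the answer is {answer}."
    return {"source": source, "target": target}

ex = to_cot_example(
    instruction="Answer the question, explaining your reasoning step by step.",
    question="If a train travels 60 km in 30 minutes, what is its speed in km/h?",
    rationale="30 minutes is half an hour, so 60 km in 0.5 h is 120 km per hour.",
    answer="120 km/h",
)
print(ex["target"])
```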


Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

seonghyeonye/flipped-learning 6 Oct 2022

Meta-training, which fine-tunes the language model (LM) on various downstream tasks by maximizing the likelihood of the target label given the task instruction and input instance, has improved the zero-shot task generalization performance.
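
A minimal sketch contrasting that standard meta-training objective with the paper's flipped variant, in which the model instead predicts the instruction given the instance and label. The T5 checkpoint, prompt strings, and formatting are illustrative assumptions, not the paper's exact setup:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

instruction = "Does the premise entail the hypothesis? Answer yes or no."
instance = "Premise: A man is sleeping. Hypothesis: A person is awake."
label = "no"

def nll(source: str, target: str) -> torch.Tensor:
    # Mean negative log-likelihood of the target given the source.
    enc = tok(source, return_tensors="pt")
    dec = tok(target, return_tensors="pt")
    return model(**enc, labels=dec.input_ids).loss

# Standard meta-training: maximize p(label | instruction, instance).
loss_standard = nll(f"{instruction} {instance}", label)
# Flipped Learning: maximize p(instruction | instance, label).
loss_flipped = nll(f"{instance} Answer: {label}", instruction)
print(loss_standard.item(), loss_flipped.item())
```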


PanGu-$\alpha$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation

mindspore-ai/models 26 Apr 2021

To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1TB high-quality Chinese data from a wide range of domains to pretrain the model.
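
A minimal sketch of the autoregressive pretraining objective the title refers to, i.e. next-token prediction over raw text; GPT-2 stands in here purely for illustration and is not the PanGu-$\alpha$ model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Large-scale language models are pretrained on raw text."
batch = tok(text, return_tensors="pt")

# With labels equal to input_ids, the model shifts targets internally and
# returns the mean cross-entropy of predicting each token from its left
# context -- the standard autoregressive LM loss.
out = model(**batch, labels=batch.input_ids)
print(out.loss.item())
```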
