In-Context Learning

470 papers with code • 0 benchmarks • 0 datasets

In-context learning is a paradigm in which a large pre-trained language model observes a few training examples and a test instance as its input and directly decodes the output, without any update to its parameters.

Libraries

Use these libraries to find In-Context Learning models and implementations
See all 8 libraries.

Most implemented papers

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers

kakaobrain/kogpt EMNLP 2021

GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on data at the scale of hundreds of billions of tokens.

MetaICL: Learning to Learn In Context

facebookresearch/metaicl NAACL 2022

We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.
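
A hedged sketch of what one meta-training step might look like under this framework, assuming a `tasks` collection of labeled datasets and an `lm_loss(context, target)` helper; both are hypothetical names for illustration, not MetaICL's actual API:

```python
import random

# Sketch of a MetaICL-style meta-training step, under assumed helpers:
# `tasks` is a list of labeled datasets (each a list of (x, y) pairs) and
# `lm_loss(context, target)` returns the LM's negative log-likelihood of
# `target` given `context`. Both are hypothetical stand-ins.

def meta_training_step(tasks, lm_loss, k=4):
    task = random.choice(tasks)               # sample a training task
    examples = random.sample(task, k + 1)     # k demonstrations + 1 query
    demos, (query_x, query_y) = examples[:k], examples[-1]
    context = "\n".join(f"{x}\t{y}" for x, y in demos) + f"\n{query_x}\t"
    return lm_loss(context, query_y)          # loss to backpropagate into the LM
```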

Learning To Retrieve Prompts for In-Context Learning

ohadrubin/epr NAACL 2022

In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters.
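
As a concrete illustration of this paradigm, here is a minimal sketch of few-shot prompting for sentiment classification; the `generate` call is a hypothetical stand-in for any text-completion interface, and no parameters are updated anywhere:

```python
# Minimal in-context learning sketch for sentiment classification.
# `generate` is a hypothetical stand-in for any LM completion call;
# the model only ever sees the prompt text below.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("A masterpiece of quiet storytelling.", "positive"),
]

def build_prompt(test_instance: str) -> str:
    """Concatenate a few labeled examples with the unlabeled test instance."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demonstrations]
    lines.append(f"Review: {test_instance}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_prompt("Flat characters and a predictable plot.")
# prediction = generate(prompt, max_new_tokens=1)  # decoded directly, no gradient step
print(prompt)
```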

Black-Box Tuning for Language-Model-as-a-Service

txsun1997/black-box-tuning 10 Jan 2022

In the scenario where extremely large pre-trained language models (PTMs) are released as a service and accessed only through inference APIs, which we call Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually unavailable.
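
The paper itself optimizes a continuous prompt with CMA-ES in a low-dimensional subspace; the sketch below substitutes plain random search to show the shape of gradient-free tuning, with `score` as a hypothetical stand-in for an LMaaS API that returns task performance from forward passes alone:

```python
import numpy as np

# Gradient-free prompt search sketch. A low-dimensional candidate z is mapped
# to the prompt-embedding space by a fixed random projection and evaluated
# through the black-box `score` function (hypothetical); no gradients are used.

def black_box_tune(score, dim=500, low_dim=10, iters=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    projection = rng.normal(size=(low_dim, dim))   # fixed random projection
    z_best = np.zeros(low_dim)
    best = score(z_best @ projection)
    for _ in range(iters):
        z = z_best + sigma * rng.normal(size=low_dim)  # perturb in low-dim space
        s = score(z @ projection)
        if s > best:
            best, z_best = s, z
    return z_best @ projection, best
```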

UL2: Unifying Language Learning Paradigms

google-research/google-research 10 May 2022

Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

r-three/t-few 11 May 2022

ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
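
A back-of-the-envelope illustration of that cost asymmetry, with all numbers chosen for illustration rather than taken from the paper:

```python
# Illustrative cost arithmetic (assumed numbers): with k demonstrations
# prepended, every prediction pays for the demonstration tokens again,
# whereas a fine-tuned model processes only the query.

k, tokens_per_example, query_tokens, n_predictions = 32, 100, 100, 10_000

icl_tokens = n_predictions * (k * tokens_per_example + query_tokens)
finetuned_tokens = n_predictions * query_tokens

print(f"ICL processes        {icl_tokens:>13,} tokens")        # 33,000,000
print(f"fine-tuned processes {finetuned_tokens:>13,} tokens")  #  1,000,000
```

With these assumed numbers, ICL processes roughly 33x more tokens than the fine-tuned model for the same set of predictions.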

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

dtsip/in-context-learning 1 Aug 2022

To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn "most" functions from this class?
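
A sketch of how such training prompts can be generated for the linear-function class; the dimensions and distributions here are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

# Generate one prompt for in-context learning a linear function: sample w,
# emit k labeled points (x_i, w·x_i), and hold out a query point whose value
# the model must predict from the in-context examples alone.

def sample_linear_prompt(rng, d=20, k=40):
    w = rng.normal(size=d)               # a function drawn from the class
    xs = rng.normal(size=(k + 1, d))     # k in-context points + 1 query
    ys = xs @ w
    prompt = [(x, y) for x, y in zip(xs[:k], ys[:k])]
    return prompt, xs[k], ys[k]          # demonstrations, query input, target

rng = np.random.default_rng(0)
prompt, x_query, y_target = sample_linear_prompt(rng)
```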

Don't Generate, Discriminate: A Proposal for Grounding Language Models to Real-World Environments

dki-lab/pangu 19 Dec 2022

Most existing work for grounded language understanding uses LMs to directly generate plans that can be executed in the environment to achieve the desired effects.
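
A minimal sketch of the discriminative alternative the title alludes to: rather than generating a plan free-form, enumerate executable candidate plans from the environment and let the LM score them. `candidate_plans` and `lm_score` are hypothetical stand-ins, not the paper's API:

```python
# Discrimination instead of generation: every candidate comes from the
# environment and is executable by construction; the LM only ranks them.
# `lm_score` is a hypothetical function returning a plausibility score.

def ground(utterance, candidate_plans, lm_score):
    scored = [(lm_score(f"{utterance}\n{plan}"), plan) for plan in candidate_plans]
    return max(scored, key=lambda pair: pair[0])[1]  # best executable plan
```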

Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

alrope123/z-icl 19 Dec 2022

Although large language models can be prompted for both zero- and few-shot learning, performance drops significantly when no demonstrations are available.
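
A hedged sketch of the pseudo-demonstration idea: pair retrieved unlabeled sentences with random labels, so the prompt has the familiar demonstration format without any gold annotations. `embed` and `corpus` are hypothetical stand-ins, and the paper's retrieval and label-assignment details differ:

```python
import random

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

# Build k pseudo-demonstrations: unlabeled sentences nearest to the test
# input, each paired with a label drawn at random from the label set.
def pseudo_demonstrations(test_input, corpus, labels, embed, k=8, seed=0):
    random.seed(seed)
    q = embed(test_input)
    ranked = sorted(corpus, key=lambda s: -cosine(embed(s), q))  # nearest first
    return [(sent, random.choice(labels)) for sent in ranked[:k]]
```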

Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP

stanfordnlp/dsp 28 Dec 2022

Retrieval-augmented in-context learning has emerged as a powerful approach for addressing knowledge-intensive tasks using frozen language models (LMs) and retrieval models (RMs).
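
A minimal sketch of the retrieve-then-predict pattern with frozen models; `retrieve` and `generate` are hypothetical stand-ins for the RM and LM calls, and the actual DSP framework composes such stages programmatically rather than through a single fixed prompt:

```python
# Retrieval-augmented in-context learning: the RM supplies passages, the
# frozen LM answers conditioned on them. Neither model is updated.

def answer(question: str, retrieve, generate, k: int = 3) -> str:
    passages = retrieve(question, k=k)      # frozen retrieval model
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)                 # frozen language model
```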