In-Context Learning
856 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find In-Context Learning models and implementations.

Most implemented papers
Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers
In addition, we find VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt in synthesis.
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second
We present TabPFN, a trained Transformer that performs supervised classification for small tabular datasets in less than a second, needs no hyperparameter tuning, and is competitive with state-of-the-art classification methods.
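TabPFN ships with a scikit-learn-style interface, so in practice it is used like any other classifier. A minimal sketch, assuming the public `tabpfn` package and its default `TabPFNClassifier` constructor (details may vary by version):

```python
# Minimal usage sketch; assumes the `tabpfn` package's scikit-learn-style
# API, which may differ across versions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()       # no hyperparameter tuning required
clf.fit(X_train, y_train)      # "fitting" stores the data as in-context examples
print(clf.score(X_test, y_test))
```

Note that fit() performs no gradient updates: the training set is consumed in-context by the pretrained Transformer at prediction time.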
From system models to class models: An in-context learning paradigm
Is it possible to understand the intricacies of a dynamical system not solely from its input/output pattern, but also by observing the behavior of other systems within the same class?
PanGu-$\alpha$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation
To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1 TB of high-quality Chinese data from a wide range of domains to pretrain the model.
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
In further experiments, we found that naturalistic data distributions elicited in-context learning only in transformers, not in recurrent models.
Large Language Models Are Human-Level Prompt Engineers
By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers.
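The core loop behind this kind of automatic prompt engineering is simple: have an LLM propose candidate instructions from a few input-output pairs, score each candidate on a held-out set, and keep the best. A hedged sketch of that loop, where `llm` is a hypothetical text-completion callable rather than any specific library API:

```python
# Sketch of LLM-driven instruction search in the spirit of the paper;
# `llm` is a hypothetical prompt -> completion callable, and the
# meta-prompt wording is an illustrative assumption.
def propose_instructions(llm, demos, n=8):
    examples = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    meta_prompt = ("I gave a friend an instruction. Based on the "
                   f"input-output pairs below, what was it?\n{examples}\nInstruction:")
    return [llm(meta_prompt) for _ in range(n)]

def score(llm, instruction, eval_set):
    hits = sum(llm(f"{instruction}\nInput: {x}\nOutput:").strip() == y
               for x, y in eval_set)
    return hits / len(eval_set)

def best_instruction(llm, demos, eval_set):
    return max(propose_instructions(llm, demos),
               key=lambda c: score(llm, c, eval_set))
```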
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution
Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic foundation model pretrained on the human reference genome with context lengths of up to 1 million tokens at the single-nucleotide level, an up to 500x increase over previous dense attention-based models.
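Single-nucleotide resolution means the tokenizer is character-level: each base is its own token, so nothing is lost to subword merging. An illustrative encoder (the vocabulary and its ordering here are assumptions, not HyenaDNA's actual tokenizer):

```python
# Character-level DNA encoding; one token per base. The vocabulary below
# is an illustrative assumption.
VOCAB = {ch: i for i, ch in enumerate("ACGTN")}

def encode(sequence: str) -> list[int]:
    """Map each nucleotide to its own token id."""
    return [VOCAB[base] for base in sequence.upper()]

print(encode("ACGTTGCA"))  # [0, 1, 2, 3, 3, 2, 1, 0]
```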
OpenICL: An Open-Source Framework for In-context Learning
However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks.
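Stripped of the framework machinery, the loop such libraries generalize is: retrieve demonstrations relevant to the query, format them into a prompt, and run inference. A minimal sketch (not OpenICL's actual API): word-overlap retrieval stands in for the retrieval methods the paper supports, and `llm` is a hypothetical completion callable.

```python
# Bare-bones retrieve-then-infer ICL pipeline; the similarity measure,
# prompt template, and `llm` callable are illustrative assumptions.
def retrieve(query: str, pool: list[tuple[str, str]], k: int = 4):
    """Pick the k demonstrations whose inputs share the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(pool,
                    key=lambda xy: len(q & set(xy[0].lower().split())),
                    reverse=True)
    return ranked[:k]

def icl_predict(llm, query: str, pool: list[tuple[str, str]]) -> str:
    demos = retrieve(query, pool)
    prompt = "".join(f"Input: {x}\nOutput: {y}\n" for x, y in demos)
    return llm(prompt + f"Input: {query}\nOutput:")
```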
SegGPT: Segmenting Everything In Context
We unify various segmentation tasks into a generalist in-context learning framework that accommodates different kinds of segmentation data by transforming them into a shared image format.
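One way to picture the shared-image-format idea: render the prompt mask as an image, pack the prompt image, prompt mask, and query image into a single canvas, and let the model inpaint the missing quadrant as the query's mask. The layout below is an illustrative assumption, not SegGPT's exact scheme:

```python
import numpy as np

# Pack an in-context segmentation example into one image; the quadrant
# layout is an illustrative assumption.
def pack_in_context(prompt_img, prompt_mask, query_img):
    """All inputs are (H, W, C) arrays; the mask is already rendered as an image."""
    h, w, c = prompt_img.shape
    canvas = np.zeros((2 * h, 2 * w, c), dtype=prompt_img.dtype)
    canvas[:h, :w] = prompt_img    # top-left: prompt image
    canvas[:h, w:] = prompt_mask   # top-right: its segmentation, as an image
    canvas[h:, :w] = query_img     # bottom-left: query image
    return canvas                  # bottom-right stays blank for the model to fill
```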
Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.
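The paper's information-flow view suggests a simple diagnostic: measure how much attention the final (prediction) position puts on the demonstrations' label words versus everything else. A sketch on a synthetic attention tensor (shapes and positions are placeholders):

```python
import numpy as np

# Attention mass from the last position onto label-word positions;
# the attention tensor here is synthetic.
def label_anchor_mass(attn: np.ndarray, label_positions: list[int]) -> float:
    """attn: (heads, seq, seq) row-normalized attention for one layer."""
    to_labels = attn[:, -1, label_positions].sum(axis=-1)  # per-head mass on labels
    return float(to_labels.mean())                         # average over heads

rng = np.random.default_rng(0)
attn = rng.random((8, 32, 32))
attn /= attn.sum(axis=-1, keepdims=True)  # make each row sum to 1
print(label_anchor_mass(attn, label_positions=[5, 12, 19]))
```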