Search Results for author: Scott Yih

Found 5 papers, 2 papers with code

In-Context Pretraining: Language Modeling Beyond Document Boundaries

no code implementations · 16 Oct 2023 · Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Gergely Szilvasy, Rich James, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis

Large language models (LMs) are currently trained to predict tokens given document prefixes, enabling them to directly perform long-form generation and prompting-style tasks which can be reduced to document completion.

In-Context Learning, Language Modelling, +1
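The snippet above describes the standard next-token objective; the title points to packing each training context with related documents rather than arbitrary ones. A minimal sketch of that idea, assuming toy token-id documents and a tiny stand-in model (`pack`, `doc_groups`, and the embedding head are illustrative, not the paper's implementation):

```python
# Minimal sketch (not the paper's code): the usual next-token objective,
# applied to a context window packed with *related* documents rather than
# random ones. The hard-coded doc_groups stand in for the retrieval-based
# document grouping the paper title refers to.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab_size, d_model, max_len = 100, 32, 16

# Toy "documents" as token-id lists; each group is assumed to be related.
doc_groups = [
    [[5, 7, 9, 11], [5, 8, 9, 12]],
    [[40, 41, 42], [44, 45, 46, 47]],
]

def pack(group, sep=0, limit=max_len):
    """Concatenate related documents into one training sequence."""
    seq = []
    for doc in group:
        seq.extend(doc + [sep])
    return seq[:limit]

# Deliberately tiny LM stand-in: embedding -> linear head (no attention).
emb = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

def next_token_loss(seq):
    x = torch.tensor(seq[:-1])   # inputs
    y = torch.tensor(seq[1:])    # targets shifted by one position
    logits = head(emb(x))
    return F.cross_entropy(logits, y)

loss = sum(next_token_loss(pack(g)) for g in doc_groups) / len(doc_groups)
print(float(loss))
```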

RA-DIT: Retrieval-Augmented Dual Instruction Tuning

no code implementations · 2 Oct 2023 · Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih

Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores, but are challenging to build.

Few-Shot Learning, Open-Domain Question Answering, +1
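For context on the abstract above, a minimal sketch of the generic retrieve-then-read pattern that retrieval-augmented LMs build on; the embedder and generator here are placeholders, and RA-DIT's actual contribution is how the retriever and LM are jointly instruction-tuned, which this does not show:

```python
# Minimal sketch (not RA-DIT itself): retrieve passages from an external
# store and prepend them to the prompt so a generator can condition on them.
import numpy as np

np.random.seed(0)

# Hypothetical external data store of passages.
passages = ["Paris is the capital of France.",
            "The Transformer was introduced in 2017.",
            "Binarization quantizes weights to one bit."]

def embed(text, dim=64):
    """Stand-in embedder: hash tokens into a normalized bag-of-words vector."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

store = np.stack([embed(p) for p in passages])

def retrieve(query, k=2):
    """Return the k passages whose embeddings score highest against the query."""
    scores = store @ embed(query)
    return [passages[i] for i in np.argsort(-scores)[:k]]

def build_prompt(query):
    """Prepend retrieved passages to the question for the downstream generator."""
    context = "\n".join(retrieve(query))
    return f"Background:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the capital of France?"))
```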

Reimagining Retrieval Augmented Language Models for Answering Queries

no code implementations · 1 Jun 2023 · Wang-Chiew Tan, Yuliang Li, Pedro Rodriguez, Richard James, Xi Victoria Lin, Alon Halevy, Scott Yih

We present a reality check on large language models and inspect the promise of retrieval-augmented language models in comparison.

Question Answering, Retrieval

BiT: Robustly Binarized Multi-distilled Transformer

2 code implementations · 25 May 2022 · Zechun Liu, Barlas Oguz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad

Modern pre-trained transformers have rapidly advanced the state-of-the-art in machine learning, but have also grown in parameters and computational complexity, making them increasingly difficult to deploy in resource-constrained environments.

Binarization
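As a pointer to what the Binarization tag refers to, here is a minimal sketch of generic 1-bit weight binarization; this is not BiT's exact scheme (which also binarizes activations and uses multi-step distillation), just the common sign-times-scale quantizer:

```python
# Minimal sketch of generic 1-bit weight binarization: each weight is replaced
# by its sign times a per-tensor scale; using the mean absolute value as the
# scale minimizes the L2 reconstruction error for this form.
import torch

torch.manual_seed(0)

def binarize(w: torch.Tensor) -> torch.Tensor:
    """Return a 1-bit approximation alpha * sign(w) of a weight tensor."""
    alpha = w.abs().mean()          # per-tensor scaling factor
    return alpha * torch.sign(w)

w = torch.randn(4, 4)               # toy weight matrix
w_bin = binarize(w)
print("reconstruction error:", torch.norm(w - w_bin).item())
```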
