Search Results for author: Dylan Zinsley

Found 1 paper, 1 paper with code

Simple linear attention language models balance the recall-throughput tradeoff

1 code implementation • 28 Feb 2024 • Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, Christopher Ré

In this work, we explore whether we can improve language model efficiency (e.g. by reducing memory consumption) without compromising on recall.

Language Modelling • Text Generation
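The title refers to linear attention, which replaces the quadratic softmax attention matrix with a fixed-size recurrent state, so memory stays constant in sequence length while recall of earlier tokens is limited by what that bounded state retains. The sketch below is not the paper's implementation; the feature map, dimensions, and function names are illustrative assumptions.

```python
# Minimal sketch of causal linear attention with a fixed-size recurrent state
# (illustrative only; not the authors' code). Memory is O(d_key * d_val),
# independent of sequence length, unlike softmax attention's O(seq_len^2).
import numpy as np

def linear_attention(q, k, v, feature_map=lambda x: np.maximum(x, 0.0) + 1e-6):
    """q, k: (seq_len, d_key); v: (seq_len, d_val). Returns (seq_len, d_val)."""
    seq_len, d_key = q.shape
    d_val = v.shape[1]
    state = np.zeros((d_key, d_val))  # running sum of phi(k_t) v_t^T
    norm = np.zeros(d_key)            # running sum of phi(k_t), for normalization
    out = np.zeros((seq_len, d_val))
    for t in range(seq_len):
        phi_k = feature_map(k[t])
        phi_q = feature_map(q[t])
        state += np.outer(phi_k, v[t])            # fold token t into the state
        norm += phi_k
        out[t] = phi_q @ state / (phi_q @ norm + 1e-6)
    return out

# Usage: a 128-token sequence with 16-dim heads; the state never grows with length.
q = np.random.randn(128, 16)
k = np.random.randn(128, 16)
v = np.random.randn(128, 16)
print(linear_attention(q, k, v).shape)  # (128, 16)
```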
