1 code implementation • ICLR 2022 • Wengong Jin, Jeremy Wohlwend, Regina Barzilay, Tommi Jaakkola
In this paper, we propose a generative model to automatically design the CDRs of antibodies with enhanced binding specificity or neutralization capabilities.
2 code implementations • EMNLP 2020 • Alexander Lin, Jeremy Wohlwend, Howard Chen, Tao Lei
The performance of autoregressive models on natural language generation tasks has dramatically improved due to the adoption of deep, self-attentive architectures.
Ranked #21 on Machine Translation on IWSLT 2014 German-English
no code implementations • 21 May 2020 • Jing Pan, Joshua Shapiro, Jeremy Wohlwend, Kyu J. Han, Tao Lei, Tao Ma
In this paper we present state-of-the-art (SOTA) performance on the LibriSpeech corpus with two novel neural network architectures, a multistream CNN for acoustic modeling and a self-attentive simple recurrent unit (SRU) for language modeling.
Ranked #7 on Speech Recognition on LibriSpeech test-clean
1 code implementation • WS 2019 • Jeremy Wohlwend, Ethan R. Elenberg, Samuel Altschul, Shawn Henry, Tao Lei
However, in many real-world applications, the label set is frequently changing.
2 code implementations • EMNLP 2020 • Ziheng Wang, Jeremy Wohlwend, Tao Lei
Large language models have recently achieved state-of-the-art performance across a wide variety of natural language tasks.
no code implementations • ACL 2019 • Jeremy Wohlwend, Nicholas Matthews, Ivan Itzcovich
Flambé is a machine learning experimentation framework built to accelerate the entire research life cycle.
no code implementations • WS 2019 • Kyle Swanson, Lili Yu, Christopher Fox, Jeremy Wohlwend, Tao Lei
Response suggestion is an important task for building human-computer conversation systems.