no code implementations • 16 Jun 2024 • Garrett Tanzer, Maximus Shengelia, Ken Harrenstien, David Uthus
Then, as a case study, we perform the first human baseline for sign language translation that actually substitutes a human into the machine learning task framing, rather than providing the human with the entire document as context.
no code implementations • 15 Nov 2023 • Cicero Nogueira dos Santos, James Lee-Thorp, Isaac Noble, Chung-Ching Chang, David Uthus
We demonstrate that MoWE performs significantly better than the T5 family of models with a similar number of FLOPs on a variety of NLP tasks.
1 code implementation • 18 May 2023 • David Uthus, Santiago Ontañón, Joshua Ainslie, Mandy Guo
We present our work on developing a multilingual, efficient text-to-text transformer that is suitable for handling long inputs.
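For context, a minimal usage sketch with Hugging Face transformers' LongT5 classes; the mLongT5 checkpoint name below is an assumption (a community-converted checkpoint), and the pretrained model would still need fine-tuning for a downstream task.

```python
# Minimal sketch: running a (m)LongT5 model on a long multilingual input.
# The checkpoint name is an ASSUMPTION; substitute whichever mLongT5
# checkpoint you actually use.
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

MODEL = "agemagician/mlong-t5-tglobal-base"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = LongT5ForConditionalGeneration.from_pretrained(MODEL)

# A long, non-English input document (toy stand-in).
long_document = " ".join(["Ein sehr langes Dokument."] * 500)
inputs = tokenizer(long_document, return_tensors="pt",
                   truncation=True, max_length=4096)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```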
no code implementations • 17 Mar 2023 • Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, Sumit Sanghai
Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive -- not only due to quadratic attention complexity but also due to applying feedforward and projection layers to every token.
Ranked #1 on Long-range modeling on SCROLLS
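As a rough illustration of the conditional-computation idea behind this work (not the authors' implementation), the sketch below gives every token a cheap feedforward branch and routes only the top-k scored tokens through an expensive branch; the module names, dimensions, and routing weights are illustrative assumptions.

```python
# Minimal sketch of conditional feedforward routing: all tokens take a
# light branch; a learned router sends only the k highest-scored tokens
# through the heavy branch. Dimensions and k are illustrative.
import torch
import torch.nn as nn

class ConditionalFFN(nn.Module):
    def __init__(self, d_model=512, d_light=1024, d_heavy=4096, k=16):
        super().__init__()
        self.router = nn.Linear(d_model, 1)  # scores each token
        self.light = nn.Sequential(nn.Linear(d_model, d_light), nn.ReLU(),
                                   nn.Linear(d_light, d_model))
        self.heavy = nn.Sequential(nn.Linear(d_model, d_heavy), nn.ReLU(),
                                   nn.Linear(d_heavy, d_model))
        self.k = k

    def forward(self, x):                           # x: (batch, seq, d_model)
        out = self.light(x)                         # cheap branch for every token
        scores = self.router(x).squeeze(-1)         # (batch, seq) routing scores
        weights = torch.sigmoid(scores)             # soft routing weights
        topk = scores.topk(self.k, dim=-1).indices  # k routed tokens per example
        for b in range(x.size(0)):
            idx = topk[b]
            # Heavy branch applied only to routed tokens, scaled by weight.
            out[b, idx] = out[b, idx] + weights[b, idx, None] * self.heavy(x[b, idx])
        return out

x = torch.randn(2, 128, 512)
print(ConditionalFFN()(x).shape)  # torch.Size([2, 128, 512])
```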
no code implementations • 17 Dec 2022 • David Uthus, Jianmo Ni
RISE is first trained on a retrieval task using a dual-encoder setup, and can then be used to evaluate a generated summary given an input document, without gold reference summaries.
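To make the scoring scheme concrete, here is a minimal sketch using an off-the-shelf sentence-transformers dual encoder as a stand-in for the trained RISE model; the encoder name and example texts are illustrative.

```python
# Minimal sketch of dual-encoder, reference-free summary scoring: embed
# the input document and each candidate summary separately, then use
# their similarity as a quality score. Off-the-shelf encoder as stand-in.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in dual encoder

document = "The committee met on Tuesday and approved the 2024 budget..."
candidates = [
    "The committee approved the 2024 budget.",
    "The weather was sunny on Tuesday.",
]

doc_emb = encoder.encode(document, convert_to_tensor=True)
cand_emb = encoder.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(doc_emb, cand_emb)[0]  # higher = better summary
for summary, score in zip(candidates, scores):
    print(f"{score:.3f}  {summary}")
```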
3 code implementations • Findings (NAACL) 2022 • Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontañón, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models.
Ranked #1 on Text Summarization on BigPatent
no code implementations • NAACL (ACL) 2022 • David Uthus, Maria Voitovich, R. J. Mical
We describe Verse by Verse, our experiment in augmenting the creative process of writing poetry with an AI.
1 code implementation • GeBNLP (COLING) 2020 • Emily Sheng, David Uthus
There is a growing collection of work analyzing and mitigating societal biases in language understanding, generation, and retrieval tasks, though biases in creative tasks remain underexplored.
1 code implementation • ACL 2021 • Parker Riley, Noah Constant, Mandy Guo, Girish Kumar, David Uthus, Zarana Parekh
Unlike previous approaches requiring style-labeled training data, our method makes use of readily available unlabeled text by relying on the implicit stylistic connection between adjacent sentences, and uses labeled data only at inference time.
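A minimal sketch of the adjacency idea, assuming precomputed sentence embeddings and toy modules rather than the paper's T5-based architecture: a style vector extracted from sentence A conditions the reconstruction of its corrupted neighbor B, on the premise that adjacent sentences tend to share style.

```python
# Minimal sketch (names, modules, and dimensions are illustrative, not
# the paper's code): train a style extractor by reconstructing a noisy
# sentence B with help from the style vector of its neighbor A.
import torch
import torch.nn as nn

d = 256
style_encoder = nn.Sequential(nn.Linear(d, d), nn.Tanh())  # style extractor
reconstructor = nn.Linear(2 * d, d)                        # stand-in decoder

def training_step(sent_a_emb, sent_b_emb, corrupt_b_emb):
    """One step: reconstruct B from (noisy B, style of neighbor A)."""
    style = style_encoder(sent_a_emb)                      # style from A
    pred = reconstructor(torch.cat([corrupt_b_emb, style], dim=-1))
    return ((pred - sent_b_emb) ** 2).mean()               # reconstruction loss

# At inference time, the style vector would instead come from exemplars
# of the target style (labeled data used only here, not in training).
a, b, noisy_b = torch.randn(3, 8, d)
print(training_step(a, b, noisy_b).item())
```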
no code implementations • 28 Sep 2020 • Parker Riley, Noah Constant, Mandy Guo, Girish Kumar, David Uthus, Zarana Parekh
We present a novel approach to the challenging problem of label-free text style transfer.