no code implementations • Findings (EMNLP) 2021 • Dhanasekar Sundararaman, Henry Tsai, Kuang-Huei Lee, Iulia Turc, Lawrence Carin
It has been shown that training multi-task models with auxiliary tasks can improve the quality of the target task through cross-task transfer.
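Cross-task transfer of this form is commonly implemented by sharing an encoder between the target and auxiliary tasks and summing their losses. A minimal PyTorch sketch, assuming an illustrative `aux_weight` and toy task heads rather than the paper's actual setup:

```python
import torch
import torch.nn as nn

# Minimal multi-task sketch: one shared encoder, two task-specific heads.
# `aux_weight` trades off the auxiliary loss against the target loss;
# it is an illustrative hyperparameter, not a value from the paper.
class MultiTaskModel(nn.Module):
    def __init__(self, hidden=128, n_target=2, n_aux=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(300, hidden), nn.ReLU())
        self.target_head = nn.Linear(hidden, n_target)
        self.aux_head = nn.Linear(hidden, n_aux)

    def forward(self, x):
        h = self.encoder(x)
        return self.target_head(h), self.aux_head(h)

model = MultiTaskModel()
loss_fn = nn.CrossEntropyLoss()
aux_weight = 0.3

x = torch.randn(8, 300)               # dummy batch of input features
y_target = torch.randint(0, 2, (8,))  # target-task labels
y_aux = torch.randint(0, 5, (8,))     # auxiliary-task labels

target_logits, aux_logits = model(x)
loss = loss_fn(target_logits, y_target) + aux_weight * loss_fn(aux_logits, y_aux)
loss.backward()  # gradients from both tasks update the shared encoder
```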
4 code implementations • 7 Oct 2022 • Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova
Visually-situated language is ubiquitous -- sources range from textbooks with diagrams, to web pages with images and tables, to mobile apps with buttons and forms.
Ranked #18 on Visual Question Answering (VQA) on InfographicVQA
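The released Pix2Struct checkpoints have a Hugging Face transformers port; a hedged inference sketch for the infographic-VQA use case (the checkpoint name, image URL, and question below are placeholder assumptions, and the exact API may differ across library versions):

```python
from PIL import Image
import requests
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration

# Hedged sketch of VQA-style inference with a Pix2Struct checkpoint.
# The checkpoint name is an assumption; substitute any released variant.
ckpt = "google/pix2struct-infographics-vqa-base"
processor = Pix2StructProcessor.from_pretrained(ckpt)
model = Pix2StructForConditionalGeneration.from_pretrained(ckpt)

# Placeholder URL; point this at a real infographic image.
url = "https://example.com/infographic.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="What is the total shown?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(outputs[0], skip_special_tokens=True))
```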
1 code implementation • 23 Dec 2021 • Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, David Reitter
With recent improvements in natural language generation (NLG) models across applications, it has become imperative to be able to identify and evaluate whether NLG output shares only verifiable information about the external world.
no code implementations • 30 Jun 2021 • Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, Kristina Toutanova
Zero-shot cross-lingual transfer is emerging as a practical solution: multilingual pre-trained models fine-tuned on a single transfer language exhibit surprisingly strong performance when tested on many target languages.
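The recipe behind this setup is to fine-tune a multilingual checkpoint on labeled data in one transfer language, then evaluate directly on target languages with no target-language training data. A minimal sketch, assuming mBERT and tiny in-memory examples (both illustrative, not the paper's experimental setup):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch of zero-shot cross-lingual transfer: fine-tune on one
# transfer language (English here), then test directly on a target
# language. The two-example "datasets" are illustrative stand-ins.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

english_train = [("the movie was great", 1), ("a dull, plodding film", 0)]
german_test = [("der Film war großartig", 1), ("ein langweiliger Film", 0)]

model.train()
for text, label in english_train:  # fine-tune on the transfer language only
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():  # zero-shot: no target-language training data used
    correct = sum(
        int(model(**tokenizer(t, return_tensors="pt")).logits.argmax(-1) == y)
        for t, y in german_test
    )
print(f"zero-shot accuracy: {correct / len(german_test):.2f}")
```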
3 code implementations • ICLR 2022 • Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick
Experiments with pre-trained models such as BERT are often based on a single checkpoint.
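The MultiBERTs release instead repeats pre-training under multiple random seeds so that claims can be checked for seed sensitivity. A minimal sketch of seed-aware reporting; the scores below are fabricated placeholders, not results from the paper:

```python
import statistics

# Hedged sketch: rather than reporting one number from a single checkpoint,
# evaluate the same fine-tuning setup across several pre-training seeds and
# summarize the spread. These scores are fabricated for illustration.
scores_by_seed = {0: 0.842, 1: 0.851, 2: 0.837, 3: 0.848, 4: 0.845}

mean = statistics.mean(scores_by_seed.values())
stdev = statistics.stdev(scores_by_seed.values())
print(f"accuracy over {len(scores_by_seed)} seeds: {mean:.3f} ± {stdev:.3f}")
```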
6 code implementations • 11 Mar 2021 • Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly used models still require an explicit tokenization step.
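A tokenization-free model instead consumes characters directly. A minimal sketch in the spirit of operating on Unicode code points, with a single modulo hash standing in for the multi-hash embedding lookup that character-level models like CANINE actually use:

```python
# Hedged sketch of tokenization-free input encoding: represent text
# directly as Unicode code points instead of running a subword tokenizer.
# The single modulo hash below is a simplification for illustration.
def to_codepoints(text: str) -> list[int]:
    return [ord(ch) for ch in text]

def codepoint_to_bucket(cp: int, num_buckets: int = 16_000) -> int:
    # Map the very large codepoint space into a fixed-size embedding table.
    return cp % num_buckets

text = "Tokenization-free: où est la gare?"
ids = [codepoint_to_bucket(cp) for cp in to_codepoints(text)]
print(ids[:10])
```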
no code implementations • EMNLP 2020 • Gabriel Ilharco, Cesar Ilharco, Iulia Turc, Tim Dettmers, Felipe Ferreira, Kenton Lee
Scale has played a central role in the rapid progress natural language processing has enjoyed in recent years.
40 code implementations • ICLR 2020 • Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.
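The compact student models studied in this line of work are typically trained to match a large teacher's softened output distribution. A minimal sketch of that distillation objective, with an illustrative temperature and random logits standing in for real teacher and student outputs:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of soft-label distillation: the student matches the
# teacher's temperature-softened output distribution. Temperature and
# the logits below are illustrative placeholders.
def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 as is conventional for distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

teacher_logits = torch.randn(8, 2)  # stand-in for a large teacher's outputs
student_logits = torch.randn(8, 2, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
```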