Search Results for author: Iulia Turc

Found 8 papers, 5 papers with code

CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation

4 code implementations • 11 Mar 2021 • Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting

Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step.

Inductive Bias

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

40 code implementations • ICLR 2020 • Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.

Knowledge Distillation • Language Modelling • +2

Measuring Attribution in Natural Language Generation Models

1 code implementation • 23 Dec 2021 • Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, David Reitter

With recent improvements in natural language generation (NLG) models for various applications, it has become imperative to have the means to identify and evaluate whether NLG output is only sharing verifiable information about the external world.

Text Generation

Revisiting the Primacy of English in Zero-shot Cross-lingual Transfer

no code implementations • 30 Jun 2021 • Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, Kristina Toutanova

Zero-shot cross-lingual transfer is emerging as a practical solution: pre-trained models later fine-tuned on one transfer language exhibit surprisingly strong performance when tested on many target languages.

Question Answering • Zero-Shot Cross-Lingual Transfer