ACL (NALOMA, IWCS) 2021 • Aaron Traylor, Ellie Pavlick, Roman Feiman
In modern natural language processing pipelines, it is common practice to "pretrain" a generative language model on a large corpus of text, and then to "finetune" the resulting representations by continuing to train them on a discriminative textual inference task.
ACL 2021 • Aaron Traylor, Roman Feiman, Ellie Pavlick
An open question in natural language processing is the extent to which language models, which are trained with access only to the form of language, are able to capture its meaning.