no code implementations • 2 Jan 2024 • Jiaming Luo, Colin Cherry, George Foster
We conduct a large-scale fine-grained comparative analysis of machine translations (MT) against human translations (HT) through the lens of morphosyntactic divergence.
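As a hedged, minimal sketch of one way such a comparison could be operationalized (not the paper's actual pipeline), the snippet below contrasts the part-of-speech distributions of an MT output and an HT output via Jensen-Shannon divergence; the toy tagged sentences are invented for illustration.

```python
# Illustrative sketch (not the paper's method): compare the POS-tag
# distributions of machine vs. human translations as a crude proxy
# for morphosyntactic divergence.
from collections import Counter
import math

def pos_distribution(tagged_tokens):
    """tagged_tokens: list of (token, POS) pairs from any tagger."""
    counts = Counter(tag for _, tag in tagged_tokens)
    total = sum(counts.values())
    return {tag: c / total for tag, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two tag distributions."""
    tags = set(p) | set(q)
    def kl(a, b):
        return sum(a[t] * math.log2(a[t] / b[t])
                   for t in tags if a.get(t, 0) > 0)
    m = {t: 0.5 * (p.get(t, 0) + q.get(t, 0)) for t in tags}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

mt = [("it", "PRON"), ("runs", "VERB"), ("fast", "ADV")]
ht = [("it", "PRON"), ("is", "AUX"), ("running", "VERB"), ("quickly", "ADV")]
print(js_divergence(pos_distribution(mt), pos_distribution(ht)))
```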
no code implementations • 20 Dec 2022 • Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu
We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes.
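As a hedged illustration of the kind of measurement this entry describes, the sketch below injects one simple noise type (random token dropping) into a model input and reports the resulting ROUGE-1 drop. The summarize() stub, the noise function, and the toy strings are assumptions rather than the paper's setup; ROUGE-1 here is a plain unigram-overlap F1.

```python
# Sketch of the evaluation loop: corrupt the input, then measure the
# ROUGE-1 drop on the resulting summary.
import random
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap ROUGE-1 F1 between two whitespace-tokenized strings."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def add_token_noise(text, drop_prob=0.1, seed=0):
    """One simple noise type: randomly drop input tokens."""
    rng = random.Random(seed)
    return " ".join(t for t in text.split() if rng.random() > drop_prob)

def summarize(text):
    # Placeholder for a real summarization model.
    return " ".join(text.split()[:10])

doc = "a long input document that a summarization model would compress"
ref = "a summarization model would compress this document"
clean = rouge1_f1(summarize(doc), ref)
noisy = rouge1_f1(summarize(add_token_noise(doc)), ref)
print(f"ROUGE-1 drop under noise: {clean - noisy:.3f}")
```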
no code implementations • 16 Nov 2022 • David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George Foster
Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages.
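One standard way to elicit translations from such models is few-shot prompting. The sketch below builds a generic few-shot translation prompt; the example pairs, language names, and the downstream model call are placeholders, not the specific strategies evaluated in the paper.

```python
# Hedged illustration of few-shot prompting for translation with a
# generic text-completion LLM.
def translation_prompt(examples, source_sentence, src="English", tgt="German"):
    """Build a few-shot prompt from (source, target) example pairs."""
    lines = [f"{src}: {s}\n{tgt}: {t}" for s, t in examples]
    lines.append(f"{src}: {source_sentence}\n{tgt}:")
    return "\n\n".join(lines)

examples = [("Good morning.", "Guten Morgen."),
            ("How are you?", "Wie geht es dir?")]
print(translation_prompt(examples, "The weather is nice today."))
# The resulting string would be sent to the LLM; its completion after
# the final "German:" line is taken as the translation.
```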
no code implementations • 30 Sep 2022 • Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, Peter J. Liu
Furthermore, the space of potential low-quality outputs is larger, since arbitrary text can be generated, and it is important to know when to trust the generated output.
Tasks: Abstractive Text Summarization, Out-of-Distribution Detection (+1 more)
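As a minimal baseline in the spirit of this entry (not the specific detectors studied in the paper), one can score each generated summary by its length-normalized log-likelihood under the model and abstain below a threshold:

```python
# Generic selective-generation heuristic: trust the output only when the
# model's mean per-token log-probability is high enough. The threshold
# and the example log-probs are illustrative assumptions.
def sequence_confidence(token_logprobs):
    """Mean per-token log-probability of the generated sequence."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def should_trust(token_logprobs, threshold=-2.0):
    """Flag low-confidence (possibly out-of-distribution) outputs."""
    return sequence_confidence(token_logprobs) >= threshold

# Token log-probs would come from the summarization model's decoder.
print(should_trust([-0.3, -0.8, -1.1]))   # True: confident output
print(should_trust([-4.2, -3.9, -5.0]))   # False: abstain / flag for review
```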
1 code implementation • 21 Oct 2020 • Jiaming Luo, Frederik Hartmann, Enrico Santus, Yuan Cao, Regina Barzilay
We evaluate the model on both deciphered languages (Gothic, Ugaritic) and an undeciphered one (Iberian).
1 code implementation • ACL 2019 • Jiaming Luo, Yuan Cao, Regina Barzilay
In this paper, we propose a novel neural approach for the automatic decipherment of lost languages.
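The proposed approach is neural; purely as a hedged illustration of the underlying matching problem, the baseline below pairs each lost-language token with its most similar known-language candidate by string similarity. The vocabularies and tokens are invented for illustration.

```python
# Non-neural baseline for the decipherment matching problem: align each
# lost-language token to its closest known-language word by a simple
# string-similarity ratio.
from difflib import SequenceMatcher

def best_cognate(lost_token, known_vocab):
    """Return the known word most similar to the lost-language token."""
    return max(known_vocab,
               key=lambda w: SequenceMatcher(None, lost_token, w).ratio())

known_vocab = ["malku", "sarru", "bitu"]   # hypothetical known lexicon
for token in ["malk", "bit"]:              # hypothetical lost tokens
    print(token, "->", best_cognate(token, known_vocab))
```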
no code implementations • 27 Sep 2018 • Jiaming Luo, Yuan Cao, Yonghui Wu
The vast majority of neural models in Natural Language Processing adopt some form of structureless distributed representation.
no code implementations • TACL 2017 • Jiaming Luo, Karthik Narasimhan, Regina Barzilay
This paper focuses on unsupervised modeling of morphological families, collectively comprising a forest over the language vocabulary.
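To make the "forest over the vocabulary" picture concrete, here is a toy sketch (with invented derivations, not the paper's learned model) in which each word either is a root or points to the word it derives from, so each morphological family forms one tree:

```python
# Toy morphological forest: parent pointers link each derived word to its
# base form; roots have parent None. One tree = one morphological family.
parent = {
    "play": None,            # root of the first family
    "playful": "play",
    "playfully": "playful",
    "player": "play",
    "run": None,             # root of a second family
    "running": "run",
}

def family_root(word):
    """Walk parent pointers up to the root of the word's family."""
    while parent.get(word) is not None:
        word = parent[word]
    return word

assert family_root("playfully") == "play"
assert family_root("running") == "run"
```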