no code implementations • WS 2018 • Alexandra Birch, Andrew Finch, Minh-Thang Luong, Graham Neubig, Yusuke Oda
This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018).
no code implementations • ACL 2018 • Chaitanya Malaviya, Matthew R. Gormley, Graham Neubig
Morphological analysis involves predicting the syntactic traits of a word (e.g., {POS: Noun, Case: Acc, Gender: Fem}).
no code implementations • WS 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
This study focuses on the use of incomplete multilingual corpora in multi-encoder NMT and mixture of NMT experts and examines a very simple implementation where missing source translations are replaced by a special symbol <NULL>.
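As a rough sketch of this idea (not the authors' code; the corpus format here is assumed for illustration), missing source translations can be filled in with the special symbol before training:

    # Minimal sketch: replace any missing source-language translation with a
    # special <NULL> symbol so multi-encoder NMT can still consume the example.
    NULL = "<NULL>"

    def fill_missing(examples, source_langs):
        """examples: list of dicts mapping language code -> sentence (possibly absent)."""
        return [{lang: ex.get(lang, NULL) for lang in source_langs} for ex in examples]

    corpus = [{"de": "Hallo Welt", "fr": "Bonjour le monde"},
              {"de": "Guten Morgen"}]          # French side missing
    print(fill_missing(corpus, ["de", "fr"]))
    # -> [{'de': 'Hallo Welt', 'fr': 'Bonjour le monde'},
    #     {'de': 'Guten Morgen', 'fr': '<NULL>'}]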
no code implementations • 23 May 2018 • Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, Graham Neubig
For tasks like code synthesis from natural language, code retrieval, and code summarization, data-driven models have shown great promise.
no code implementations • NAACL 2018 • Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, Satoshi Nakamura
Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar to the input sentence, and then collect n-grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call "translation pieces".
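The core condition can be sketched as follows (a simplified illustration assuming one-to-one word alignments and exact word matching, not the paper's exact implementation):

    # Collect target n-grams ("translation pieces") whose aligned source words
    # all occur in the input sentence.
    def translation_pieces(input_words, retrieved_src, retrieved_tgt, alignment, max_n=4):
        """alignment: set of (src_idx, tgt_idx) pairs from a word aligner."""
        input_set = set(input_words)
        matched_tgt = {t for s, t in alignment if retrieved_src[s] in input_set}
        pieces = set()
        for n in range(1, max_n + 1):
            for i in range(len(retrieved_tgt) - n + 1):
                if all(j in matched_tgt for j in range(i, i + n)):
                    pieces.add(tuple(retrieved_tgt[i:i + n]))
        return pieces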
no code implementations • NAACL 2018 • Frederick Liu, Han Lu, Graham Neubig
Homographs, words with different meanings but the same surface form, have long caused difficulty for machine translation systems, as it is difficult to select the correct translation based on the context.
no code implementations • 14 Feb 2018 • Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without orthography.
no code implementations • NeurIPS 2017 • Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig
Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning.
no code implementations • 11 Dec 2017 • Hao Zhang, Shizhen Xu, Graham Neubig, Wei Dai, Qirong Ho, Guangwen Yang, Eric P. Xing
Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes every example, such as sequences of variable lengths, trees, and graphs.
no code implementations • 6 Dec 2017 • Christy Li, Dimitris Konomis, Graham Neubig, Pengtao Xie, Carol Cheng, Eric Xing
The hope is that the tool can be used to reduce misdiagnosis.
no code implementations • IJCNLP 2017 • Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, Satoshi Nakamura
Compared to traditional statistical machine translation (SMT), neural machine translation (NMT) often sacrifices adequacy for the sake of fluency.
no code implementations • ICLR 2018 • Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, Eduard Hovy
Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes.
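For reference, the standard RAML objective (as usually stated in the RAML literature; the notation here is ours, not copied from this paper) replaces maximum likelihood on the gold output y* with the expected log-likelihood under an exponentiated-payoff distribution:

    \mathcal{L}_{\mathrm{RAML}}(\theta)
      = - \sum_{(x,\, y^*)} \sum_{y \in \mathcal{Y}} q(y \mid y^*; \tau)\, \log p_\theta(y \mid x),
    \qquad
    q(y \mid y^*; \tau) = \frac{\exp\{ r(y, y^*) / \tau \}}{\sum_{y' \in \mathcal{Y}} \exp\{ r(y', y^*) / \tau \}}

where r(y, y*) is the task reward and the temperature τ controls how sharply q concentrates around the gold output.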
no code implementations • 1 Aug 2017 • Kartik Goyal, Graham Neubig, Chris Dyer, Taylor Berg-Kirkpatrick
In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.
no code implementations • 15 Sep 2017 • Matthias Sperber, Graham Neubig, Jan Niehues, Satoshi Nakamura, Alex Waibel
We investigate the problem of manually correcting errors from an automatic speech transcript in a cost-sensitive fashion.
no code implementations • ACL 2017 • Chunting Zhou, Graham Neubig
Labeled sequence transduction is a task of transforming one sequence into another sequence that satisfies desiderata specified by a set of labels.
no code implementations • EMNLP 2017 • Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as an encoder in an attentional encoder-decoder model.
1 code implementation • WS 2017 • Michael Denkowski, Graham Neubig
As a result, it is often difficult to determine whether improvements from research will carry over to systems deployed for real-world use.
no code implementations • WS 2017 • Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura
Training of neural machine translation (NMT) models usually uses mini-batches for efficiency purposes.
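One common strategy examined in work of this kind is length-based bucketing; a minimal sketch (not the paper's exact procedure):

    # Sort sentence pairs by source length, then slice into batches so that
    # padding (and thus wasted computation) within each batch is minimized.
    def length_bucketed_batches(pairs, batch_size):
        """pairs: list of (src_tokens, tgt_tokens) tuples."""
        ordered = sorted(pairs, key=lambda p: len(p[0]))
        return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]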
no code implementations • ACL 2017 • Yusuke Oda, Philip Arthur, Graham Neubig, Koichiro Yoshino, Satoshi Nakamura
In this paper, we propose a new method for calculating the output layer in neural machine translation systems.
no code implementations • WS 2016 • Graham Neubig
This year, the Nara Institute of Science and Technology (NAIST)/Carnegie Mellon University (CMU) submission to the Japanese-English translation track of the 2016 Workshop on Asian Translation was based on attentional neural machine translation (NMT) models.
no code implementations • WS 2015 • Graham Neubig, Makoto Morishita, Satoshi Nakamura
We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
no code implementations • EMNLP 2018 • Xinyi Wang, Hieu Pham, Zihang Dai, Graham Neubig
In this work, we examine methods for data augmentation for text-based tasks such as neural machine translation (NMT).
no code implementations • IWSLT (EMNLP) 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
By using information from these multiple sources, these systems achieve large gains in accuracy.
no code implementations • 19 Oct 2018 • Elizabeth Salesky, Andrew Runge, Alex Coda, Jan Niehues, Graham Neubig
However, the granularity of these subword units is a hyperparameter to be tuned for each language and task, using methods such as grid search.
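A minimal sketch of such a grid search using the sentencepiece library (evaluate_bleu is a hypothetical stand-in for training and scoring a full MT system at each granularity):

    import sentencepiece as spm

    def tune_vocab_size(train_file, candidate_sizes=(1000, 4000, 8000, 16000, 32000)):
        best_size, best_score = None, float("-inf")
        for size in candidate_sizes:
            # Train one subword model per candidate granularity.
            spm.SentencePieceTrainer.train(
                input=train_file, model_prefix=f"bpe_{size}",
                vocab_size=size, model_type="bpe")
            score = evaluate_bleu(f"bpe_{size}.model")  # hypothetical downstream eval
            if score > best_score:
                best_size, best_score = size, score
        return best_size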
no code implementations • 13 Dec 2018 • Graham Neubig, Patrick Littell, Chian-Yu Chen, Jean Lee, Zirui Li, Yu-Hsiang Lin, Yuyan Zhang
In this extended abstract, we describe the beginnings of a new project that will attempt to ease this language documentation process through the use of natural language processing (NLP) technology.
no code implementations • EACL 2017 • Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, Trevor Cohn
We investigate the use of such lexicons to improve language models when textual training data is limited to as few as a thousand sentences.
no code implementations • NAACL 2018 • Austin Matthews, Graham Neubig, Chris Dyer
Languages with productive morphology pose problems for language models that generate words from a fixed vocabulary.
no code implementations • NAACL 2018 • Graham Neubig, Miltiadis Allamanis
As a result, in the past several years there has been an increasing research interest in methods that focus on the intersection of programming and natural language, allowing users to use natural language to interact with computers in the complex ways that programs allow us to do.
no code implementations • WS 2017 • Abhilasha Ravichander, Thomas Manzini, Matthias Grabmair, Graham Neubig, Jonathan Francis, Eric Nyberg
Wang et al. (2015) proposed a method to build semantic parsing datasets by generating canonical utterances using a grammar and having crowdworkers paraphrase them into natural wording.
no code implementations • WS 2017 • Toshiaki Nakazawa, Shohei Higashiyama, Chenchen Ding, Hideya Mino, Isao Goto, Hideto Kazawa, Yusuke Oda, Graham Neubig, Sadao Kurohashi
For the WAT2017, 12 institutions participated in the shared tasks.
no code implementations • WS 2016 • Toshiaki Nakazawa, Chenchen Ding, Hideya Mino, Isao Goto, Graham Neubig, Sadao Kurohashi
For the WAT2016, 15 institutions participated in the shared tasks.
no code implementations • COLING 2016 • Matthias Sperber, Graham Neubig, Jan Niehues, Sebastian Stüker, Alex Waibel
Evaluating the quality of output from language processing systems such as machine translation or speech recognition is an essential step in ensuring that they are sufficient for practical use.
no code implementations • ICLR 2019 • Paul Michel, Graham Neubig, Xian Li, Juan Miguel Pino
Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to a model's input that lead to large degradations in performance.
no code implementations • 22 Jan 2019 • Xiang Kong, Bohan Li, Graham Neubig, Eduard Hovy, Yiming Yang
In this work, we propose a method for neural dialogue response generation that allows not only generating semantically reasonable responses according to the dialogue history, but also explicitly controlling the sentiment of the response via sentiment labels.
no code implementations • TACL 2015 • Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura
We propose a new method for semantic parsing of ambiguous and ungrammatical input, such as search queries.
no code implementations • TACL 2014 • Matthias Sperber, Mirjam Simantzik, Graham Neubig, Satoshi Nakamura, Alex Waibel
In this paper, we study the problem of manually correcting automatic annotations of natural language in as efficient a manner as possible.
no code implementations • LREC 2014 • Hiroaki Shimizu, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura
This makes it possible to compare translation data with simultaneous interpretation data.
no code implementations • LREC 2014 • Shinsuke Mori, Graham Neubig
The experimental results showed that adding annotated sentences to the training corpus is more effective than adding entries to the dictionary.
no code implementations • LREC 2014 • Sakriani Sakti, Keigo Kubo, Sho Matsumiya, Graham Neubig, Tomoki Toda, Satoshi Nakamura, Fumihiro Adachi, Ryosuke Isotani
This paper outlines the recent development on multilingual medical data and multilingual speech recognition system for network-based speech-to-speech translation in the medical domain.
no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown
This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).
no code implementations • TACL 2019 • Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts.
no code implementations • ACL 2019 • Xinyi Wang, Graham Neubig
To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018).
no code implementations • ACL 2019 • Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel
Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses.
no code implementations • NAACL 2019 • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.
no code implementations • LREC 2016 • Matthias Sperber, Graham Neubig, Satoshi Nakamura, Alex Waibel
Our goal is to improve the human transcription quality via appropriate user interface design.
no code implementations • ACL 2019 • Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig
Translation to or from low-resource languages (LRLs) poses challenges for machine translation in terms of both adequacy and fluency.
no code implementations • ACL 2019 • John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, Graham Neubig
While most neural machine translation (NMT) systems are still trained using maximum likelihood estimation, recent work has demonstrated that optimizing systems to directly improve evaluation metrics such as BLEU can significantly improve final translation accuracy.
no code implementations • ACL 2019 • Pengcheng Yin, Graham Neubig
Semantic parsing considers the task of transducing natural language (NL) utterances into machine executable meaning representations (MRs).
no code implementations • 8 Aug 2019 • Denis Peskov, Joe Barrow, Pedro Rodriguez, Graham Neubig, Jordan Boyd-Graber
We investigate and mitigate the effects of noise from Automatic Speech Recognition systems on two factoid Question Answering (QA) tasks.
no code implementations • 21 Aug 2019 • Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, Graham Neubig
In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations.
no code implementations • WS 2019 • Bhargavi Paranjape, Graham Neubig
Utterance-level analysis of the speaker's intentions and emotions is a core task in conversational understanding.
no code implementations • WS 2019 • Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh
This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019).
no code implementations • CONLL 2019 • Austin Matthews, Graham Neubig, Chris Dyer
Recurrent neural network grammars generate sentences using phrase-structure syntax and perform very well on both parsing and language modeling.
no code implementations • ICLR 2020 • Chunting Zhou, Graham Neubig, Jiatao Gu
We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data.
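Sequence-level knowledge distillation, the technique referred to here, can be sketched as follows (the teacher/student interfaces are hypothetical, not a specific library's API):

    # The autoregressive teacher re-translates the training sources; the
    # non-autoregressive student then trains on these simpler, more
    # deterministic targets instead of the original references.
    def distill_dataset(teacher, sources):
        return [(src, teacher.translate(src)) for src in sources]

    def train_student(student, teacher, sources):
        for src, tgt in distill_dataset(teacher, sources):
            student.update(src, tgt)   # ordinary supervised step on distilled pairs
        return student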
no code implementations • LREC 2020 • David R. Mortensen, Xinjian Li, Patrick Littell, Alexis Michaud, Shruti Rijhwani, Antonios Anastasopoulos, Alan W. Black, Florian Metze, Graham Neubig
While phonemic representations are language specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription.
no code implementations • LREC 2020 • Graham Neubig, Shruti Rijhwani, Alexis Palmer, Jordan MacKenzie, Hilaria Cruz, Xinjian Li, Matthew Lee, Aditi Chaudhary, Luke Gessler, Steven Abney, Shirley Anugrah Hayati, Antonios Anastasopoulos, Olga Zamaraeva, Emily Prud'hommeaux, Jennette Child, Sara Child, Rebecca Knowles, Sarah Moeller, Jeffrey Micher, Yiyuan Li, Sydney Zink, Mengzhou Xia, Roshan S Sharma, Patrick Littell
Despite recent advances in natural language processing and other language technology, the application of such technology to language documentation and conservation has been limited.
no code implementations • ACL 2020 • Keita Kurita, Paul Michel, Graham Neubig
Recently, NLP has seen a surge in the usage of large pre-trained models.
no code implementations • WS 2020 • Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, Alexandra Birch
We describe the findings of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020).
no code implementations • WS 2020 • Nikitha Murikinati, Antonios Anastasopoulos, Graham Neubig
Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection.
no code implementations • EMNLP (NLP-COVID19) 2020 • Antonios Anastasopoulos, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, Sylwia Tur
Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Luyu Gao, Xinyi Wang, Graham Neubig
To improve the performance of Neural Machine Translation (NMT) for low-resource languages (LRL), one effective strategy is to leverage parallel data from a related high-resource language (HRL).
no code implementations • NAACL 2021 • Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have proven to be impressively effective at enabling transfer-learning of NLP systems from high-resource languages to low-resource languages.
no code implementations • 2 Nov 2020 • Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.
no code implementations • 26 Nov 2020 • Nicholas Roberts, Davis Liang, Graham Neubig, Zachary C. Lipton
This makes human-level BLEU a misleading benchmark in that modern MT systems cannot approach human-level BLEU while simultaneously maintaining human-level translation diversity.
no code implementations • 27 Jan 2021 • Frank F. Xu, Bogdan Vasilescu, Graham Neubig
A great part of software development involves conceptualizing or communicating the underlying procedures and logic that need to be expressed in programs.
no code implementations • 4 Apr 2021 • Kathleen Siminyu, Xinjian Li, Antonios Anastasopoulos, David Mortensen, Michael R. Marlo, Graham Neubig
Models pre-trained on multiple languages have shown significant promise for improving speech recognition, particularly for low-resource languages.
no code implementations • MTSummit 2021 • Amit Moryossef, Kayo Yin, Graham Neubig, Yoav Goldberg
Sign language translation (SLT) is often decomposed into video-to-gloss recognition and gloss-to-text translation, where a gloss is a sequence of transcribed spoken-language words in the order in which they are signed.
no code implementations • NAACL 2021 • Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas
We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.
no code implementations • WMT (EMNLP) 2021 • Junjie Hu, Graham Neubig
Neural machine translation (NMT) is sensitive to domain shift.
no code implementations • 12 Jul 2021 • Hao Zhu, Graham Neubig, Yonatan Bisk
Positive results from our experiments hint at the importance of explicitly modeling communication as a socio-pragmatic process.
no code implementations • COLING 2020 • Antonios Anastasopoulos, Christopher Cox, Graham Neubig, Hilaria Cruz
This tutorial will focus on NLP for endangered languages documentation and revitalization.
no code implementations • COLING 2020 • Xingyuan Zhao, Satoru Ozaki, Antonios Anastasopoulos, Graham Neubig, Lori Levin
Interlinear Glossed Text (IGT) is a widely used format for encoding linguistic information in language documentation projects and scholarly papers.
no code implementations • 15 Sep 2021 • Patrick Fernandes, Kayo Yin, Emmy Liu, André F. T. Martins, Graham Neubig
Although proper handling of discourse significantly contributes to the quality of machine translation (MT), these improvements are not adequately measured in common translation quality metrics.
no code implementations • NAACL (SUKI) 2022 • Shuyan Zhou, Pengcheng Yin, Graham Neubig
When humans conceive how to perform a particular task, they do so hierarchically: splitting higher-level tasks into smaller sub-tasks.
no code implementations • 28 Sep 2021 • Alex Shypula, Pengcheng Yin, Jeremy Lacomis, Claire Le Goues, Edward Schwartz, Graham Neubig
We also report that SILO's rate of superoptimization on our test set is over five times that of a standard policy gradient approach and a model pre-trained on compiler optimization demonstration.
no code implementations • ICLR 2022 • Frank F. Xu, Junxian He, Graham Neubig, Vincent J. Hellendoorn
Structural locality is a ubiquitous feature of real-world datasets, wherein data points are organized into local hierarchies.
no code implementations • 29 Sep 2021 • Melanie Sclar, Graham Neubig, Yonatan Bisk
Theory of mind (ToM), the ability to understand others' thoughts and desires, is a cornerstone of human intelligence.
no code implementations • Findings (ACL) 2022 • Ting-Rui Chiang, Yi-Pei Chen, Yi-Ting Yeh, Graham Neubig
While multilingual training is now an essential ingredient in machine translation (MT) systems, recent work has demonstrated that it has different effects in different multilingual settings, such as many-to-one, one-to-many, and many-to-many learning.
no code implementations • ACL 2022 • Pengcheng Yin, John Wieting, Avirup Sil, Graham Neubig
Semantic parsers map natural language utterances into meaning representations (e.g., programs).
no code implementations • EAMT 2020 • André F. T. Martins, Joao Graca, Paulo Dimas, Helena Moniz, Graham Neubig
This paper presents the Multilingual Artificial Intelligence Agent Assistant (MAIA), a project led by Unbabel with the collaboration of CMU, INESC-ID and IT Lisbon.
no code implementations • EMNLP 2021 • Aditi Chaudhary, Kayo Yin, Antonios Anastasopoulos, Graham Neubig
Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language.
no code implementations • WMT (EMNLP) 2020 • Lucia Specia, Zhenhao Li, Juan Pino, Vishrav Chaudhary, Francisco Guzmán, Graham Neubig, Nadir Durrani, Yonatan Belinkov, Philipp Koehn, Hassan Sajjad, Paul Michel, Xian Li
We report the findings of the second edition of the shared task on improving robustness in Machine Translation (MT).
no code implementations • NAACL (AmericasNLP) 2021 • Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Ximena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Giménez-Lugo, Ricardo Ramos, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager-Hois, Vishrav Chaudhary, Graham Neubig, Ngoc Thang Vu, Katharina Kann
This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas.
no code implementations • ACL 2022 • Junjie Hu, Hiroaki Hayashi, Kyunghyun Cho, Graham Neubig
It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus.
no code implementations • 27 Sep 2018 • Danish Pruthi, Mansi Gupta, Nitish Kumar Kulkarni, Graham Neubig, Eduard Hovy
Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks.
no code implementations • 27 Sep 2018 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R Gormley, Graham Neubig
We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.
no code implementations • 25 Sep 2019 • Paul Michel, Elisabeth Salesky, Graham Neubig
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective.
no code implementations • ACL 2022 • Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, PengFei Liu
Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data.
no code implementations • 25 Mar 2022 • Aditi Chaudhary, Zaid Sheikh, David R Mortensen, Antonios Anastasopoulos, Graham Neubig
Each language has its own complex systems of word, phrase, and sentence construction, the guiding principles of which are often summarized in grammar descriptions for the consumption of linguists or language learners.
no code implementations • IWSLT (ACL) 2022 • Brian Yan, Patrick Fernandes, Siddharth Dalmia, Jiatong Shi, Yifan Peng, Dan Berrebbi, Xinyi Wang, Graham Neubig, Shinji Watanabe
We use additional paired Modern Standard Arabic data (MSA) to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems.
no code implementations • 10 Jun 2022 • Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios Anastasopoulos, Graham Neubig
This process is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) even if there are such experts, describing all the intricacies of a language is time-consuming and prone to omission.
no code implementations • 23 Aug 2022 • Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, PengFei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig
Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples.
no code implementations • COLING 2022 • Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig
In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.
no code implementations • 11 Oct 2022 • Brian Yan, Siddharth Dalmia, Yosuke Higuchi, Graham Neubig, Florian Metze, Alan W Black, Shinji Watanabe
Connectionist Temporal Classification (CTC) is a widely used approach for automatic speech recognition (ASR) that performs conditionally independent monotonic alignment.
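A minimal runnable illustration of this framing with PyTorch's built-in CTC loss (random tensors stand in for real acoustic features; blank index 0 is an assumption of this sketch):

    import torch

    T, N, C, S = 50, 4, 20, 10          # frames, batch, labels (incl. blank=0), target length
    logits = torch.randn(T, N, C, requires_grad=True)
    log_probs = logits.log_softmax(dim=-1)        # per-frame label distributions
    targets = torch.randint(1, C, (N, S), dtype=torch.long)
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), S, dtype=torch.long)

    # CTC marginalizes over all monotonic alignments between frames and labels,
    # treating frames as conditionally independent given the encoder output.
    loss = torch.nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                                # gradients flow back to the logits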
no code implementations • 13 Oct 2022 • Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig
Recent work on tokenizer-free multilingual pretrained models show promising results in improving cross-lingual transfer and reducing engineering overhead (Clark et al., 2022; Xue et al., 2022).
no code implementations • 30 Oct 2022 • Machel Reid, Vincent J. Hellendoorn, Graham Neubig
In text generation, models that generate text from scratch one token at a time are currently the dominant paradigm.
no code implementations • 12 Dec 2022 • Yiwei Qin, Graham Neubig, PengFei Liu
Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks.
no code implementations • 26 Feb 2023 • Shruti Rijhwani, Daisy Rosenblum, Michayla King, Antonios Anastasopoulos, Graham Neubig
There has been recent interest in improving optical character recognition (OCR) for endangered languages, particularly because a large number of documents and books in these languages are not in machine-readable formats.
no code implementations • 1 May 2023 • Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro Henrique Martins, Amanda Bertsch, José G. C. de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, André F. T. Martins
Many recent advances in natural language generation have been fueled by training large language models on internet-scale data.
no code implementations • CVPR 2023 • Hao Zhu, Raghav Kapoor, So Yeon Min, Winson Han, Jiatai Li, Kaiwen Geng, Graham Neubig, Yonatan Bisk, Aniruddha Kembhavi, Luca Weihs
Humans constantly explore and learn about their environment out of curiosity, gather information, and update their models of the world.
1 code implementation • 19 May 2023 • Masahiro Kaneko, Graham Neubig, Naoaki Okazaki
Humans work together to solve common problems by having discussions, explaining, and agreeing or disagreeing with each other.
no code implementations • 24 May 2023 • Yueqi Song, Catherine Cui, Simran Khanuja, PengFei Liu, Fahim Faisal, Alissa Ostapenko, Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Yulia Tsvetkov, Antonios Anastasopoulos, Graham Neubig
Despite the major advances in NLP, significant disparities in NLP system performance across languages still exist.
no code implementations • 25 May 2023 • Anubha Kabra, Emmy Liu, Simran Khanuja, Alham Fikri Aji, Genta Indra Winata, Samuel Cahyawijaya, Anuoluwapo Aremu, Perez Ogayo, Graham Neubig
Figurative language permeates human communication, but at the same time is relatively understudied in NLP.
1 code implementation • 29 May 2023 • Lindia Tjuatja, Emmy Liu, Lori Levin, Graham Neubig
Recent advances in large language models have prompted researchers to examine their abilities across a variety of linguistic tasks, but little has been done to investigate how models handle the interactions in meaning across words and larger syntactic forms, i.e., phenomena at the intersection of syntax and semantics.
no code implementations • 11 Jun 2023 • Manuel Mager, Rajat Bhatnagar, Graham Neubig, Ngoc Thang Vu, Katharina Kann
Neural models have drastically advanced the state of the art for machine translation (MT) between high-resource languages.
no code implementations • 10 Jul 2023 • I-Chun Chern, Zhiruo Wang, Sanjan Das, Bhavuk Sharma, PengFei Liu, Graham Neubig
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information.
no code implementations • 14 Aug 2023 • Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat
Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems.
no code implementations • 27 Oct 2023 • Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios Anastasopoulos, Graham Neubig
This is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) describing all the intricacies of a language is time-consuming and prone to omission.
no code implementations • 10 Nov 2023 • Simran Khanuja, Srinivas Gowriraj, Lucio Dery, Graham Neubig
In this paper, we introduce DEMUX, a framework that prescribes the exact data-points to label from vast amounts of unlabelled multilingual data, having unknown degrees of overlap with the target set.
1 code implementation • 16 Nov 2023 • Anubha Kabra, Sanketh Rangreji, Yash Mathur, Aman Madaan, Emmy Liu, Graham Neubig
Our analysis uncovers that prompting styles that produce less diversity in generations also yield more calibrated results. We therefore experiment with inducing lower generation diversity via temperature scaling, and find that for certain temperatures PAL is not only more accurate but also better calibrated than CoT.
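As a reminder of the mechanism mentioned here, temperature scaling simply rescales the logits before the softmax; a small self-contained sketch (not the paper's code):

    import numpy as np

    def sample_with_temperature(logits, temperature=0.5, seed=0):
        # temperature < 1 sharpens the distribution, reducing generation
        # diversity; temperature > 1 flattens it.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())      # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)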
no code implementations • 12 Jan 2024 • Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, Hannaneh Hajishirzi
On our benchmark, our automatic and human evaluations show that FAVA significantly outperforms ChatGPT and GPT-4 on fine-grained hallucination detection, and edits suggested by FAVA improve the factuality of LM-generated text.
no code implementations • 20 Feb 2024 • Zhengbao Jiang, Zhiqing Sun, Weijia Shi, Pedro Rodriguez, Chunting Zhou, Graham Neubig, Xi Victoria Lin, Wen-tau Yih, Srinivasan Iyer
The standard recipe for doing so involves continued pre-training on new documents followed by instruction-tuning on question-answer (QA) pairs.
1 code implementation • 3 Mar 2024 • Yueqi Song, Simran Khanuja, Graham Neubig
NLP models today strive for supporting multiple languages and modalities, improving accessibility for diverse users.
no code implementations • 11 Mar 2024 • Michael Ginn, Lindia Tjuatja, Taiqi He, Enora Rice, Graham Neubig, Alexis Palmer, Lori Levin
A key aspect of language documentation is the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format.
no code implementations • 19 Mar 2024 • Taiqi He, Kwanghee Choi, Lindia Tjuatja, Nathaniel R. Robinson, Jiatong Shi, Shinji Watanabe, Graham Neubig, David R. Mortensen, Lori Levin
Thousands of the world's languages are in danger of extinction, a tremendous threat to cultural identities and human language diversity.
no code implementations • 18 Mar 2024 • Zhiruo Wang, Zhoujun Cheng, Hao Zhu, Daniel Fried, Graham Neubig
Language models (LMs) are powerful, yet mostly suited to text generation tasks.
no code implementations • 3 Apr 2024 • Emmy Liu, Graham Neubig, Jacob Andreas
Modern language models (LMs) can learn to perform new tasks in different ways: in instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly with a small number of examples; in instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description before making predictions.
no code implementations • 9 Apr 2024 • Junpeng Liu, YiFan Song, Bill Yuchen Lin, Wai Lam, Graham Neubig, Yuanzhi Li, Xiang Yue
Multimodal Large Language models (MLLMs) have shown promise in web-related tasks, but evaluating their performance in the web domain remains a challenge due to the lack of comprehensive benchmarks.
2 code implementations • 5 Mar 2017 • Graham Neubig
This tutorial introduces a new and powerful set of techniques variously called "neural machine translation" or "neural sequence-to-sequence models".
1 code implementation • WS 2019 • Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig
We then use these learned domain differentials to adapt models for the target task accordingly.
1 code implementation • 17 Dec 2021 • Siddhant Arora, Danish Pruthi, Norman Sadeh, William W. Cohen, Zachary C. Lipton, Graham Neubig
Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
2 code implementations • 27 May 2022 • Lucio M. Dery, Paul Michel, Mikhail Khodak, Graham Neubig, Ameet Talwalkar
Auxiliary objectives, supplementary learning signals that are introduced to help aid learning on data-starved or highly complex end-tasks, are commonplace in machine learning.
1 code implementation • WS 2018 • Junjie Hu, Wei-Cheng Chang, Yuexin Wu, Graham Neubig
In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach.
1 code implementation • EMNLP 2020 • Aditi Chaudhary, Antonios Anastasopoulos, Adithya Pratapa, David R. Mortensen, Zaid Sheikh, Yulia Tsvetkov, Graham Neubig
Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data.
1 code implementation • CoNLL (EMNLP) 2021 • Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig
Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text.
1 code implementation • 12 Dec 2022 • Yiwei Qin, Weizhe Yuan, Graham Neubig, PengFei Liu
Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text.
1 code implementation • 10 Oct 2023 • Emmy Liu, Aditi Chaudhary, Graham Neubig
Idioms are common in everyday language, but often pose a challenge to translators because their meanings do not follow from the meanings of their parts.
1 code implementation • 15 Nov 2023 • Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe
In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories.
1 code implementation • 3 Apr 2020 • Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, Antonio Toral
The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations.
1 code implementation • 1 Dec 2020 • Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C. Lipton, Graham Neubig, William W. Cohen
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they ought to be evaluated often go unstated.
1 code implementation • 13 Sep 2021 • Aditi Chaudhary, Kayo Yin, Antonios Anastasopoulos, Graham Neubig
Learning fine-grained distinctions between vocabulary items is a key challenge in learning a new language.
1 code implementation • 26 Mar 2018 • Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stüker, Alex Waibel
Self-attention is a method of encoding sequences of vectors by relating these vectors to each-other based on pairwise similarities.
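That definition maps directly onto a few lines of numpy; a compact sketch of single-head scaled dot-product self-attention (learned query/key/value projections are omitted for brevity):

    import numpy as np

    def self_attention(X):                  # X: (seq_len, d) matrix of vectors
        d = X.shape[1]
        scores = X @ X.T / np.sqrt(d)       # pairwise similarities
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
        return weights @ X                  # each output: weighted mix of all inputs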
1 code implementation • WS 2019 • Shuyan Zhou, Shruti Rijhwani, Graham Neubig
Cross-lingual entity linking (XEL) grounds named entities in a source language to an English Knowledge Base (KB), such as Wikipedia.
1 code implementation • ICML 2020 • Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig
To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.
1 code implementation • 7 Oct 2022 • Emmy Liu, Graham Neubig
We find that the representation of a parent phrase can be predicted with some accuracy given an affine transformation of its children.
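The probe described here amounts to fitting an affine map by least squares; a minimal sketch (the array shapes are our assumption):

    import numpy as np

    def fit_affine(children, parents):
        """children: (n, 2d) concatenated child vectors; parents: (n, d)."""
        X = np.hstack([children, np.ones((children.shape[0], 1))])  # absorb the bias
        W, *_ = np.linalg.lstsq(X, parents, rcond=None)
        return W                          # (2d+1, d); the last row is the bias term

    def predict_parent(W, child_pair):    # child_pair: (2d,) concatenated children
        return np.append(child_pair, 1.0) @ W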
1 code implementation • 1 Jun 2023 • Sameer Jain, Vaishakh Keshava, Swarnashree Mysore Sathyendra, Patrick Fernandes, PengFei Liu, Graham Neubig, Chunting Zhou
Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets.
1 code implementation • 14 Sep 2023 • Nathaniel R. Robinson, Perez Ogayo, David R. Mortensen, Graham Neubig
Without published experimental evidence on the matter, it is difficult for speakers of the world's diverse languages to know how and whether they can use LLMs for their languages.
1 code implementation • 5 Dec 2023 • Atharva Kulkarni, Lucio Dery, Amrith Setlur, Aditi Raghunathan, Ameet Talwalkar, Graham Neubig
We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pre-training objective constructed from the end task data itself.
1 code implementation • ACL 2018 • Craig Stewart, Nikolai Vogler, Junjie Hu, Jordan Boyd-Graber, Graham Neubig
Simultaneous interpretation, translation of the spoken word in real-time, is both highly challenging and physically demanding.
1 code implementation • NAACL 2018 • Yohan Jo, Shivani Poddar, Byungsoo Jeon, Qinlan Shen, Carolyn P. Rose, Graham Neubig
We present a neural architecture for modeling argumentative dialogue that explicitly models the interplay between an Opinion Holder's (OH's) reasoning and a challenger's argument, with the goal of predicting if the argument successfully changes the OH's view.
1 code implementation • IJCNLP 2019 • Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs.
1 code implementation • 24 Apr 2020 • Aman Madaan, Shruti Rijhwani, Antonios Anastasopoulos, Yiming Yang, Graham Neubig
We propose a method of curating high-quality comparable training data for low-resource languages with monolingual annotators.
1 code implementation • EACL 2021 • Zihuiwen Ye, PengFei Liu, Jinlan Fu, Graham Neubig
We perform an analysis of four types of NLP tasks, and both demonstrate the feasibility of fine-grained performance prediction and the necessity to perform reliability analysis for performance prediction methods in the future.
1 code implementation • ACL 2022 • Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Meza-Ruiz, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Ngoc Thang Vu, Katharina Kann
Continued pretraining offers improvements, with an average accuracy of 44.05%.
2 code implementations • 13 Oct 2021 • Damián Blasi, Antonios Anastasopoulos, Graham Neubig
Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development.
1 code implementation • AKBC 2020 • Zhengbao Jiang, Jun Araki, Donghan Yu, Ruohong Zhang, Wei Xu, Yiming Yang, Graham Neubig
We propose several methods that incorporate both structured and textual information to represent relations for this task.
1 code implementation • ACL 2022 • Damian Blasi, Antonios Anastasopoulos, Graham Neubig
Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development.
1 code implementation • 27 Oct 2022 • Amanda Bertsch, Graham Neubig, Matthew R. Gormley
As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models on this data.
1 code implementation • NAACL 2019 • Nikolai Vogler, Craig Stewart, Graham Neubig
Simultaneous interpretation, the translation of speech from one language to another in real-time, is an inherently difficult and strenuous task.
1 code implementation • EMNLP 2020 • Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig
Back-translation has proven to be an effective method to utilize monolingual data in neural machine translation (NMT), and iteratively conducting back-translation can further improve the model performance.
1 code implementation • EMNLP 2018 • Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, Jaime G. Carbonell
Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging.
1 code implementation • IJCNLP 2019 • Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig
The recent success of neural machine translation models relies on the availability of high quality, in-domain data.
1 code implementation • EMNLP 2021 • Adithya Pratapa, Antonios Anastasopoulos, Shruti Rijhwani, Aditi Chaudhary, David R. Mortensen, Graham Neubig, Yulia Tsvetkov
Text generation systems are ubiquitous in natural language processing applications.
2 code implementations • ICLR 2022 • Lucio M. Dery, Paul Michel, Ameet Talwalkar, Graham Neubig
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks.
1 code implementation • 7 Nov 2023 • Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig
As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling.
1 code implementation • NAACL 2021 • Yixin Liu, Graham Neubig, John Wieting
In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.
1 code implementation • EMNLP 2021 • Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo
Reproducible benchmarks are crucial in driving progress of machine translation research.
1 code implementation • EMNLP 2017 • Varun Gangal, Harsh Jhamtani, Graham Neubig, Eduard Hovy, Eric Nyberg
Portmanteaus are a word formation phenomenon where two words are combined to form a new word.
2 code implementations • EMNLP 2020 • John Wieting, Graham Neubig, Taylor Berg-Kirkpatrick
Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences.
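Closeness in embedding space is typically measured with cosine similarity; a tiny sketch (encode() is a hypothetical sentence-encoder interface):

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # sim = cosine(encode("a cat sat on the mat"), encode("a cat was sitting on the mat"))
    # higher sim -> closer in meaning under the model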
1 code implementation • 1 Apr 2024 • Simran Khanuja, Sathyanarayanan Ramamoorthy, Yueqi Song, Graham Neubig
First, we build three pipelines comprising state-of-the-art generative models to do the task.
1 code implementation • 9 Nov 2018 • Shruti Rijhwani, Jiateng Xie, Graham Neubig, Jaime Carbonell
To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language.
1 code implementation • 29 Nov 2019 • Ansong Ni, Pengcheng Yin, Graham Neubig
Experiments on WikiTableQuestions with human annotators show that our method can improve the performance with only 100 active queries, especially for weakly-supervised parsers learnt from a cold start.
1 code implementation • 1 Jul 2022 • Perez Ogayo, Graham Neubig, Alan W Black
This paper focuses on speech synthesis for low-resourced African languages, from corpus creation to sharing and deploying the Text-to-Speech (TTS) systems.
1 code implementation • 3 Apr 2024 • Zaid Sheikh, Antonios Anastasopoulos, Shruti Rijhwani, Lindia Tjuatja, Robbie Jimerson, Graham Neubig
Effectively using Natural Language Processing (NLP) tools in under-resourced languages requires a thorough understanding of the language itself, familiarity with the latest models and training methodologies, and technical expertise to deploy these models.
1 code implementation • ACL 2020 • Antonios Anastasopoulos, Graham Neubig
Most recent work in cross-lingual word embeddings is severely Anglocentric.
1 code implementation • Findings (EMNLP) 2021 • Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig
Adapters are light-weight modules that allow parameter-efficient fine-tuning of pretrained models.
1 code implementation • ACL 2022 • Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, Graham Neubig
To this end, we develop a simple and efficient method that links steps (e.g., "purchase a camera") in an article to other articles with similar goals (e.g., "how to choose a camera"), recursively constructing the KB.
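A schematic sketch of that recursive construction (the article objects and retrieval function are hypothetical interfaces; the depth cap is our addition to guarantee termination):

    def build_kb(article, retrieve_similar_goal, kb=None, depth=0, max_depth=3):
        # Link each step of an article to the article whose goal is most
        # similar, then recurse into the linked article's own steps.
        kb = {} if kb is None else kb
        if depth >= max_depth or article.title in kb:
            return kb
        links = {step: retrieve_similar_goal(step) for step in article.steps}
        kb[article.title] = links
        for linked in links.values():
            if linked is not None:
                build_kb(linked, retrieve_similar_goal, kb, depth + 1, max_depth)
        return kb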
1 code implementation • NAACL 2022 • Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, André F. T. Martins
Despite the progress in machine translation quality estimation and evaluation in the last years, decoding in neural machine translation (NMT) is mostly oblivious to this and centers around finding the most probable translation according to the model (MAP decoding), approximated with beam search.
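One simple quality-aware alternative to MAP decoding is n-best reranking with a quality-estimation metric; a schematic sketch (beam_search and qe_metric are hypothetical interfaces, not a specific library's API):

    def quality_aware_decode(model, qe_metric, src, n_best=5):
        # Generate candidates with beam search, then return the hypothesis the
        # QE metric scores highest rather than the most probable one.
        candidates = model.beam_search(src, n_best=n_best)
        return max(candidates, key=lambda hyp: qe_metric(src, hyp))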
1 code implementation • 2 Mar 2023 • Andy Liu, Hao Zhu, Emmy Liu, Yonatan Bisk, Graham Neubig
We also find some evidence that increasing task difficulty in the training process results in more fluent and precise utterances in evaluation.
2 code implementations • ACL 2017 • Frederick Liu, Han Lu, Chieh Lo, Graham Neubig
Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words.
1 code implementation • ACL 2019 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig
We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS), a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.