no code implementations • TACL 2014 • Matthias Sperber, Mirjam Simantzik, Graham Neubig, Satoshi Nakamura, Alex Waibel
In this paper, we study the problem of manually correcting automatic annotations of natural language in as efficient a manner as possible.
no code implementations • LREC 2014 • Shinsuke Mori, Graham Neubig
The experimental results showed that adding annotated sentences to the training corpus is more effective than adding entries to the dictionary.
no code implementations • LREC 2014 • Hiroaki Shimizu, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura
This makes it possible to compare translation data with simultaneous interpretation data.
no code implementations • LREC 2014 • Sakriani Sakti, Keigo Kubo, Sho Matsumiya, Graham Neubig, Tomoki Toda, Satoshi Nakamura, Fumihiro Adachi, Ryosuke Isotani
This paper outlines the recent development on multilingual medical data and multilingual speech recognition system for network-based speech-to-speech translation in the medical domain.
no code implementations • TACL 2015 • Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura
We propose a new method for semantic parsing of ambiguous and ungrammatical input, such as search queries.
no code implementations • WS 2015 • Graham Neubig, Makoto Morishita, Satoshi Nakamura
We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
1 code implementation • NAACL 2016 • Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, Chris Dyer
Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation.
no code implementations • LREC 2016 • Matthias Sperber, Graham Neubig, Satoshi Nakamura, Alex Waibel
Our goal is to improve the human transcription quality via appropriate user interface design.
1 code implementation • EMNLP 2016 • Graham Neubig, Chris Dyer
Language models (LMs) are statistical models that calculate probabilities over sequences of words or other discrete symbols.
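As a concrete reminder of what such a model computes (standard notation, not specific to this paper), the probability of a sequence is factored by the chain rule:

$$P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})$$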
2 code implementations • EMNLP 2016 • Philip Arthur, Graham Neubig, Satoshi Nakamura
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence.
1 code implementation • EMNLP 2016 • Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, Manabu Okumura
Neural encoder-decoder models have shown great success in many sequence generation tasks.
1 code implementation • EACL 2017 • Jiatao Gu, Graham Neubig, Kyunghyun Cho, Victor O. K. Li
Translating in real-time, a.k.a. simultaneous translation, outputs translation words before the input sentence ends.
no code implementations • WS 2016 • Graham Neubig
This year, the Nara Institute of Science and Technology (NAIST)/Carnegie Mellon University (CMU) submission to the Japanese-English translation track of the 2016 Workshop on Asian Translation was based on attentional neural machine translation (NMT) models.
5 code implementations • 4 Nov 2016 • Hiroaki Hayashi, Jayanth Koushik, Graham Neubig
Adaptive gradient methods for stochastic optimization adjust the learning rate for each parameter locally.
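To make "locally adjusted learning rates" concrete, here is a minimal Adagrad-style update in Python; it illustrates the general family the abstract refers to, not the specific method proposed in the paper:

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    """One Adagrad-style update: each parameter accumulates its own
    squared-gradient history, so frequently-updated parameters take
    smaller steps while rarely-updated ones keep larger ones."""
    cache = cache + grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```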
1 code implementation • EACL 2017 • Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, Noah A. Smith
We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection.
Ranked #20 on Constituency Parsing on Penn Treebank
no code implementations • WS 2016 • Toshiaki Nakazawa, Chenchen Ding, Hideya Mino, Isao Goto, Graham Neubig, Sadao Kurohashi
For the WAT2016, 15 institutions participated in the shared tasks.
no code implementations • COLING 2016 • Matthias Sperber, Graham Neubig, Jan Niehues, Sebastian Stüker, Alex Waibel
Evaluating the quality of output from language processing systems such as machine translation or speech recognition is an essential step in ensuring that they are sufficient for practical use.
4 code implementations • 15 Jan 2017 • Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, Pengcheng Yin
In the static declaration strategy that is used in toolkits like Theano, CNTK, and TensorFlow, the user first defines a computation graph (a symbolic representation of the computation), and then examples are fed into an engine that executes this computation and computes its derivatives.
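A minimal sketch of that define-then-run workflow, using the TensorFlow 1.x-style API (illustrative only; the variable names are ours):

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style static-graph API

tf.disable_eager_execution()

# Step 1: declare the computation graph symbolically, before any data exists.
x = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.random_normal([10, 1]))
y = tf.matmul(x, W)

# Step 2: feed concrete examples into the engine that executes the graph.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output = sess.run(y, feed_dict={x: [[0.0] * 10]})
```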
2 code implementations • 5 Mar 2017 • Graham Neubig
This tutorial introduces a new and powerful set of techniques variously called "neural machine translation" or "neural sequence-to-sequence models".
no code implementations • EACL 2017 • Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, Trevor Cohn
We investigate the use of such lexicons to improve language models when textual training data is limited to as few as a thousand sentences.
no code implementations • EMNLP 2017 • Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoder-decoder model.
6 code implementations • ACL 2017 • Pengcheng Yin, Graham Neubig
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python.
no code implementations • ACL 2017 • Chunting Zhou, Graham Neubig
Labeled sequence transduction is a task of transforming one sequence into another sequence that satisfies desiderata specified by a set of labels.
2 code implementations • ACL 2017 • Frederick Liu, Han Lu, Chieh Lo, Graham Neubig
Previous work has modeled the compositionality of words by creating character-level models of meaning, reducing problems of sparsity for rare words.
no code implementations • ACL 2017 • Yusuke Oda, Philip Arthur, Graham Neubig, Koichiro Yoshino, Satoshi Nakamura
In this paper, we propose a new method for calculating the output layer in neural machine translation systems.
no code implementations • ICLR 2018 • Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, Eduard Hovy
Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes.
2 code implementations • NeurIPS 2017 • Graham Neubig, Yoav Goldberg, Chris Dyer
Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer more flexibility for implementing models that cope with data of varying dimensions and structure, relative to toolkits that operate on statically declared computations (e.g., TensorFlow, CNTK, and Theano).
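For contrast with the static strategy above, a small PyTorch sketch of dynamic declaration, where an ordinary Python loop builds a fresh graph for each variable-length input (the toy model here is ours, not the paper's):

```python
import torch

embed = torch.nn.Embedding(100, 16)
cell = torch.nn.RNNCell(16, 32)

def encode(token_ids):
    """The graph is (re)built as this loop runs, so inputs of any
    length are handled without padding or static shape declarations."""
    h = torch.zeros(1, cell.hidden_size)
    for tok in token_ids:  # plain Python control flow defines the graph
        h = cell(embed(torch.tensor([tok])), h)
    return h

h = encode([3, 7, 42])  # a length-3 "sentence"
```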
no code implementations • NeurIPS 2017 • Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig
Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning.
no code implementations • WS 2017 • Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura
Training of neural machine translation (NMT) models usually uses mini-batches for efficiency purposes.
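One common batch-creation strategy in this space is length-based bucketing; a hedged sketch of one plausible baseline (the paper itself compares several strategies):

```python
def length_bucketed_batches(sentences, batch_size):
    """Sort sentences by length before slicing into mini-batches,
    so each batch needs little padding and wastes little compute."""
    ordered = sorted(sentences, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]
```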
1 code implementation • WS 2017 • Michael Denkowski, Graham Neubig
As a result, it is often difficult to determine whether improvements from research will carry over to systems deployed for real-world use.
1 code implementation • EMNLP 2017 • Varun Gangal, Harsh Jhamtani, Graham Neubig, Eduard Hovy, Eric Nyberg
Portmanteaus are a word formation phenomenon where two words are combined to form a new word.
2 code implementations • EMNLP 2017 • Chaitanya Malaviya, Graham Neubig, Patrick Littell
One central mystery of neural NLP is what neural models "know" about their subject matter.
no code implementations • WS 2017 • Abhilasha Ravichander, Thomas Manzini, Matthias Grabmair, Graham Neubig, Jonathan Francis, Eric Nyberg
Wang et al. (2015) proposed a method to build semantic parsing datasets by generating canonical utterances using a grammar and having crowdworkers paraphrase them into natural wording.
no code implementations • 1 Aug 2017 • Kartik Goyal, Graham Neubig, Chris Dyer, Taylor Berg-Kirkpatrick
In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross entropy trained greedy decoding and cross entropy trained beam decoding baselines.
no code implementations • NAACL 2018 • Frederick Liu, Han Lu, Graham Neubig
Homographs, words with different meanings but the same surface form, have long caused difficulty for machine translation systems, as it is difficult to select the correct translation based on the context.
no code implementations • 15 Sep 2017 • Matthias Sperber, Graham Neubig, Jan Niehues, Satoshi Nakamura, Alex Waibel
We investigate the problem of manually correcting errors from an automatic speech transcript in a cost-sensitive fashion.
no code implementations • IJCNLP 2017 • Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, Satoshi Nakamura
Compared to traditional statistical machine translation (SMT), neural machine translation (NMT) often sacrifices adequacy for the sake of fluency.
no code implementations • WS 2017 • Toshiaki Nakazawa, Shohei Higashiyama, Chenchen Ding, Hideya Mino, Isao Goto, Hideto Kazawa, Yusuke Oda, Graham Neubig, Sadao Kurohashi
For the WAT2017, 12 institutions participated in the shared tasks.
no code implementations • 6 Dec 2017 • Christy Li, Dimitris Konomis, Graham Neubig, Pengtao Xie, Carol Cheng, Eric Xing
The hope is that the tool can be used to reduce mis-diagnosis.
no code implementations • 11 Dec 2017 • Hao Zhang, Shizhen Xu, Graham Neubig, Wei Dai, Qirong Ho, Guangwen Yang, Eric P. Xing
Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes with every example, such as sequences of variable lengths, trees, and graphs.
no code implementations • 14 Feb 2018 • Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without orthography.
1 code implementation • WS 2018 • Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, Liming Wang
In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing.
1 code implementation • TACL 2018 • Jacob Buckman, Graham Neubig
In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models.
1 code implementation • 26 Mar 2018 • Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stüker, Alex Waibel
Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities.
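A minimal unparameterized version of that idea in NumPy (the paper's actual model adds learned projections and acoustic-specific modifications):

```python
import numpy as np

def self_attention(X):
    """Relate each vector to every other via dot-product similarity,
    then output a similarity-weighted average of the sequence."""
    scores = X @ X.T / np.sqrt(X.shape[-1])         # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X

H = self_attention(np.random.randn(5, 8))  # 5 positions, dimension 8
```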
1 code implementation • NAACL 2018 • Yohan Jo, Shivani Poddar, Byungsoo Jeon, Qinlan Shen, Carolyn P. Rose, Graham Neubig
We present a neural architecture for modeling argumentative dialogue that explicitly models the interplay between an Opinion Holder's (OH's) reasoning and a challenger's argument, with the goal of predicting if the argument successfully changes the OH's view.
no code implementations • NAACL 2018 • Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, Satoshi Nakamura
Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar to the input sentence, and then collect $n$-grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call "translation pieces".
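The n-gram collection step can be pictured in a few lines of Python; this is our illustrative sketch of gathering candidate pieces from one retrieved target sentence, not the paper's implementation (which additionally filters by word alignment):

```python
def collect_ngrams(tokens, n_max=4):
    """Enumerate all n-grams (n <= n_max) of a retrieved target
    sentence as candidate "translation pieces"."""
    return {tuple(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)}

pieces = collect_ngrams("das ist ein Test".split())
```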
1 code implementation • NAACL 2018 • Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, Graham Neubig
The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained.
3 code implementations • ACL 2018 • Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, Eduard Hovy
Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion.
Ranked #14 on Dependency Parsing on Penn Treebank
1 code implementation • ACL 2018 • Paul Michel, Graham Neubig
Every person speaks or writes their own flavor of their native language, influenced by a number of factors: the content they tend to talk about, their gender, their social status, or their geographical origin.
1 code implementation • ACL 2018 • Craig Stewart, Nikolai Vogler, Junjie Hu, Jordan Boyd-Graber, Graham Neubig
Simultaneous interpretation, translation of the spoken word in real-time, is both highly challenging and physically demanding.
no code implementations • ACL 2018 • Chaitanya Malaviya, Matthew R. Gormley, Graham Neubig
Morphological analysis involves predicting the syntactic traits of a word (e.g. {POS: Noun, Case: Acc, Gender: Fem}).
no code implementations • 23 May 2018 • Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, Graham Neubig
For tasks like code synthesis from natural language, code retrieval, and code summarization, data-driven models have shown great promise.
no code implementations • NAACL 2018 • Graham Neubig, Miltiadis Allamanis
As a result, in the past several years there has been an increasing research interest in methods that focus on the intersection of programming and natural language, allowing users to use natural language to interact with computers in the complex ways that programs allow us to do.
no code implementations • NAACL 2018 • Austin Matthews, Graham Neubig, Chris Dyer
Languages with productive morphology pose problems for language models that generate words from a fixed vocabulary.
1 code implementation • COLING 2018 • Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, Graham Neubig
Natural language inference (NLI) is the task of determining if a natural language hypothesis can be inferred from a given premise in a justifiable manner.
no code implementations • WS 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
This study focuses on the use of incomplete multilingual corpora in multi-encoder NMT and mixture of NMT experts and examines a very simple implementation where missing source translations are replaced by a special symbol <NULL>.
no code implementations • WS 2018 • Alexandra Birch, Andrew Finch, Minh-Thang Luong, Graham Neubig, Yusuke Oda
This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018).
7 code implementations • ACL 2018 • Pengcheng Yin, Chunting Zhou, Junxian He, Graham Neubig
Semantic parsing is the task of transducing natural language (NL) utterances into formal meaning representations (MRs), commonly represented as tree structures.
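For code generation, the tree-structured MR can be the abstract syntax tree of the target program; Python's standard ast module makes the target structure easy to inspect (an illustration of the output space, not the paper's system):

```python
import ast

# The meaning representation for a code-generation task can be an AST:
tree = ast.parse("my_list.sort(reverse=True)")
print(ast.dump(tree))  # shows the nested Call/Attribute/keyword nodes
```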
1 code implementation • ACL 2018 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, Taylor Berg-Kirkpatrick
This paper examines the problem of generating natural language descriptions of chess games.
1 code implementation • EMNLP 2018 • Graham Neubig, Junjie Hu
This paper examines the problem of adapting neural machine translation systems to new, low-resourced languages (LRLs) as effectively and rapidly as possible.
no code implementations • EMNLP 2018 • Xinyi Wang, Hieu Pham, Zihang Dai, Graham Neubig
In this work, we examine methods for data augmentation for text-based tasks such as neural machine translation (NMT).
1 code implementation • EMNLP 2018 • Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, Tom Mitchell
We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation.
1 code implementation • EMNLP 2018 • Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, Jaime G. Carbonell
Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging.
1 code implementation • EMNLP 2018 • Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick
In this work, we propose a novel generative model that jointly learns discrete syntactic structure and continuous word representations in an unsupervised fashion by cascading an invertible neural network with a structured generative prior.
1 code implementation • EMNLP 2018 • Xinyi Wang, Hieu Pham, Pengcheng Yin, Graham Neubig
Recent advances in Neural Machine Translation (NMT) show that adding syntactic information to NMT systems can improve the quality of their translations.
1 code implementation • EMNLP 2018 • Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, Graham Neubig
In models to generate program source code from natural language, representing this code in a tree structure has been a common approach.
1 code implementation • EMNLP 2018 • Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, Jaime Carbonell
To improve robustness to word order differences, we propose to use self-attention, which allows for a degree of flexibility with respect to word order.
1 code implementation • WS 2018 • Devendra Singh Sachan, Graham Neubig
In multilingual neural machine translation, it has been shown that sharing a single translation model between multiple languages can achieve competitive performance, sometimes even leading to performance gains over bilingually trained models.
1 code implementation • WS 2018 • Junjie Hu, Wei-Cheng Chang, Yuexin Wu, Graham Neubig
In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach.
2 code implementations • EMNLP 2018 • Paul Michel, Graham Neubig
In this paper, we propose a benchmark dataset for Machine Translation of Noisy Text (MTNT), consisting of noisy comments on Reddit (www.reddit.com) and professionally sourced translations.
no code implementations • 27 Sep 2018 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R Gormley, Graham Neubig
We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) --- a novel semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.
no code implementations • 27 Sep 2018 • Danish Pruthi, Mansi Gupta, Nitish Kumar Kulkarni, Graham Neubig, Eduard Hovy
Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks.
4 code implementations • EMNLP 2018 • Pengcheng Yin, Graham Neubig
We present TRANX, a transition-based neural semantic parser that maps natural language (NL) utterances into formal meaning representations (MRs).
Ranked #2 on Semantic Parsing on ATIS
no code implementations • IWSLT (EMNLP) 2018 • Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, Satoshi Nakamura
By using information from these multiple sources, these systems achieve large gains in accuracy.
no code implementations • 19 Oct 2018 • Elizabeth Salesky, Andrew Runge, Alex Coda, Jan Niehues, Graham Neubig
However, the granularity of these subword units is a hyperparameter to be tuned for each language and task, using methods such as grid search.
2 code implementations • ICLR 2019 • Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt
We introduce the problem of learning distributed representations of edits.
1 code implementation • 1 Nov 2018 • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.
1 code implementation • 9 Nov 2018 • Shruti Rijhwani, Jiateng Xie, Graham Neubig, Jaime Carbonell
To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language.
no code implementations • 13 Dec 2018 • Graham Neubig, Patrick Littell, Chian-Yu Chen, Jean Lee, Zirui Li, Yu-Hsiang Lin, Yuyan Zhang
In this extended abstract, we describe the beginnings of a new project that will attempt to ease this language documentation process through the use of natural language processing (NLP) technology.
2 code implementations • ICLR 2019 • Junxian He, Daniel Spokoyny, Graham Neubig, Taylor Berg-Kirkpatrick
The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique.
Ranked #1 on Text Generation on Yahoo Questions
no code implementations • 22 Jan 2019 • Xiang Kong, Bohan Li, Graham Neubig, Eduard Hovy, Yiming Yang
In this work, we propose a method for neural dialogue response generation that allows not only generating semantically reasonable responses according to the dialogue history, but also explicitly controlling the sentiment of the response via sentiment labels.
1 code implementation • ICLR 2019 • Xinyi Wang, Hieu Pham, Philip Arthur, Graham Neubig
Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages.
no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown
This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).
1 code implementation • NAACL 2019 • Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, Graham Neubig
Modern Machine Translation (MT) systems perform consistently well on clean, in-domain text.
1 code implementation • NAACL 2019 • Paul Michel, Xi-An Li, Graham Neubig, Juan Miguel Pino
Adversarial examples --- perturbations to the input of a model that elicit large changes in the output --- have been shown to be an effective way of assessing the robustness of sequence-to-sequence (seq2seq) models.
2 code implementations • NAACL 2019 • Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting
In this paper, we describe compare-mt, a tool for holistic analysis and comparison of the results of systems for language generation tasks such as machine translation.
1 code implementation • NAACL 2019 • Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, Tom M. Mitchell
In this paper, we propose a curriculum learning framework for NMT that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in overall better performance.
1 code implementation • NAACL 2019 • Nikolai Vogler, Craig Stewart, Graham Neubig
Simultaneous interpretation, the translation of speech from one language to another in real-time, is an inherently difficult and strenuous task.
1 code implementation • NAACL 2019 • Chunting Zhou, Xuezhe Ma, Di Wang, Graham Neubig
Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages.
no code implementations • TACL 2019 • Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts.
no code implementations • ICLR 2019 • Paul Michel, Graham Neubig, Xi-An Li, Juan Miguel Pino
Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models, by applying perturbations to the input of a model leading to large degradation in performance.
no code implementations • ACL 2019 • Xinyi Wang, Graham Neubig
To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018).
3 code implementations • NeurIPS 2019 • Paul Michel, Omer Levy, Graham Neubig
Attention is a powerful and ubiquitous mechanism for allowing neural models to focus on particular salient pieces of information by taking their weighted average when making predictions.
1 code implementation • ACL 2019 • Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, Graham Neubig
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages.
1 code implementation • ACL 2019 • Zhengbao Jiang, Pengcheng Yin, Graham Neubig
We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences.
no code implementations • NAACL 2019 • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities.
2 code implementations • ACL 2019 • Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell
It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift.
no code implementations • ACL 2019 • Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel
Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses.
1 code implementation • ACL 2019 • Junxian He, Zhisong Zhang, Taylor Berg-Kirkpatrick, Graham Neubig
The parameters of the source and target models are softly shared through a regularized log likelihood objective.
no code implementations • ACL 2019 • Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig
Translation to or from low-resource languages (LRLs) poses challenges for machine translation in terms of both adequacy and fluency.
1 code implementation • WS 2019 • Xi-An Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad
We share the findings of the first shared task on improving robustness of Machine Translation (MT).
no code implementations • ACL 2019 • Pengcheng Yin, Graham Neubig
Semantic parsing considers the task of transducing natural language (NL) utterances into machine executable meaning representations (MRs).
Ranked #4 on Code Generation on Django
no code implementations • ACL 2019 • John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, Graham Neubig
While most neural machine translation (NMT) systems are still trained using maximum likelihood estimation, recent work has demonstrated that optimizing systems to directly improve evaluation metrics such as BLEU can significantly improve final translation accuracy.
1 code implementation • WS 2019 • Shuyan Zhou, Xiangkai Zeng, Yingqi Zhou, Antonios Anastasopoulos, Graham Neubig
While neural machine translation (NMT) achieves remarkable performance on clean, in-domain text, performance is known to degrade drastically when facing text which is full of typos, grammatical errors and other varieties of noise.
no code implementations • 8 Aug 2019 • Denis Peskov, Joe Barrow, Pedro Rodriguez, Graham Neubig, Jordan Boyd-Graber
We investigate and mitigate the effects of noise from Automatic Speech Recognition systems on two factoid Question Answering (QA) tasks.
4 code implementations • IJCNLP 2019 • Antonios Anastasopoulos, Graham Neubig
Recent years have seen exceptional strides in the task of automatic morphological inflection generation.
1 code implementation • ACL 2019 • Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, Graham Neubig
We then propose Bilingual Lexicon Induction with Semi-Supervision (BLISS) --- a semi-supervised approach that relaxes the isometric assumption while leveraging both limited aligned bilingual lexicons and a larger set of unaligned word embeddings, as well as a novel hubness filtering technique.
no code implementations • 21 Aug 2019 • Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, Graham Neubig
In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations.
1 code implementation • IJCNLP 2019 • Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, Jaime G. Carbonell
Most state-of-the-art models for named entity recognition (NER) rely on the availability of large amounts of labeled data, making them challenging to extend to new, lower-resourced languages.
1 code implementation • IJCNLP 2019 • Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig
The recent success of neural machine translation models relies on the availability of high quality, in-domain data.
1 code implementation • IJCNLP 2019 • Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs.
no code implementations • WS 2019 • Bhargavi Paranjape, Graham Neubig
Utterance-level analysis of the speaker's intentions and emotions is a core task in conversational understanding.
1 code implementation • IJCNLP 2019 • Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, Yiming Yang
In this paper, we investigate a simple fix for posterior collapse which yields surprisingly effective results.
2 code implementations • IJCNLP 2019 • Xuezhe Ma, Chunting Zhou, Xi-An Li, Graham Neubig, Eduard Hovy
Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens.
Ranked #3 on Machine Translation on WMT2016 English-Romanian
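In standard notation (not taken from the paper), the autoregressive factorization contrasts with the non-autoregressive one explored in this line of work, which drops the dependence on previously generated tokens; models like this one add latent variables to recover output dependencies:

$$P_{\text{AR}}(y \mid x) = \prod_{t=1}^{T} P(y_t \mid y_{<t}, x), \qquad P_{\text{NAR}}(y \mid x) = \prod_{t=1}^{T} P(y_t \mid x)$$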
1 code implementation • 11 Sep 2019 • Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, Graham Neubig
Previous storytelling approaches mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr.
Ranked #10 on Visual Storytelling on VIST
1 code implementation • 14 Sep 2019 • John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, Graham Neubig
While most neural machine translation (NMT) systems are still trained using maximum likelihood estimation, recent work has demonstrated that optimizing systems to directly improve evaluation metrics such as BLEU can substantially improve final translation accuracy.
3 code implementations • ACL 2020 • Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton
Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing.
no code implementations • 25 Sep 2019 • Paul Michel, Elisabeth Salesky, Graham Neubig
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective.
1 code implementation • WS 2019 • Shuyan Zhou, Shruti Rijhwani, Graham Neubig
Cross-lingual entity linking (XEL) grounds named entities in a source language to an English Knowledge Base (KB), such as Wikipedia.
4 code implementations • ACL 2019 • John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick
We present a model and methodology for learning paraphrastic sentence embeddings directly from bitext, removing the time-consuming intermediate step of creating paraphrase corpora.
1 code implementation • WS 2019 • Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig
We then use these learned domain differentials to adapt models for the target task accordingly.
2 code implementations • ICLR 2020 • Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, Jaime Carbonell
Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks.
no code implementations • WS 2019 • Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh
This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the annual conference of the Empirical Methods in Natural Language Processing (EMNLP 2019).
no code implementations • CONLL 2019 • Austin Matthews, Graham Neubig, Chris Dyer
Recurrent neural network grammars generate sentences using phrase-structure syntax and perform very well on both parsing and language modeling.
no code implementations • ICLR 2020 • Chunting Zhou, Graham Neubig, Jiatao Gu
We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data.
1 code implementation • ACL 2020 • Antonios Anastasopoulos, Graham Neubig
Most recent work in cross-lingual word embeddings is severely Anglocentric.
2 code implementations • EMNLP 2020 • John Wieting, Graham Neubig, Taylor Berg-Kirkpatrick
Semantic sentence embedding models encode natural language sentences into vectors, such that closeness in embedding space indicates closeness in the semantics between the sentences.
3 code implementations • ACL 2020 • Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig
Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.
Ranked #1 on Relation Extraction on WLPC
1 code implementation • ICML 2020 • Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Jaime Carbonell, Graham Neubig
To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems.
1 code implementation • TACL 2020 • Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession".
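That fill-in-the-blank probing is easy to reproduce with the Hugging Face fill-mask pipeline; a hedged sketch (the model choice is illustrative, not necessarily what the paper used):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
# Probe the LM's factual knowledge by letting it fill in the blank:
for pred in unmasker("Obama is a [MASK] by profession."):
    print(pred["token_str"], round(pred["score"], 3))
```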
1 code implementation • 29 Nov 2019 • Ansong Ni, Pengcheng Yin, Graham Neubig
Experiments on WikiTableQuestions with human annotators show that our method can improve the performance with only 100 active queries, especially for weakly-supervised parsers learnt from a cold start.
5 code implementations • ICLR 2020 • Junxian He, Xinyi Wang, Graham Neubig, Taylor Berg-Kirkpatrick
Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes.
1 code implementation • AKBC 2020 • Zhengbao Jiang, Jun Araki, Donghan Yu, Ruohong Zhang, Wei Xu, Yiming Yang, Graham Neubig
We propose several methods that incorporate both structured and textual information to represent relations for this task.
1 code implementation • ICLR 2020 • Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen
In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.
1 code implementation • 26 Feb 2020 • Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R. Mortensen, Graham Neubig, Alan W. Black, Florian Metze
Multilingual models can improve language processing, particularly for low resource situations, by sharing parameters across languages.
1 code implementation • TACL 2020 • Shuyan Zhou, Shruti Rijhawani, John Wieting, Jaime Carbonell, Graham Neubig
Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts.
4 code implementations • 24 Mar 2020 • Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.
1 code implementation • 3 Apr 2020 • Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, Antonio Toral
The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations.
1 code implementation • EMNLP 2020 • Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig
Back-translation has proven to be an effective method to utilize monolingual data in neural machine translation (NMT), and iteratively conducting back-translation can further improve the model performance.
2 code implementations • 14 Apr 2020 • Keita Kurita, Paul Michel, Graham Neubig
We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure.
2 code implementations • ACL 2020 • Xinyi Wang, Yulia Tsvetkov, Graham Neubig
When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others.
no code implementations • LREC 2020 • David R. Mortensen, Xinjian Li, Patrick Littell, Alexis Michaud, Shruti Rijhwani, Antonios Anastasopoulos, Alan W. Black, Florian Metze, Graham Neubig
While phonemic representations are language specific, phonetic representations (stated in terms of (allo)phones) are much closer to a universal (language-independent) transcription.
2 code implementations • ACL 2020 • Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, Graham Neubig
Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents.
Ranked #3 on Code Generation on CoNaLa-Ext
1 code implementation • 24 Apr 2020 • Aman Madaan, Shruti Rijhwani, Antonios Anastasopoulos, Yiming Yang, Graham Neubig
We propose a method of curating high-quality comparable training data for low-resource languages with monolingual annotators.
no code implementations • LREC 2020 • Graham Neubig, Shruti Rijhwani, Alexis Palmer, Jordan MacKenzie, Hilaria Cruz, Xinjian Li, Matthew Lee, Aditi Chaudhary, Luke Gessler, Steven Abney, Shirley Anugrah Hayati, Antonios Anastasopoulos, Olga Zamaraeva, Emily Prud'hommeaux, Jennette Child, Sara Child, Rebecca Knowles, Sarah Moeller, Jeffrey Micher, Yiyuan Li, Sydney Zink, Mengzhou Xia, Roshan S Sharma, Patrick Littell
Despite recent advances in natural language processing and other language technology, the application of such technology to language documentation and conservation has been limited.
2 code implementations • ACL 2020 • Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. Black, Shrimai Prabhumoye
This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.
1 code implementation • EMNLP (nlpbt) 2020 • Frank F. Xu, Lei Ji, Botian Shi, Junyi Du, Graham Neubig, Yonatan Bisk, Nan Duan
Watching instructional videos is a common way to learn about procedures.
1 code implementation • ACL 2020 • Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, Graham Neubig
Given the complexity of combinations of tasks, languages, and domains in natural language processing (NLP) research, it is computationally prohibitive to exhaustively test newly proposed models on each possible experimental setting.
1 code implementation • ACL 2020 • Shruti Rijhwani, Shuyan Zhou, Graham Neubig, Jaime Carbonell
However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages.
1 code implementation • ACL 2020 • Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.
Ranked #10 on Text-To-SQL on spider (Exact Match Accuracy (Dev) metric)
1 code implementation • NeurIPS 2020 • Junxian He, Taylor Berg-Kirkpatrick, Graham Neubig
While effective, these methods are inefficient at test time as a result of needing to store and index the entire training corpus.
no code implementations • WS 2020 • Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xi-An Li, Alexandra Birch
We describe the findings of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020).
no code implementations • ACL 2020 • Keita Kurita, Paul Michel, Graham Neubig
Recently, NLP has seen a surge in the usage of large pre-trained models.
no code implementations • WS 2020 • Nikitha Murikinati, Antonios Anastasopoulos, Graham Neubig
Cross-lingual transfer between typologically related languages has been proven successful for the task of morphological inflection.
no code implementations • EMNLP (NLP-COVID19) 2020 • Antonios Anastasopoulos, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico, Christian Federman, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, Sylwia Tur
Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.
3 code implementations • 29 Jul 2020 • Hao Zhu, Yonatan Bisk, Graham Neubig
In this paper we demonstrate that context-free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies.
1 code implementation • EMNLP 2020 • Aditi Chaudhary, Antonios Anastasopoulos, Adithya Pratapa, David R. Mortensen, Zaid Sheikh, Yulia Tsvetkov, Graham Neubig
Using cross-lingual transfer, even with no expert annotations in the language of interest, our framework extracts a grammatical specification which is nearly equivalent to those created with large amounts of gold-standard annotated data.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Luyu Gao, Xinyi Wang, Graham Neubig
To improve the performance of Neural Machine Translation (NMT) for low-resource languages (LRL), one effective strategy is to leverage parallel data from a related high-resource language (HRL).
1 code implementation • EMNLP 2020 • Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig
We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages.
1 code implementation • EMNLP 2020 • Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu, Graham Neubig
Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization.
1 code implementation • NAACL 2021 • Zi-Yi Dou, PengFei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig
Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.
no code implementations • NAACL 2021 • Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have proven to be impressively effective at enabling transfer-learning of NLP systems from high-resource languages to low-resource languages.
1 code implementation • EMNLP 2020 • Sai Muralidhar Jayanthi, Danish Pruthi, Graham Neubig
We introduce NeuSpell, an open-source toolkit for spelling correction in English.
1 code implementation • NAACL 2021 • Yixin Liu, Graham Neubig, John Wieting
In most cases, the lack of parallel corpora makes it impossible to directly train supervised models for the text style transfer task.
no code implementations • 2 Nov 2020 • Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Danish Pruthi, Bhuwan Dhingra, Graham Neubig, Zachary C. Lipton
For many prediction tasks, stakeholders desire not only predictions but also supporting evidence that a human can use to verify its correctness.
2 code implementations • Findings (ACL) 2021 • Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad
Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinate additional content not supported by the input.
2 code implementations • EMNLP 2020 • Shruti Rijhwani, Antonios Anastasopoulos, Graham Neubig
There is little to no data available to build natural language processing models for most endangered languages.