1 code implementation • Findings (ACL) 2022 • Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, Dan Roth
We study the problem of few-shot learning for named entity recognition.
no code implementations • Findings (ACL) 2021 • Elsbeth Turcan, Shuai Wang, Rishita Anubhai, Kasturi Bhattacharjee, Yaser Al-Onaizan, Smaranda Muresan
Detecting what emotions are expressed in text is a well-studied problem in natural language processing.
no code implementations • 30 Nov 2020 • Shang-Wen Li, Jason Krone, Shuyan Dong, Yi Zhang, Yaser Al-Onaizan
Recently, deep learning has dominated many machine learning areas, including spoken language understanding (SLU).
no code implementations • EMNLP 2020 • Kasturi Bhattacharjee, Miguel Ballesteros, Rishita Anubhai, Smaranda Muresan, Jie Ma, Faisal Ladhak, Yaser Al-Onaizan
Leveraging large amounts of unlabeled data with Transformer-based architectures such as BERT has gained popularity in recent times, owing to their effectiveness in learning general representations that can then be fine-tuned for downstream tasks with much success.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jie Ma, Shuai Wang, Rishita Anubhai, Miguel Ballesteros, Yaser Al-Onaizan
(2) Capturing the long-range dependency, specifically, the connection between an event trigger and a distant event argument.
1 code implementation • ACL 2020 • Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, Siva Reddy
To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and the other which doesn't.
1 code implementation • ACL 2020 • Faisal Ladhak, Bryan Li, Yaser Al-Onaizan, Kathleen McKeown
We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides.
no code implementations • ACL 2020 • Xing Niu, Prashant Mathur, Georgiana Dinu, Yaser Al-Onaizan
Neural Machine Translation (NMT) models are sensitive to small perturbations in the input.
no code implementations • WS 2020 • Georgiana Dinu, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan
A variety of natural language tasks require processing of textual data which contains a mix of natural language and formal languages such as mathematical expressions.
no code implementations • EMNLP 2020 • Miguel Ballesteros, Rishita Anubhai, Shuai Wang, Nima Pourdamghani, Yogarshi Vyas, Jie Ma, Parminder Bhatia, Kathleen McKeown, Yaser Al-Onaizan
In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations.
no code implementations • WS 2019 • Sravan Bodapati, Hyokun Yun, Yaser Al-Onaizan
Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise.
no code implementations • WS 2019 • Sravan Babu Bodapati, Spandana Gella, Kasturi Bhattacharjee, Yaser Al-Onaizan
User-generated text on social media often suffers from many undesired characteristics, including hate speech, abusive language, and insults.
no code implementations • ACL 2019 • Kalpit Dixit, Yaser Al-Onaizan
Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions.
Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)
1 code implementation • ACL 2019 • Georgiana Dinu, Prashant Mathur, Marcello Federico, Yaser Al-Onaizan
This paper proposes a novel method to inject custom terminology into neural machine translation at run time.
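One family of approaches to run-time terminology injection is to inline-annotate source sentences with the desired target terms at training time, so the model learns to copy the provided translations. The sketch below is a hypothetical illustration of that idea only; the tag names and the exact annotation scheme are assumptions, not the paper's actual method.

```python
def annotate_source(tokens, terminology):
    """Inline-annotate source tokens with desired target terms.

    Illustrative sketch of "code-switched" terminology injection:
    each source token that appears in the terminology dictionary is
    followed by its required target translation, wrapped in tags.
    The <term>/</term> tags are hypothetical placeholders.
    """
    out = []
    for tok in tokens:
        if tok in terminology:
            out.extend([tok, "<term>", terminology[tok], "</term>"])
        else:
            out.append(tok)
    return out


# Example: force "cat" to be translated as German "Katze".
annotated = annotate_source(["the", "cat", "sat"], {"cat": "Katze"})
```

An NMT model trained on such annotated data can then be steered at decoding time simply by annotating the input, with no change to the decoder itself.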
no code implementations • EMNLP 2017 • Miguel Ballesteros, Yaser Al-Onaizan
We present a transition-based AMR parser that directly generates AMR parses from plain text.
Ranked #11 on AMR Parsing on LDC2014T12
no code implementations • 12 Jun 2017 • Baskaran Sankaran, Markus Freitag, Yaser Al-Onaizan
Usually, the candidate lists combine output from an external word-to-word aligner, phrase-table entries, and the most frequent words.
1 code implementation • WS 2017 • Markus Freitag, Yaser Al-Onaizan
In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores.
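A variable-size beam can be realized by pruning hypotheses whose score falls too far below the current best, so the effective candidate set shrinks when the model is confident. The function below is a minimal sketch of relative-threshold pruning under assumed data shapes (a list of `(hypothesis, probability)` pairs); the threshold value and interface are illustrative, not the paper's exact configuration.

```python
def prune_beam(candidates, rel_threshold=0.6):
    """Relative-threshold beam pruning (illustrative sketch).

    Keeps only hypotheses whose probability is within a relative
    threshold of the best candidate, so the number of surviving
    hypotheses varies per decoding step depending on the scores.
    `candidates` is a list of (hypothesis, probability) pairs.
    """
    if not candidates:
        return []
    best = max(score for _, score in candidates)
    return [(hyp, s) for hyp, s in candidates if s >= best * rel_threshold]


# When one hypothesis dominates, most competitors are pruned away,
# which is where the decoding speed-up comes from.
survivors = prune_beam([("a", 0.5), ("b", 0.4), ("c", 0.1)])
```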
no code implementations • 6 Feb 2017 • Markus Freitag, Yaser Al-Onaizan, Baskaran Sankaran
Knowledge distillation describes a method for training a student network to perform better by learning from a stronger teacher network.
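Concretely, distillation typically trains the student on the teacher's softened output distribution rather than on hard labels. The sketch below shows the standard temperature-softmax cross-entropy form of this loss in plain Python, as a generic illustration; it is not the specific sequence-level recipe from the paper.

```python
import math

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student against the teacher's soft targets.

    Both distributions are softened with temperature T, the usual
    knowledge-distillation formulation. Generic sketch, not the
    paper's exact (e.g. sequence-level) variant.
    """
    def softmax(xs):
        m = max(xs)  # subtract max for numerical stability
        exps = [math.exp((x - m) / T) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))


# The loss is minimized when the student matches the teacher exactly.
loss_match = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
loss_mismatch = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```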
no code implementations • 20 Dec 2016 • Markus Freitag, Yaser Al-Onaizan
The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus.
no code implementations • 9 Aug 2016 • Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, Abe Ittycheriah
Attention-based Neural Machine Translation (NMT) models suffer from attention deficiency issues, as observed in recent research.
no code implementations • EMNLP 2016 • Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho
In this paper, we propose a novel fine-tuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation.