Search Results for author: Yaser Al-Onaizan

Found 22 papers, 5 papers with code

To BERT or Not to BERT: Comparing Task-specific and Task-agnostic Semi-Supervised Approaches for Sequence Tagging

no code implementations EMNLP 2020 Kasturi Bhattacharjee, Miguel Ballesteros, Rishita Anubhai, Smaranda Muresan, Jie Ma, Faisal Ladhak, Yaser Al-Onaizan

Leveraging large amounts of unlabeled data with Transformer-based architectures such as BERT has gained popularity recently, owing to their effectiveness in learning general representations that can then be fine-tuned for downstream tasks with considerable success.

Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions

1 code implementation ACL 2020 Arjun R. Akula, Spandana Gella, Yaser Al-Onaizan, Song-Chun Zhu, Siva Reddy

To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and the other which doesn't.

Contrastive Learning, Multi-Task Learning, +2

Exploring Content Selection in Summarization of Novel Chapters

1 code implementation ACL 2020 Faisal Ladhak, Bryan Li, Yaser Al-Onaizan, Kathleen McKeown

We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides.

Extractive Summarization, News Summarization

Joint translation and unit conversion for end-to-end localization

no code implementations WS 2020 Georgiana Dinu, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan

A variety of natural language tasks require processing of textual data which contains a mix of natural language and formal languages such as mathematical expressions.

Data Augmentation, Translation

Robustness to Capitalization Errors in Named Entity Recognition

no code implementations WS 2019 Sravan Bodapati, Hyokun Yun, Yaser Al-Onaizan

Robustness to capitalization errors is a highly desirable characteristic of named entity recognizers, yet we find standard models for the task are surprisingly brittle to such noise.

Data Augmentation, named-entity-recognition, +2
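The excerpt above notes that standard NER models are brittle to capitalization noise, and the paper's tags mention data augmentation. A minimal sketch of the general idea, training on case-perturbed copies of sentences so the tagger cannot rely on casing alone, is shown below; the function name, parameters, and recipe are illustrative assumptions, not the paper's exact method.

```python
import random

def augment_capitalization(sentences, lower_prob=0.5, seed=0):
    """Hypothetical augmentation for capitalization robustness in NER:
    for each training sentence (a list of tokens), add a fully lowercased
    copy with probability `lower_prob`, so casing stops being a reliable
    entity cue. Gold labels would be shared with the original sentence."""
    rng = random.Random(seed)
    augmented = list(sentences)
    for sent in sentences:
        if rng.random() < lower_prob:
            augmented.append([tok.lower() for tok in sent])
    return augmented
```

The same scheme extends naturally to other perturbations (all-caps or truecased copies) if those noise types occur in the target data.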

Neural Word Decomposition Models for Abusive Language Detection

no code implementations WS 2019 Sravan Babu Bodapati, Spandana Gella, Kasturi Bhattacharjee, Yaser Al-Onaizan

User-generated text on social media often exhibits many undesired characteristics, including hate speech, abusive language, and insults.

Abusive Language

Span-Level Model for Relation Extraction

no code implementations ACL 2019 Kalpit Dixit, Yaser Al-Onaizan

Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions.

 Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)

Relation, Relation Extraction
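The excerpt frames relation extraction as first identifying entity mention spans and then classifying relations between pairs of them. A toy sketch of the first step, enumerating candidate spans for a span-level model to score, is below; the function and its `max_len` cutoff are illustrative assumptions, not the paper's implementation.

```python
def enumerate_spans(tokens, max_len=3):
    """Enumerate all candidate spans (i, j) over `tokens` up to `max_len`
    tokens long. A span-level model would then classify each span as an
    entity mention (or not), and classify each ordered pair of predicted
    mention spans for a relation type."""
    return [(i, j)
            for i in range(len(tokens))
            for j in range(i + 1, min(i + max_len, len(tokens)) + 1)]
```

Bounding span length keeps the candidate set linear in sentence length rather than quadratic, which matters once every span pair must also be scored.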

AMR Parsing using Stack-LSTMs

no code implementations EMNLP 2017 Miguel Ballesteros, Yaser Al-Onaizan

We present a transition-based AMR parser that directly generates AMR parses from plain text.

AMR Parsing, POS

Attention-based Vocabulary Selection for NMT Decoding

no code implementations12 Jun 2017 Baskaran Sankaran, Markus Freitag, Yaser Al-Onaizan

Usually, the candidate lists are a combination of an external word-to-word aligner's output, phrase-table entries, and the most frequent words.

Machine Translation, NMT, +2
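The excerpt describes building per-sentence candidate vocabularies for NMT decoding from aligner output, phrase-table entries, and frequent words. A minimal sketch of that shortlist construction is below; the function name, table formats, and `top_k` parameter are assumptions for illustration, not the paper's actual interface.

```python
def candidate_vocabulary(source_tokens, align_table, phrase_table,
                         freq_words, top_k=50):
    """Build a per-sentence target-vocabulary shortlist for NMT decoding:
    the union of word-to-word alignment translations, phrase-table
    translations of each source word, and the `top_k` most frequent
    target words. The decoder's softmax is then restricted to this set."""
    cand = set(freq_words[:top_k])
    for tok in source_tokens:
        cand.update(align_table.get(tok, ()))
        cand.update(phrase_table.get(tok, ()))
    return cand
```

Restricting the output softmax to such a shortlist is what makes decoding with very large target vocabularies tractable; the paper's contribution is selecting the candidates with attention rather than these external resources.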

Ensemble Distillation for Neural Machine Translation

no code implementations6 Feb 2017 Markus Freitag, Yaser Al-Onaizan, Baskaran Sankaran

Knowledge distillation describes a method for training a student network to perform better by learning from a stronger teacher network.

Knowledge Distillation, Machine Translation, +3
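The excerpt gives the core idea of knowledge distillation: a student network learns from a stronger teacher. A minimal sketch of the standard soft-target loss (cross-entropy against the teacher's temperature-softened output distribution) is below; this is the generic technique, not necessarily the exact ensemble-distillation objective used in the paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with a distillation temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's softened distribution (soft targets). Minimized when the
    student reproduces the teacher's output distribution."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_p_student).sum()
```

In practice this term is usually interpolated with the ordinary cross-entropy against the gold labels, with the temperature controlling how much of the teacher's uncertainty the student sees.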

Beam Search Strategies for Neural Machine Translation

1 code implementation WS 2017 Markus Freitag, Yaser Al-Onaizan

In this paper, we concentrate on speeding up the decoder by applying a more flexible beam search strategy whose candidate size may vary at each time step depending on the candidate scores.

Machine Translation, NMT, +2

Fast Domain Adaptation for Neural Machine Translation

no code implementations20 Dec 2016 Markus Freitag, Yaser Al-Onaizan

The basic concept in NMT is to train a large Neural Network that maximizes the translation performance on a given parallel corpus.

Domain Adaptation, Machine Translation, +2

Temporal Attention Model for Neural Machine Translation

no code implementations9 Aug 2016 Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, Abe Ittycheriah

Attention-based Neural Machine Translation (NMT) models suffer from attention deficiency issues, as has been observed in recent research.

Machine Translation, NMT, +2

Zero-Resource Translation with Multi-Lingual Neural Machine Translation

no code implementations EMNLP 2016 Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho

In this paper, we propose a novel fine-tuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation.

Machine Translation, Translation
