An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use model-specific input and output layers.
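This shared-encoder, separate-heads design can be sketched in a few lines of PyTorch. The sketch below assumes the Hugging Face `transformers` library; the head names and shapes (a token-level MLM head and a [CLS]-based similarity head) are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: two component models sharing one BERT encoder,
# each with its own task-specific output layer.
import torch.nn as nn
from transformers import BertModel

class HNN(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        # Shared contextual encoder used by both component models.
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Model-specific output layers (illustrative shapes).
        self.mlm_head = nn.Linear(hidden, vocab)  # per-token masked-LM logits
        self.sim_head = nn.Linear(hidden, 1)      # similarity score from [CLS]

    def forward(self, input_ids, attention_mask, task):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        if task == "mlm":
            return self.mlm_head(out.last_hidden_state)    # (batch, seq, vocab)
        return self.sim_head(out.last_hidden_state[:, 0])  # (batch, 1)
```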
We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems.
In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the end-to-end aspect-based sentiment analysis (E2E-ABSA) task.
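E2E-ABSA is commonly cast as sequence tagging over a unified tag set that jointly encodes aspect boundaries and sentiment polarity. The sketch below shows one way to plug BERT's contextual embeddings into such a tagger; the tag set and layer choices are illustrative assumptions rather than a specific paper's configuration.

```python
# Minimal sketch: BERT contextual embeddings feeding a unified
# aspect+sentiment tagging layer for E2E-ABSA.
import torch.nn as nn
from transformers import BertModel

# Unified tags: aspect boundary (B/I/O) fused with sentiment (POS/NEG/NEU).
UNIFIED_TAGS = ["O", "B-POS", "I-POS", "B-NEG", "I-NEG", "B-NEU", "I-NEU"]

class BertForE2EABSA(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        # One unified tag per token jointly marks the aspect span and its polarity.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, len(UNIFIED_TAGS))

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))  # (batch, seq, num_tags)
```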
Adapting models to a new domain without fine-tuning is a challenging problem in deep learning.
Conversational machine comprehension requires a deep understanding of dialogue flow; prior work proposed FlowQA, which implicitly models context representations during reasoning to improve understanding.
Neural sequence-to-sequence models, particularly the Transformer, are the state of the art in machine translation.
Automatic summarization methods have been studied in a variety of domains, including news and scientific articles.
We first learn a biased model that uses only features known to relate to dataset bias.
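A minimal sketch of how such a bias-only model is typically combined with a main model at training time, assuming a product-of-experts style combination (a common choice in the debiasing literature; the sentence above does not specify the exact combination used):

```python
# Product-of-experts debiasing: sum the log-probabilities of the main model
# and a frozen bias-only model, so the main model gains little from
# predictions the biased features already explain.
import torch.nn.functional as F

def poe_loss(main_logits, bias_logits, labels):
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1)
    # cross_entropy renormalizes the combined scores, yielding the
    # log of the normalized product distribution.
    return F.cross_entropy(combined, labels)

# Usage: bias_logits come from a model trained only on bias-related
# features (e.g., a hypothesis-only model for NLI) and are detached so
# gradients flow only through the main model.
# loss = poe_loss(main_logits, bias_logits.detach(), labels)
```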
As avenues for future work, we consider studying additional linguistic features related to humor, and enriching the data with current news events to help identify political or social messages.
The rapid growth of fake news and misleading information spread through online media outlets demands an automatic method for detecting such news articles.