WS 2019

A Hybrid Neural Network Model for Commonsense Reasoning

WS 2019 namisan/mt-dnn

An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers.

LANGUAGE MODELLING · SEMANTIC SIMILARITY · SEMANTIC TEXTUAL SIMILARITY
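The shared-encoder design described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the encoder is a single random projection standing in for BERT, and all weight names (`W_enc`, `W_lm`, `w_sim`) and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, VOCAB = 16, 32

# Stand-in for the shared BERT-based contextual encoder; the weights
# here are untrained random placeholders.
W_enc = rng.standard_normal((HIDDEN, HIDDEN))

def encode(token_embeddings):
    """Shared encoder used by both component models."""
    return np.tanh(token_embeddings @ W_enc)

# Model-specific output layers on top of the shared encoder.
W_lm = rng.standard_normal((HIDDEN, VOCAB))  # masked-LM head
w_sim = rng.standard_normal(HIDDEN)          # similarity-scoring head

def masked_lm_logits(token_embeddings):
    """Masked-LM component: per-token vocabulary logits."""
    return encode(token_embeddings) @ W_lm

def similarity_score(emb_a, emb_b):
    """Similarity component: score a candidate pair from
    mean-pooled shared encodings."""
    pooled = (encode(emb_a).mean(axis=0) + encode(emb_b).mean(axis=0)) / 2
    return float(pooled @ w_sim)

tokens = rng.standard_normal((5, HIDDEN))
print(masked_lm_logits(tokens).shape)  # (5, 32)
```

Both heads read the same contextual representations, so gradients from either task update the one shared encoder while each task keeps its own input/output layers.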

MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension

WS 2019 mrqa/MRQA-Shared-Task-2019

We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems.

MULTI-TASK LEARNING · QUESTION ANSWERING · READING COMPREHENSION

Exploiting BERT for End-to-End Aspect-based Sentiment Analysis

WS 2019 lixin4ever/BERT-E2E-ABSA

In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task.

ASPECT-BASED SENTIMENT ANALYSIS · MODEL SELECTION

Domain-agnostic Question-Answering with Adversarial Training

WS 2019 seanie12/mrqa

Adapting models to a new domain without fine-tuning is a challenging problem in deep learning.

DOMAIN GENERALIZATION · QUESTION ANSWERING
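The abstract does not spell out the training setup, but one common recipe for adversarial domain training is a gradient-reversal layer between a shared encoder and a domain classifier. The sketch below is a generic illustration of that recipe, not this paper's code; `LAM` is a hypothetical hyperparameter.

```python
import numpy as np

LAM = 0.1  # adversarial weight (hypothetical hyperparameter)

def grl_forward(features):
    """Gradient-reversal layer: identity in the forward pass."""
    return features

def grl_backward(grad_from_domain_head):
    """Backward pass: flip and scale the domain classifier's gradient
    before it reaches the shared encoder, pushing the encoder toward
    domain-invariant features."""
    return -LAM * grad_from_domain_head

feats = np.array([0.5, -0.2, 0.9])
out = grl_forward(feats)                       # unchanged features
grad = grl_backward(np.ones_like(feats))       # reversed gradient
print(grad)  # [-0.1 -0.1 -0.1]
```

The domain classifier still minimizes its own loss, while the reversed gradient makes the encoder maximize it, so features that identify the source domain are discouraged.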

FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension

WS 2019 MiuLab/FlowDelta

Conversational machine comprehension requires a deep understanding of the dialogue flow; prior work proposed FlowQA to implicitly model context representations during reasoning for better understanding.

READING COMPREHENSION

Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation

WS 2019 KentonMurray/ProxGradPytorch

Neural sequence-to-sequence models, particularly the Transformer, are the state of the art in machine translation.

MACHINE TRANSLATION

BillSum: A Corpus for Automatic Summarization of US Legislation

WS 2019 FiscalNote/BillSum

Automatic summarization methods have been studied on a variety of domains, including news and scientific articles.

Unlearn Dataset Bias in Natural Language Inference by Fitting the Residual

WS 2019 hhexiy/debiased

We first learn a biased model that uses only features known to relate to dataset bias.

NATURAL LANGUAGE INFERENCE
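The residual-fitting idea above can be illustrated with a log-space ensemble: during training, the main model's logits are added to the frozen biased model's logits, so the main model only needs to explain what the bias features cannot. The numbers below are made up for the example, and this is a sketch of the general recipe rather than the paper's exact formulation.

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

# Hypothetical logits for one 3-way NLI example.
biased_logits = np.array([2.0, 0.1, -1.0])  # bias-only model (frozen)
main_logits = np.array([0.3, 0.2, 0.1])     # main (debiased) model

# Training-time ensemble in log space: the cross-entropy loss is taken
# over the summed logits, so the main model fits the residual of the
# biased model.
ensemble_logp = log_softmax(biased_logits + main_logits)

# At test time the biased model is dropped; only the main model predicts.
test_pred = int(np.argmax(main_logits))
print(test_pred)  # 0
```

Because the biased model already accounts for the easy, bias-driven examples, the gradient reaching the main model concentrates on examples the bias features get wrong.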

Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues

WS 2019 adverifai/Satire_vs_Fake

As avenues for future work, we consider studying additional linguistic features related to the humor aspect, and enriching the data with current news events to help identify a political or social message.

LANGUAGE MODELLING

Do Sentence Interactions Matter? Leveraging Sentence Level Representations for Fake News Classification

WS 2019 MysteryVaibhav/fake_news_semantics

The rising growth of fake news and misleading information through online media outlets demands an automatic method for detecting such news articles.

FEATURE ENGINEERING