
Readers vs. Writers vs. Texts: Coping with Different Perspectives of Text Understanding in Emotion Annotation

WS 2017 JULIELab/EmoBank

Here we examine how different perspectives on understanding written discourse, such as the reader's, the writer's, or the text's point of view, affect the quality of emotion annotations.

READING COMPREHENSION

A Consolidated Open Knowledge Representation for Multiple Texts

WS 2017 vered1986/OKR

We propose to move from Open Information Extraction (OIE) ahead to Open Knowledge Representation (OKR), aiming to represent information conveyed jointly in a set of texts in an open text-based manner.

OPEN INFORMATION EXTRACTION

Character-based Neural Embeddings for Tweet Clustering

WS 2017 vendi12/tweet2vec_clustering

In this paper we show how the performance of tweet clustering can be improved by leveraging character-based neural networks.

Finding Good Conversations Online: The Yahoo News Annotated Comments Corpus

WS 2017 cnap/ynacc

This work presents a dataset and annotation scheme for the new task of identifying "good" conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations.

The BECauSE Corpus 2.0: Annotating Causality and Overlapping Relations

WS 2017 duncanka/BECauSE

The language of cause and effect captures an essential component of the semantics of a text.

DECISION MAKING

LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test

WS 2017 UKPLab/lsdsem2017-story-cloze

The Story Cloze Test is a recent effort to provide a common test scenario for text understanding systems.

Semi-Automated Resolution of Inconsistency for a Harmonized Multiword Expression and Dependency Parse Annotation

WS 2017 eltimster/HAMSTER

This paper presents a methodology for identifying and resolving various kinds of inconsistency in the context of merging dependency and multiword expression (MWE) annotations, to generate a dependency treebank with comprehensive MWE annotations.

Social Bias in Elicited Natural Language Inferences

WS 2017 cjmay/snli-ethics

We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data.

LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE WORD EMBEDDINGS