Our attention then turns to the cross-topic aspect of this work, and the specificity of topics in terms of vocabulary and socio-cultural context.
Natural Language Processing (NLP) is defined by specific, separate tasks, each with its own literature, benchmark datasets, and definitions.
In this position paper, we present a research agenda and ideas for facilitating exposure to diverse viewpoints in news recommendation.
Based on the type of relation, we provide hypotheses about which properties are, and which are not, reflected in distributional data.
Determining how words have changed their meaning is an important topic in Natural Language Processing.
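A common way to operationalize semantic change is to compare a word's embedding across time-sliced spaces; a minimal sketch, assuming aligned vector spaces and using hypothetical toy vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings of the same word in two aligned time-slice spaces.
vec_1900 = np.array([0.9, 0.1, 0.0])
vec_2000 = np.array([0.2, 0.8, 0.1])

# A low similarity between time slices suggests the word's meaning shifted.
shift = 1.0 - cosine_similarity(vec_1900, vec_2000)
```

This is only the simplest distance-based diagnostic; real studies must also control for frequency effects and alignment quality.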
Cross-topic stance detection is the task of automatically detecting stances (pro, against, or neutral) towards unseen topics.
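To make the task definition concrete, here is a naive lexicon-based stance baseline (a toy illustration with assumed cue words, not the systems the paper studies):

```python
# Hypothetical cue-word lexicons; real systems learn such signals from data.
PRO_CUES = {"support", "favor", "agree", "benefit"}
CON_CUES = {"oppose", "against", "harm", "ban"}

def detect_stance(text: str) -> str:
    """Label a text pro, against, or neutral by counting cue words."""
    tokens = set(text.lower().split())
    pro = len(tokens & PRO_CUES)
    con = len(tokens & CON_CUES)
    if pro > con:
        return "pro"
    if con > pro:
        return "against"
    return "neutral"
```

A baseline like this fails exactly where the cross-topic setting is hard: cue words that signal "pro" for one topic may signal "against" for another.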
We investigate the possibilities and limitations of using distributional semantic models for analyzing philosophical data by means of a realistic use-case.
We establish an additional, agreement-independent quality metric based on answer-coherence and evaluate it in comparison to existing metrics.
In this article, we lay out the basic ideas and principles of the project Framing Situations in the Dutch Language.
The user can apply two types of annotations: (1) mappings from expressions to frames and frame elements, and (2) reference relations from mentions to events and participants of the structured data.
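The two annotation types above can be sketched as simple record types; the field names below are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class FrameAnnotation:
    """Type 1: maps an expression in the text to a frame and frame element."""
    expression: str       # token span in the text
    frame: str            # FrameNet-style frame label
    frame_element: str    # role within the frame

@dataclass
class ReferenceAnnotation:
    """Type 2: links a mention to an event or participant in structured data."""
    mention: str          # mention in the text
    event_id: str         # identifier of the structured-data event
    participant_id: str   # identifier of the event participant
```

Keeping the two annotation layers as separate records makes it easy to query frame annotations and reference relations independently.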
Specifically, we inspect the behaviour of models that use a pre-trained background space during learning.
Studying conceptual change with embedding models has become increasingly popular in the Digital Humanities community, while critical observations about these models have received less attention.
The idea behind this method is that properties identified by classifiers, but not through full-vector comparison, are nevertheless captured by the embeddings.
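The underlying probing idea can be sketched as fitting a linear diagnostic classifier on embedding vectors to predict a property; this is an illustrative toy setup with synthetic data, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-d "embeddings": the property is encoded in dimension 1 only,
# so whole-vector similarity would largely miss it.
X = rng.normal(size=(200, 2))
y = (X[:, 1] > 0).astype(int)

# Linear probe via nearest class centroids (a simple stand-in classifier).
mu_pos = X[y == 1].mean(axis=0)
mu_neg = X[y == 0].mean(axis=0)
w = mu_pos - mu_neg
b = -w @ (mu_pos + mu_neg) / 2

# High probe accuracy indicates the property is linearly decodable.
accuracy = np.mean(((X @ w + b) > 0).astype(int) == y)
```

If the probe recovers the property while full-vector cosine comparisons do not, that supports the claim that the embeddings encode it in a subspace.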
This paper presents the two systems submitted by the meaning space team to Task 10 of the SemEval 2018 competition, Capturing Discriminative Attributes.
This paper describes BiographyNet, a digital humanities project (2012-2016) that brings together researchers from history, computational linguistics and computer science.
The complexity of event data in texts makes it difficult to assess their content, especially in larger collections in which different sources report on the same or similar situations.
When people or organizations provide information, they make choices regarding what information they include and how they present it.
This paper presents two alternative NLP architectures for analyzing massive amounts of documents using parallel processing.
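The general pattern of parallel document processing can be sketched with worker processes mapping an analysis function over a collection; the `analyze` stage below is a hypothetical stand-in, not the paper's pipeline:

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(doc: str) -> int:
    """Stand-in for an NLP pipeline stage; here it just counts tokens."""
    return len(doc.split())

docs = [
    "first document here",
    "a second slightly longer document",
    "third",
]

if __name__ == "__main__":
    # Each document is analyzed in a separate worker process.
    with ProcessPoolExecutor(max_workers=2) as pool:
        token_counts = list(pool.map(analyze, docs))
```

Per-document analyses are independent, so this kind of embarrassingly parallel map scales with the number of workers until I/O becomes the bottleneck.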
In the last decade, different aspects of the linguistic encoding of perspectives have been targeted as separate phenomena by different annotation initiatives.
Both sentiment and event factuality are fundamental information levels for our understanding of events mentioned in news texts.
When NLP is used to support research in the humanities, new methodological issues come into play.