Search Results for author: Ines Rehbein

Found 30 papers, 3 papers with code

Come hither or go away? Recognising pre-electoral coalition signals in the news

no code implementations EMNLP 2021 Ines Rehbein, Simone Paolo Ponzetto, Anna Adendorf, Oke Bahnsen, Lukas Stoetzer, Heiner Stuckenschmidt

In this paper, we introduce the task of political coalition signal prediction from text, that is, the task of recognizing from the news coverage leading up to an election the (un)willingness of political parties to form a government coalition.

Multi-Task Learning

I’ve got a construction looks funny – representing and recovering non-standard constructions in UD

no code implementations UDW (COLING) 2020 Josef Ruppenhofer, Ines Rehbein

We argue that a unified treatment of constructions across languages will increase the consistency of the UD annotations and thus the quality of the treebanks for linguistic analysis.

Multilingual NLP

Parsers Know Best: German PP Attachment Revisited

no code implementations COLING 2020 Bich-Ngoc Do, Ines Rehbein

In particular, we show that using gold information for the extraction of attachment candidates as well as a missing comparison of the system's output to the output of a full syntactic parser leads to an overly optimistic assessment of the results.

Treebanking User-Generated Content: a UD Based Overview of Guidelines, Corpora and Unified Recommendations

no code implementations 3 Nov 2020 Manuela Sanguinetti, Lauren Cassidy, Cristina Bosco, Özlem Çetinoğlu, Alessandra Teresa Cignarella, Teresa Lynn, Ines Rehbein, Josef Ruppenhofer, Djamé Seddah, Amir Zeldes

This article presents a discussion on the main linguistic phenomena which cause difficulties in the analysis of user-generated texts found on the web and in social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework of syntactic analysis.

Neural Reranking for Dependency Parsing: An Evaluation

no code implementations ACL 2020 Bich-Ngoc Do, Ines Rehbein

We show that the GCN not only outperforms previous models on English but is the only model that is able to improve results over the baselines on German and Czech.

Dependency Parsing

Treebanking User-Generated Content: A Proposal for a Unified Representation in Universal Dependencies

no code implementations LREC 2020 Manuela Sanguinetti, Cristina Bosco, Lauren Cassidy, Özlem Çetinoğlu, Alessandra Teresa Cignarella, Teresa Lynn, Ines Rehbein, Josef Ruppenhofer, Djamé Seddah, Amir Zeldes

The paper presents a discussion on the main linguistic phenomena of user-generated texts found in web and social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework.

Fine-grained Named Entity Annotations for German Biographic Interviews

no code implementations LREC 2020 Josef Ruppenhofer, Ines Rehbein, Carolina Flinz

Building on the OntoNotes 5.0 NER inventory, our scheme is adapted for a corpus of transcripts of biographic interviews by adding categories for AGE and LAN(guage) and also features extended numeric and temporal categories.


Improving Sentence Boundary Detection for Spoken Language Transcripts

no code implementations LREC 2020 Ines Rehbein, Josef Ruppenhofer, Thomas Schmidt

For the detection of boundaries in spoken language transcripts, we achieve a substantial improvement when framing the boundary detection problem as a sentence pair classification task, as compared to a sequence tagging approach.

Boundary Detection Transfer Learning
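The pair-classification framing above can be illustrated with a minimal sketch (this is an assumption about the data setup, not the authors' code): each candidate position in the transcript becomes a (left context, right context) pair, labelled by whether a sentence boundary falls between the two windows.

```python
# Hedged sketch: framing sentence boundary detection as pair classification.
# The function and parameter names are illustrative, not from the paper.

def make_pair_examples(tokens, boundaries, window=3):
    """Turn a token stream into (left, right, label) pairs.

    tokens: word tokens from a spoken-language transcript
    boundaries: set of indices i where a sentence ends after tokens[i]
    window: number of context tokens on each side of the candidate position
    """
    examples = []
    for i in range(len(tokens) - 1):
        left = tokens[max(0, i - window + 1): i + 1]   # context before the gap
        right = tokens[i + 1: i + 1 + window]          # context after the gap
        label = 1 if i in boundaries else 0            # boundary between them?
        examples.append((left, right, label))
    return examples

tokens = "yeah we did that and then we left".split()
examples = make_pair_examples(tokens, boundaries={3})
```

A pair classifier (e.g. a fine-tuned transformer with a two-segment input) can then be trained on these pairs, instead of tagging each token in sequence.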

A New Resource for German Causal Language

no code implementations LREC 2020 Ines Rehbein, Josef Ruppenhofer

In the paper, we present inter-annotator agreement scores for our dataset and discuss problems for annotating causal language.


Active Learning via Membership Query Synthesis for Semi-Supervised Sentence Classification

1 code implementation CoNLL 2019 Raphael Schumann, Ines Rehbein

Active learning (AL) is a technique for reducing manual annotation effort during the annotation of training data for machine learning classifiers.

Active Learning General Classification +1
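The idea named in the title can be sketched in a few lines (a toy illustration under my own assumptions, not the paper's implementation): in membership query synthesis, the learner does not pick queries from an unlabelled pool but *synthesises* new inputs, here by interpolating between already-labelled examples, and asks the oracle (annotator) to label them.

```python
# Hedged sketch of a membership-query-synthesis active learning loop.
# All names are illustrative; the paper synthesises sentences, not vectors.
import random

def synthesise_query(labelled):
    """Create a synthetic query point by interpolating two labelled vectors."""
    (a, _), (b, _) = random.sample(labelled, 2)
    t = random.random()
    return [t * x + (1 - t) * y for x, y in zip(a, b)]

def active_learning_loop(labelled, oracle, rounds=5):
    for _ in range(rounds):
        query = synthesise_query(labelled)        # generate, don't select
        labelled.append((query, oracle(query)))   # manual annotation step
    return labelled

# toy oracle: the label is the sign of the first coordinate
oracle = lambda v: int(v[0] > 0)
seed = [([1.0, 0.0], 1), ([-1.0, 0.0], 0)]
data = active_learning_loop(seed, oracle)
```

The point of the setup is that annotation effort goes only into queries the learner constructed, rather than into a fixed sample of the pool.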

On the role of discourse relations in persuasive texts

no code implementations WS 2019 Ines Rehbein

We present a corpus study where we control for speaker and topic and show that the distribution of different discourse connectives varies considerably across different discourse settings.

Automatic Alignment and Annotation Projection for Literary Texts

no code implementations WS 2019 Uli Steinbach, Ines Rehbein

This paper presents a modular NLP pipeline for the creation of a parallel literature corpus, followed by annotation transfer from the source to the target language.

Political Text Scaling Meets Computational Semantics

1 code implementation 12 Apr 2019 Federico Nanni, Goran Glavas, Ines Rehbein, Simone Paolo Ponzetto, Heiner Stuckenschmidt

During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science.

Sprucing up the trees – Error detection in treebanks

no code implementations COLING 2018 Ines Rehbein, Josef Ruppenhofer

We present a method for detecting annotation errors in manually and automatically annotated dependency parse trees, based on ensemble parsing in combination with Bayesian inference, guided by active learning.

Active Learning Bayesian Inference +2
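The ensemble-based detection step can be illustrated with a small sketch (my own simplified assumption about the setup, without the Bayesian inference and active learning components the paper adds): a token's gold head annotation becomes suspicious when a strong majority of ensemble parsers agrees on a different head.

```python
# Hedged sketch of ensemble-disagreement error flagging for dependency
# trees. Function names and thresholds are illustrative, not the paper's.
from collections import Counter

def flag_suspicious(gold_heads, ensemble_heads, min_agreement=0.75):
    """Return token indices whose gold head conflicts with a strong
    ensemble majority.

    gold_heads: gold head index for each token
    ensemble_heads: one head list per ensemble parser
    """
    flagged = []
    for i, gold in enumerate(gold_heads):
        votes = Counter(heads[i] for heads in ensemble_heads)
        head, count = votes.most_common(1)[0]
        if head != gold and count / len(ensemble_heads) >= min_agreement:
            flagged.append(i)   # likely annotation error: send to review
    return flagged

gold = [2, 0, 2]                                   # gold head per token
ensemble = [[2, 0, 1], [2, 0, 1], [2, 0, 1], [2, 0, 2]]
suspects = flag_suspicious(gold, ensemble)
```

Flagged tokens would then be prioritised for manual inspection, which is where the active learning component comes in.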

What do we need to know about an unknown word when parsing German

no code implementations WS 2017 Bich-Ngoc Do, Ines Rehbein, Anette Frank

We propose a new type of subword embedding designed to provide more information about unknown compounds, a major source for OOV words in German.

Language Modelling POS +1

Authorship Attribution with Convolutional Neural Networks and POS-Eliding

no code implementations WS 2017 Julian Hitschler, Esther van den Berg, Ines Rehbein

We use a convolutional neural network to perform authorship identification on a very homogeneous dataset of scientific publications.


Evaluating LSTM models for grammatical function labelling

no code implementations WS 2017 Bich-Ngoc Do, Ines Rehbein

To improve grammatical function labelling for German, we augment the labelling component of a neural dependency parser with a decision history.

Dependency Parsing

Detecting annotation noise in automatically labelled data

no code implementations ACL 2017 Ines Rehbein, Josef Ruppenhofer

We introduce a method for error detection in automatically annotated text, aimed at supporting the creation of high-quality language resources at affordable cost.

Active Learning Domain Adaptation +2

Catching the Common Cause: Extraction and Annotation of Causal Relations and their Participants

no code implementations WS 2017 Ines Rehbein, Josef Ruppenhofer

In this paper, we present a simple, yet effective method for the automatic identification and extraction of causal relations from text, based on a large English-German parallel corpus.

Language Modelling

Annotating Discourse Relations in Spoken Language: A Comparison of the PDTB and CCR Frameworks

no code implementations LREC 2016 Ines Rehbein, Merel Scholman, Vera Demberg

In discourse relation annotation, there is currently a variety of different frameworks being used, and most of them have been developed and employed mostly on written data.

The KiezDeutsch Korpus (KiDKo) Release 1.0

no code implementations LREC 2014 Ines Rehbein, Sören Schalowski, Heike Wiese

This paper presents the first release of the KiezDeutsch Korpus (KiDKo), a new language resource with multiparty spoken dialogues of Kiezdeutsch, a newly emerging language variety spoken by adolescents from multiethnic urban areas in Germany.


Yes we can!? Annotating English modal verbs

no code implementations LREC 2012 Josef Ruppenhofer, Ines Rehbein

This paper presents an annotation scheme for English modal verbs together with sense-annotated data from the news domain.

Sentiment Analysis Subjectivity Analysis
