1 code implementation • WS 2017 • Michael Bugert, Yevgeniy Puzikov, Andreas Rücklé, Judith Eckle-Kohler, Teresa Martin, Eugenio Martínez-Cámara, Daniil Sorokin, Maxime Peyrard, Iryna Gurevych
The Story Cloze test is a recent effort in providing a common test scenario for text understanding systems.
1 code implementation • COLING 2016 • Maxime Peyrard, Judith Eckle-Kohler
Integer linear programming and submodular function maximization are popular and successful techniques for extracting summaries in extractive multi-document summarization.
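As a rough illustration of the submodular approach mentioned above, the following is a minimal, hypothetical sketch of greedy sentence selection under a length budget, maximizing a simple coverage objective. The function names, data format, and objective are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: greedy selection for a submodular coverage
# objective under a word-length budget. Not the paper's exact model.

def greedy_summary(sentences, budget):
    """sentences: list of (text, word_set) pairs; budget: max total words."""
    selected, covered, length = [], set(), 0
    while True:
        best, best_gain = None, 0
        for i, (text, words) in enumerate(sentences):
            if i in selected:
                continue
            cost = len(text.split())
            if length + cost > budget:
                continue
            gain = len(words - covered)  # marginal coverage gain
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # nothing fits or no remaining gain
            break
        selected.append(best)
        covered |= sentences[best][1]
        length += len(sentences[best][0].split())
    return [sentences[i][0] for i in selected]

docs = [
    ("the cat sat on the mat", {"cat", "sat", "mat"}),
    ("a dog barked loudly", {"dog", "barked"}),
    ("the cat sat", {"cat", "sat"}),
]
print(greedy_summary(docs, budget=10))
# → ['the cat sat on the mat', 'a dog barked loudly']
```

Greedy selection enjoys a (1 − 1/e)-style approximation guarantee for monotone submodular objectives, which is one reason it is a common alternative to exact ILP solving.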
1 code implementation • COLING 2016 • Christian M. Meyer, Judith Eckle-Kohler, Iryna Gurevych
We introduce the task of detecting cross-lingual marketing blunders, which occur when a trade name resembles an inappropriate or negatively connoted word in a target language.
1 code implementation • ACL 2017 • Maxime Peyrard, Judith Eckle-Kohler
We present a new framework for evaluating extractive summarizers, which is based on a principled representation of summarization as an optimization problem.
1 code implementation • COLING 2016 • Markus Zopf, Maxime Peyrard, Judith Eckle-Kohler
In a detailed analysis, we show that our new corpus is significantly different from the homogeneous corpora commonly used, and that it is heterogeneous along several dimensions.
no code implementations • 11 Sep 2015 • Judith Eckle-Kohler
We revisit Levin's theory about the correspondence of verb meaning and syntax and infer semantic classes from a large syntactic classification of more than 600 German verbs taking clausal and non-finite arguments.
no code implementations • TACL 2016 • Silvana Hartmann, Judith Eckle-Kohler, Iryna Gurevych
We present a new approach for generating role-labeled training data using Linked Lexical Resources, i.e., integrated lexical resources that combine several resources (e.g., WordNet, FrameNet, Wiktionary) by linking them on the sense or on the role level.
no code implementations • ACL 2017 • Maxime Peyrard, Judith Eckle-Kohler
We present a new supervised framework that learns to estimate automatic Pyramid scores and uses them for optimization-based extractive multi-document summarization.
no code implementations • ACL 2017 • Gabriel Stanovsky, Judith Eckle-Kohler, Yevgeniy Puzikov, Ido Dagan, Iryna Gurevych
Previous models for assessing commitment towards a predicate in a sentence (also known as factuality prediction) were trained and tested against a specific annotated dataset, thereby limiting the generality of their results.
no code implementations • EACL 2017 • Sallam Abualhaija, Tristan Miller, Judith Eckle-Kohler, Iryna Gurevych, Karl-Heinz Zimmermann
In this paper, we propose using metaheuristics (in particular, simulated annealing and the new D-Bees algorithm) to solve word sense disambiguation as an optimization problem within a knowledge-based lexical substitution system.
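To make the metaheuristic framing concrete, here is a minimal, hypothetical sketch of simulated annealing over per-word sense assignments. The state encoding, toy objective, and cooling schedule are illustrative assumptions; the actual system's objective and search neighborhood differ.

```python
import math
import random

# Hypothetical toy sketch: simulated annealing over sense assignments.
# Each word i has senses_per_word[i] candidate senses; `score` maps an
# assignment (list of sense indices) to a value to be maximized.

def simulated_annealing(senses_per_word, score, steps=1000, t0=1.0, seed=0):
    rng = random.Random(seed)
    state = [0] * len(senses_per_word)       # start with first sense everywhere
    cur_score = score(state)
    best, best_score = state[:], cur_score
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(len(state))
        cand = state[:]
        cand[i] = rng.randrange(senses_per_word[i])  # flip one word's sense
        cand_score = score(cand)
        # Accept improvements always; accept worsenings with Boltzmann prob.
        if cand_score >= cur_score or rng.random() < math.exp((cand_score - cur_score) / t):
            state, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = state[:], cur_score
    return best, best_score

# Toy objective: reward assignments using few distinct sense indices
# (a stand-in for some notion of sense coherence across words).
coherence = lambda assignment: -len(set(assignment))
print(simulated_annealing([3, 3, 3], coherence, steps=500))
```

The occasional acceptance of worse states is what lets annealing escape local optima that a pure hill-climber would get stuck in; the cooling schedule gradually shifts the search from exploration to exploitation.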
no code implementations • COLING 2016 • Omer Levy, Ido Dagan, Gabriel Stanovsky, Judith Eckle-Kohler, Iryna Gurevych
Sentence intersection captures the semantic overlap of two texts, generalizing over paradigms such as textual entailment and semantic text similarity.
no code implementations • LREC 2014 • Kostadin Cholakov, Chris Biemann, Judith Eckle-Kohler, Iryna Gurevych
This article describes a lexical substitution dataset for German.
no code implementations • LREC 2012 • Judith Eckle-Kohler, Iryna Gurevych, Silvana Hartmann, Michael Matuschek, Christian M. Meyer
We present UBY-LMF, an LMF-based model for large-scale, heterogeneous multilingual lexical-semantic resources (LSRs).
no code implementations • LREC 2012 • Christian Chiarcos, Sebastian Hellmann, Sebastian Nordhoff, Steven Moran, Richard Littauer, Judith Eckle-Kohler, Iryna Gurevych, Silvana Hartmann, Michael Matuschek, Christian M. Meyer
This paper describes the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN).
no code implementations • LREC 2016 • Maria Sukhareva, Judith Eckle-Kohler, Ivan Habernal, Iryna Gurevych
We present a new large dataset of 12,403 context-sensitive verb relations manually annotated via crowdsourcing.