1 code implementation • NAACL (DeeLIO) 2021 • Maria Becker, Siting Liang, Anette Frank
In this work we propose an approach for generating statements that explicate implicit knowledge connecting sentences in text.
no code implementations • NAACL (ALVR) 2021 • Julia Suter, Letitia Parcalabescu, Anette Frank
Phrase grounding (PG) is a multimodal task that grounds language in images.
1 code implementation • EMNLP (ArgMining) 2021 • Juri Opitz, Philipp Heinisch, Philipp Wiesenbach, Philipp Cimiano, Anette Frank
When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings.
1 code implementation • ArgMining (ACL) 2022 • Philipp Heinisch, Anette Frank, Juri Opitz, Moritz Plenz, Philipp Cimiano
This paper provides an overview of the Argument Validity and Novelty Prediction Shared Task that was organized as part of the 9th Workshop on Argument Mining (ArgMining 2022).
Ranked #1 on ValNov Subtask A
no code implementations • ArgMining (ACL) 2022 • Philipp Heinisch, Moritz Plenz, Juri Opitz, Anette Frank, Philipp Cimiano
Using only training data retrieved from related datasets by automatically labeling them for validity and novelty, combined with synthetic data, outperforms the baseline by 11.5 points in F1 score.
no code implementations • *SEM (NAACL) 2022 • Laura Zeidler, Juri Opitz, Anette Frank
Evaluating the quality of generated text is difficult, since traditional NLG evaluation metrics, focusing more on surface form than meaning, often fail to assign appropriate scores. This is especially problematic for AMR-to-text evaluation, given the abstract nature of AMR. Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is interpretable by being organized around meaning-relevant linguistic phenomena.
no code implementations • LILT 2016 • Ana Marasović, Mengfei Zhou, Alexis Palmer, Anette Frank
Modal verbs have different interpretations depending on their context.
no code implementations • 14 Sep 2023 • Xiyan Fu, Anette Frank
Hence, we propose a dynamic modularized reasoning model, MORSE, to improve the compositional generalization of neural models.
1 code implementation • 23 Aug 2023 • Frederick Riemenschneider, Anette Frank
In this study, we introduce SPhilBERTa, a trilingual Sentence-RoBERTa model tailored for Classical Philology, which excels at cross-lingual semantic comprehension and identification of identical sentences across Ancient Greek, Latin, and English.
1 code implementation • 1 Jun 2023 • Juri Opitz, Shira Wein, Julius Steen, Anette Frank, Nathan Schneider
The task of natural language inference (NLI) asks whether a given premise (expressed in NL) entails a given NL hypothesis.
1 code implementation • 26 May 2023 • Julius Steen, Juri Opitz, Anette Frank, Katja Markert
Conditional language models still generate unfaithful output that is not supported by their input.
no code implementations • 24 May 2023 • Xiyan Fu, Anette Frank
We propose SETI (Systematicity Evaluation of Textual Inference), a novel and comprehensive benchmark designed for evaluating pre-trained language models (PLMs) for their systematicity capabilities in the domain of textual inference.
1 code implementation • 23 May 2023 • Frederick Riemenschneider, Anette Frank
While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English.
1 code implementation • 15 May 2023 • Moritz Plenz, Juri Opitz, Philipp Heinisch, Philipp Cimiano, Anette Frank
Arguments often do not make explicit how a conclusion follows from its premises.
1 code implementation • 15 Dec 2022 • Letitia Parcalabescu, Anette Frank
We apply MM-SHAP in two ways: (1) to compare models for their average degree of multimodality, and (2) to measure for individual models the contribution of individual modalities for different tasks and datasets.
1 code implementation • 12 Oct 2022 • Juri Opitz, Anette Frank
Recently, astonishing advances have been observed in AMR parsing, as measured by the structural Smatch metric.
1 code implementation • 14 Jun 2022 • Juri Opitz, Anette Frank
Models based on large pre-trained language models, such as S(entence)BERT, provide effective and efficient sentence embeddings that show high correlation to human similarity ratings, but lack interpretability.
no code implementations • 24 May 2022 • Laura Zeidler, Juri Opitz, Anette Frank
Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is interpretable by being organized around meaning-relevant linguistic phenomena.
1 code implementation • 24 Mar 2022 • Juri Opitz, Philipp Meier, Anette Frank
The similarity of graph structures, such as Meaning Representations (MRs), is often assessed via structural matching algorithms, such as Smatch (Cai and Knight, 2013).
1 code implementation • ACL 2022 • Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt
We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena.
Ranked #1 on image-sentence alignment on VALSE
1 code implementation • 9 Dec 2021 • Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank
Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling.
no code implementations • 26 Aug 2021 • Juri Opitz, Angel Daza, Anette Frank
In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses.
1 code implementation • ACL (IWPT) 2021 • Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs: given a sentence in any language, the goal is to capture its core semantic content through concepts connected by manifold types of semantic relations.
1 code implementation • Joint Conference on Lexical and Computational Semantics 2021 • Debjit Paul, Anette Frank
This work offers the first study of how such knowledge impacts the Abductive NLI task -- which consists in choosing the more likely explanation for given observations.
1 code implementation • ACL 2021 • Debjit Paul, Anette Frank
Despite recent successes of large pre-trained language models in solving reasoning tasks, their inference capabilities remain opaque.
1 code implementation • IWCS (ACL) 2021 • Maria Becker, Katharina Korfhage, Debjit Paul, Anette Frank
We conduct evaluations on two argumentative datasets and show that a combination of the two model types generates meaningful, high-quality knowledge paths between sentences that reveal implicit knowledge conveyed in text.
no code implementations • EACL 2021 • Maria Becker, Katharina Korfhage, Anette Frank
COCO-EX extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, utilizing the maximum of relational information stored in the ConceptNet knowledge graph.
no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Nils Trost, Anette Frank
The last years have shown rapid developments in the field of multimodal machine learning, combining e.g. vision, text or speech.
no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto
We investigate the reasoning ability of pretrained vision and language (V&L) models in two tasks that require multimodal integration: (1) discriminating a correct image-sentence pair from an incorrect one, and (2) counting entities in an image.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Debjit Paul, Anette Frank
Notably, we are, to the best of our knowledge, the first to demonstrate that a model that learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task.
1 code implementation • EMNLP 2020 • Angel Daza, Anette Frank
Even though SRL is researched for many languages, major improvements have mostly been obtained for English, for which more resources are available.
1 code implementation • EACL 2021 • Juri Opitz, Anette Frank
Systems that generate natural language text from abstract meaning representations such as AMR are typically evaluated using automatic surface matching metrics that compare the generated texts to reference texts from which the input meaning representations were constructed.
3 code implementations • 29 Jan 2020 • Juri Opitz, Letitia Parcalabescu, Anette Frank
Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.
no code implementations • LREC 2020 • Maria Becker, Katharina Korfhage, Anette Frank
When speaking or writing, people omit information that seems clear and evident, such that only part of the message is expressed in words.
1 code implementation • IJCNLP 2019 • Angel Daza, Anette Frank
Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations.
1 code implementation • IJCNLP 2019 • Todor Mihaylov, Anette Frank
In this work, we propose to use linguistic annotations as a basis for a Discourse-Aware Semantic Self-Attention encoder that we employ for reading comprehension on long narrative texts.
no code implementations • WS 2019 • Juri Opitz, Anette Frank
When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.).
no code implementations • WS 2019 • Maria Becker, Michael Staniek, Vivi Nastase, Anette Frank
Commonsense knowledge relations are crucial for advanced NLU tasks.
no code implementations • SEMEVAL 2019 • Juri Opitz, Anette Frank
Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results.
1 code implementation • NAACL 2019 • Debjit Paul, Anette Frank
To make machines better understand sentiments, research needs to move from polarity identification to understanding the reasons that underlie the expression of sentiment.
no code implementations • SEMEVAL 2019 • Juri Opitz, Anette Frank
Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles.
no code implementations • COLING 2018 • Juri Opitz, Anette Frank
The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge.
no code implementations • WS 2018 • Angel Daza, Anette Frank
We explore a novel approach for Semantic Role Labeling (SRL) by casting it as a sequence-to-sequence process.
no code implementations • ACL 2018 • Todor Mihaylov, Anette Frank
We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting.
no code implementations • 10 Nov 2017 • Todor Mihaylov, Zornitsa Kozareva, Anette Frank
Reading comprehension is a challenging task in natural language processing and requires a set of skills to be solved.
1 code implementation • NAACL 2018 • Ana Marasović, Anette Frank
For over a decade, machine learning has been used to extract opinion-holder-target structures from text to answer the question "Who expressed what kind of sentiment towards what?".
Ranked #2 on Fine-Grained Opinion Analysis on MPQA (using extra training data)
no code implementations • WS 2017 • Bich-Ngoc Do, Ines Rehbein, Anette Frank
We propose a new type of subword embedding designed to provide more information about unknown compounds, a major source for OOV words in German.
no code implementations • SEMEVAL 2017 • Maria Becker, Michael Staniek, Vivi Nastase, Alexis Palmer, Anette Frank
Detecting aspectual properties of clauses in the form of situation entity types has been shown to depend on a combination of syntactic-semantic and contextual features.
1 code implementation • EMNLP 2017 • Ana Marasović, Leo Born, Juri Opitz, Anette Frank
We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.
Ranked #1 on Abstract Anaphora Resolution on the ARRAU Corpus
no code implementations • WS 2017 • Silvana Hartmann, Éva Mújdricza-Maydt, Ilia Kuznetsov, Iryna Gurevych, Anette Frank
We present the first experiment-based study that explicitly contrasts the three major semantic role labeling frameworks.
no code implementations • WS 2017 • Todor Mihaylov, Anette Frank
This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016a).
no code implementations • WS 2016 • Richard Eckart de Castilho, Éva Mújdricza-Maydt, Seid Muhie Yimam, Silvana Hartmann, Iryna Gurevych, Anette Frank, Chris Biemann
We introduce the third major release of WebAnno, a generic web-based annotation tool for distributed teams.
no code implementations • WS 2016 • Ana Marasović, Anette Frank
Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope.
no code implementations • LREC 2016 • Éva Mújdricza-Maydt, Silvana Hartmann, Iryna Gurevych, Anette Frank
We present a VerbNet-based annotation scheme for semantic roles that we explore in an annotation study on German language data that combines word sense and semantic role annotation.