1 code implementation • EMNLP (ArgMining) 2021 • Juri Opitz, Philipp Heinisch, Philipp Wiesenbach, Philipp Cimiano, Anette Frank
When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings.
1 code implementation • ArgMining (ACL) 2022 • Philipp Heinisch, Anette Frank, Juri Opitz, Moritz Plenz, Philipp Cimiano
This paper provides an overview of the Argument Validity and Novelty Prediction Shared Task that was organized as part of the 9th Workshop on Argument Mining (ArgMining 2022).
Ranked #1 on ValNov Subtask A
no code implementations • ArgMining (ACL) 2022 • Philipp Heinisch, Moritz Plenz, Juri Opitz, Anette Frank, Philipp Cimiano
Using only training data retrieved from related datasets by automatically labeling them for validity and novelty, combined with synthetic data, outperforms the baseline by 11.5 points in F1-score.
no code implementations • *SEM (NAACL) 2022 • Laura Zeidler, Juri Opitz, Anette Frank
Evaluating the quality of generated text is difficult, since traditional NLG evaluation metrics, focusing more on surface form than meaning, often fail to assign appropriate scores. This is especially problematic for AMR-to-text evaluation, given the abstract nature of AMR. Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is organized around meaning-relevant linguistic phenomena.
1 code implementation • 12 Oct 2022 • Juri Opitz, Anette Frank
Recently, astonishing advances have been observed in AMR parsing, as measured by the structural Smatch metric.
1 code implementation • 14 Jun 2022 • Juri Opitz, Anette Frank
Models based on large-pretrained language models, such as S(entence)BERT, provide effective and efficient sentence embeddings that show high correlation to human similarity ratings, but lack interpretability.
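The similarity scoring that SBERT-style sentence embeddings enable can be illustrated with a minimal sketch. This is not the authors' method; it only shows the standard cosine-similarity comparison that such embeddings are used for, on hypothetical toy vectors:

```python
import math

def cosine_similarity(u, v):
    # Standard similarity measure for sentence embeddings: the cosine
    # of the angle between two embedding vectors, in [-1, 1].
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 2-d "embeddings" (real SBERT vectors have hundreds of dimensions)
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
```

The interpretability gap the paper addresses is exactly that this single number carries no evidence for *why* two sentences are deemed similar.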
no code implementations • 24 May 2022 • Laura Zeidler, Juri Opitz, Anette Frank
Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is organized around meaning-relevant linguistic phenomena.
no code implementations • 24 Mar 2022 • Juri Opitz, Philipp Meier, Anette Frank
The semantic similarity of graph-based meaning representations, such as Abstract Meaning Representation (AMR), is typically assessed using graph matching algorithms, such as SMATCH (Cai and Knight, 2013).
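The triple-matching idea behind SMATCH can be sketched minimally. This is a simplified illustration, not the actual SMATCH implementation: it assumes AMR graphs have already been reduced to sets of (source, relation, target) triples under a fixed variable alignment, whereas real SMATCH additionally searches over variable alignments (e.g., by hill climbing):

```python
def triple_f1(triples_a, triples_b):
    # Smatch-style score: F1 over triples shared by two AMR graphs,
    # given as sets of (source, relation, target) tuples.
    matched = len(triples_a & triples_b)
    if matched == 0:
        return 0.0
    precision = matched / len(triples_a)
    recall = matched / len(triples_b)
    return 2 * precision * recall / (precision + recall)

# Hypothetical triples for "The boy wants to go" vs. a partial parse
gold = {("w", "instance", "want-01"), ("b", "instance", "boy"), ("w", "ARG0", "b")}
pred = {("w", "instance", "want-01"), ("b", "instance", "boy")}
print(triple_f1(pred, gold))  # 0.8
```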
no code implementations • 26 Aug 2021 • Juri Opitz, Angel Daza, Anette Frank
In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses.
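The Weisfeiler-Leman idea underlying these metrics can be sketched in a few lines. This is a generic WL-kernel toy, not the paper's actual metric: nodes iteratively absorb their neighbours' labels, and two graphs are compared by the overlap of the resulting label multisets. The graph encoding and the overlap formula below are illustrative assumptions:

```python
from collections import Counter

def wl_labels(adj, labels, iterations=2):
    # One WL pass: each node's new label combines its old label with the
    # sorted labels of its out-neighbours; repeat for k iterations and
    # collect all labels seen along the way into a multiset.
    seen = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            n: (labels[n],) + tuple(sorted(labels[m] for m in adj.get(n, ())))
            for n in labels
        }
        seen.update(labels.values())
    return seen

def wl_similarity(g1, g2):
    # Kernel-style similarity in [0, 1]: multiset overlap of WL labels.
    c1, c2 = wl_labels(*g1), wl_labels(*g2)
    overlap = sum((c1 & c2).values())
    return overlap / max(sum(c1.values()), sum(c2.values()))

# Hypothetical tiny AMR-like graphs: node -> neighbours, node -> concept
g = ({"a": ["b"], "b": []}, {"a": "want-01", "b": "go-02"})
print(wl_similarity(g, g))  # identical graphs -> 1.0
```

Unlike a single alignment search, the label multisets make it cheap to grade partial structural overlap, which is one of the strengths such metrics aim to unify.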
1 code implementation • ACL (IWPT) 2021 • Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank
In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs: given a sentence in any language, the aim is to capture its core semantic content through concepts connected by manifold types of semantic relations.
1 code implementation • EACL 2021 • Juri Opitz, Anette Frank
Systems that generate natural language text from abstract meaning representations such as AMR are typically evaluated using automatic surface matching metrics that compare the generated texts to reference texts from which the input meaning representations were constructed.
1 code implementation • AACL 2020 • Juri Opitz
Structured semantic sentence representations such as Abstract Meaning Representations (AMRs) are potentially useful in various NLP tasks.
3 code implementations • 29 Jan 2020 • Juri Opitz, Letitia Parcalabescu, Anette Frank
Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.
1 code implementation • 8 Nov 2019 • Juri Opitz, Sebastian Burst
In this note, we show that the two computations can be considered equivalent only under rare circumstances.
no code implementations • 19 Sep 2019 • Juri Opitz
We formulate argumentative relation classification (support vs. attack) as a text-plausibility ranking task.
no code implementations • WS 2019 • Juri Opitz, Anette Frank
When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.).
no code implementations • SEMEVAL 2019 • Juri Opitz, Anette Frank
Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results.
no code implementations • SEMEVAL 2019 • Juri Opitz, Anette Frank
Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles.
no code implementations • COLING 2018 • Juri Opitz, Leo Born, Vivi Nastase
We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research.
no code implementations • COLING 2018 • Juri Opitz, Anette Frank
The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge.
1 code implementation • EMNLP 2017 • Ana Marasović, Leo Born, Juri Opitz, Anette Frank
We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.
Ranked #1 on Abstract Anaphora Resolution on The ARRAU Corpus