Search Results for author: Juri Opitz

Found 23 papers, 10 papers with code

Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation

1 code implementation EMNLP (ArgMining) 2021 Juri Opitz, Philipp Heinisch, Philipp Wiesenbach, Philipp Cimiano, Anette Frank

When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings.

Overview of the 2022 Validity and Novelty Prediction Shared Task

1 code implementation ArgMining (ACL) 2022 Philipp Heinisch, Anette Frank, Juri Opitz, Moritz Plenz, Philipp Cimiano

This paper provides an overview of the Argument Validity and Novelty Prediction Shared Task that was organized as part of the 9th Workshop on Argument Mining (ArgMining 2022).

ValNov

Data Augmentation for Improving the Prediction of Validity and Novelty of Argumentative Conclusions

no code implementations ArgMining (ACL) 2022 Philipp Heinisch, Moritz Plenz, Juri Opitz, Anette Frank, Philipp Cimiano

Using only training data retrieved from related datasets by automatically labeling them for validity and novelty, combined with synthetic data, outperforms the baseline by 11.5 points in F_1-score.

Data Augmentation

A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation – through the Lens of Semantic Similarity Rating

no code implementations *SEM (NAACL) 2022 Laura Zeidler, Juri Opitz, Anette Frank

Evaluating the quality of generated text is difficult, since traditional NLG evaluation metrics, focusing more on surface form than meaning, often fail to assign appropriate scores. This is especially problematic for AMR-to-text evaluation, given the abstract nature of AMR. Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning by developing a dynamic CheckList for NLG metrics that is interpreted by being organized around meaning-relevant linguistic phenomena.

Semantic Similarity Semantic Textual Similarity

Better Smatch = Better Parser? AMR evaluation is not so simple anymore

1 code implementation 12 Oct 2022 Juri Opitz, Anette Frank

Recently, astonishing advances have been observed in AMR parsing, as measured by the structural Smatch metric.

AMR Parsing

SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable Semantic Features

1 code implementation 14 Jun 2022 Juri Opitz, Anette Frank

Models based on large-pretrained language models, such as S(entence)BERT, provide effective and efficient sentence embeddings that show high correlation to human similarity ratings, but lack interpretability.

Pretrained Language Models Sentence Embeddings +1
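The paper's title describes decomposing a sentence embedding into explainable semantic features. A rough sketch of that idea, assuming a hypothetical layout in which contiguous sub-spaces of the embedding correspond to named features (the feature names, dimension split, and toy 8-dimensional vectors below are illustrative, not the paper's actual configuration):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical layout: dims 0-3 encode "concepts", dims 4-7 encode "negation".
features = {"concepts": slice(0, 4), "negation": slice(4, 8)}

# Toy stand-ins for SBERT-style sentence embeddings of two sentences.
emb_a = np.array([0.9, 0.1, 0.4, 0.3, 0.0, 0.0, 0.1, 0.0])
emb_b = np.array([0.8, 0.2, 0.5, 0.3, 0.9, 0.1, 0.0, 0.2])

# One overall similarity, plus one interpretable score per semantic feature.
overall = cosine(emb_a, emb_b)
per_feature = {name: cosine(emb_a[s], emb_b[s]) for name, s in features.items()}
```

Here the two sentences would agree strongly on the "concepts" sub-space but not at all on the "negation" sub-space, which is exactly the kind of per-feature evidence a single overall cosine score cannot provide.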

A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation – through the Lens of Semantic Similarity Rating

no code implementations 24 May 2022 Laura Zeidler, Juri Opitz, Anette Frank

Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is interpreted by being organized around meaning-relevant linguistic phenomena.

Semantic Similarity Semantic Textual Similarity

SMARAGD: Synthesized sMatch for Accurate and Rapid AMR Graph Distance

no code implementations 24 Mar 2022 Juri Opitz, Philipp Meier, Anette Frank

The semantic similarity of graph-based meaning representations, such as Abstract Meaning Representation (AMR), is typically assessed using graph matching algorithms, such as SMATCH (Cai and Knight, 2013).

Data Augmentation Graph Matching +4
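The abstract above mentions SMATCH (Cai and Knight, 2013), which scores two AMR graphs by an F1 over their matching triples. A minimal sketch of that final scoring step, assuming the variable alignment between the two graphs is already fixed (real SMATCH must search over alignments, e.g. via hill-climbing, which this toy example omits):

```python
def triple_f1(gold, pred):
    """Smatch-style F1 over two sets of (source, relation, target) triples."""
    overlap = len(gold & pred)
    if not overlap:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Toy AMR triples for "the boy wants to go", with variables already aligned;
# the predicted parse below is missing the (w, ARG1, g) edge.
gold = {("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("g", "instance", "go-02"), ("w", "ARG0", "b"),
        ("w", "ARG1", "g"), ("g", "ARG0", "b")}
pred = {("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("g", "instance", "go-02"), ("w", "ARG0", "b"),
        ("g", "ARG0", "b")}

score = triple_f1(gold, pred)
```

The alignment search is what makes exact SMATCH expensive, which is the cost that approaches like SMARAGD aim to avoid.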

Weisfeiler-Leman in the BAMBOO: Novel AMR Graph Metrics and a Benchmark for AMR Graph Similarity

no code implementations 26 Aug 2021 Juri Opitz, Angel Daza, Anette Frank

In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses.

Graph Similarity Sentence Similarity

Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing

1 code implementation ACL (IWPT) 2021 Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank

In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by manifold types of semantic relations.

AMR Parsing NMT

Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR

1 code implementation EACL 2021 Juri Opitz, Anette Frank

Systems that generate natural language text from abstract meaning representations such as AMR are typically evaluated using automatic surface matching metrics that compare the generated texts to reference texts from which the input meaning representations were constructed.

Text Generation

AMR Quality Rating with a Lightweight CNN

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Juri Opitz

Structured semantic sentence representations such as Abstract Meaning Representations (AMRs) are potentially useful in various NLP tasks.

AMR Similarity Metrics from Principles

3 code implementations 29 Jan 2020 Juri Opitz, Letitia Parcalabescu, Anette Frank

Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.

Machine Translation Translation

Macro F1 and Macro F1

1 code implementation 8 Nov 2019 Juri Opitz, Sebastian Burst

In this note, we show that only under rare circumstances can the two computations be considered equivalent.

Multi-Label Classification
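The title's repetition is the point: two different computations both circulate under the name "macro F1". A minimal sketch of the two variants, using made-up per-class precision and recall values for a three-class problem:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Hypothetical per-class precision and recall for a 3-class problem.
precision = [0.9, 0.5, 0.2]
recall    = [0.6, 0.5, 0.8]

# Variant 1: average the per-class F1 scores.
averaged_f1 = sum(f1(p, r) for p, r in zip(precision, recall)) / len(precision)

# Variant 2: take the F1 of the macro-averaged precision and recall.
macro_p = sum(precision) / len(precision)
macro_r = sum(recall) / len(recall)
f1_of_averages = f1(macro_p, macro_r)
```

On these toy numbers the two variants disagree (roughly 0.513 vs. 0.579), so which formula a paper or library means by "macro F1" matters when comparing reported scores.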

Argumentative Relation Classification as Plausibility Ranking

no code implementations 19 Sep 2019 Juri Opitz

We formulate argumentative relation classification (support vs. attack) as a text-plausibility ranking task.

Classification General Classification +1

Dissecting Content and Context in Argumentative Relation Analysis

no code implementations WS 2019 Juri Opitz, Anette Frank

When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.).

Automatic Accuracy Prediction for AMR Parsing

no code implementations SEMEVAL 2019 Juri Opitz, Anette Frank

Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results.

AMR Parsing

An Argument-Marker Model for Syntax-Agnostic Proto-Role Labeling

no code implementations SEMEVAL 2019 Juri Opitz, Anette Frank

Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles.

Semantic Role Labeling

Induction of a Large-Scale Knowledge Graph from the Regesta Imperii

no code implementations COLING 2018 Juri Opitz, Leo Born, Vivi Nastase

We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research.

Addressing the Winograd Schema Challenge as a Sequence Ranking Task

no code implementations COLING 2018 Juri Opitz, Anette Frank

The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge.

Coreference Resolution Language Modelling

A Mention-Ranking Model for Abstract Anaphora Resolution

1 code implementation EMNLP 2017 Ana Marasović, Leo Born, Juri Opitz, Anette Frank

We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.

Abstract Anaphora Resolution Representation Learning
