Search Results for author: Anette Frank

Found 74 papers, 29 papers with code

Overview of the 2022 Validity and Novelty Prediction Shared Task

1 code implementation ArgMining (ACL) 2022 Philipp Heinisch, Anette Frank, Juri Opitz, Moritz Plenz, Philipp Cimiano

This paper provides an overview of the Argument Validity and Novelty Prediction Shared Task that was organized as part of the 9th Workshop on Argument Mining (ArgMining 2022).

Binary Classification ValNov

Data Augmentation for Improving the Prediction of Validity and Novelty of Argumentative Conclusions

no code implementations ArgMining (ACL) 2022 Philipp Heinisch, Moritz Plenz, Juri Opitz, Anette Frank, Philipp Cimiano

Using only training data retrieved from related datasets by automatically labeling them for validity and novelty, combined with synthetic data, outperforms the baseline by 11.5 points in F1-score.

Data Augmentation

Reconstructing Implicit Knowledge with Language Models

1 code implementation NAACL (DeeLIO) 2021 Maria Becker, Siting Liang, Anette Frank

In this work we propose an approach for generating statements that explicate implicit knowledge connecting sentences in text.

Sentence

Explainable Unsupervised Argument Similarity Rating with Abstract Meaning Representation and Conclusion Generation

1 code implementation EMNLP (ArgMining) 2021 Juri Opitz, Philipp Heinisch, Philipp Wiesenbach, Philipp Cimiano, Anette Frank

When assessing the similarity of arguments, researchers typically use approaches that do not provide interpretable evidence or justifications for their ratings.

A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation – through the Lens of Semantic Similarity Rating

no code implementations *SEM (NAACL) 2022 Laura Zeidler, Juri Opitz, Anette Frank

Evaluating the quality of generated text is difficult, since traditional NLG evaluation metrics, focusing more on surface form than meaning, often fail to assign appropriate scores. This is especially problematic for AMR-to-text evaluation, given the abstract nature of AMR. Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning by developing a dynamic CheckList for NLG metrics that is interpreted by being organized around meaning-relevant linguistic phenomena.

nlg evaluation Semantic Similarity +1

Clinical information extraction for Low-resource languages with Few-shot learning using Pre-trained language models and Prompting

no code implementations 20 Mar 2024 Phillip Richter-Pechanski, Philipp Wiesenbach, Dominic M. Schwab, Christina Kiriakou, Nicolas Geis, Christoph Dieterich, Anette Frank

Automatic extraction of medical information from clinical documents poses several challenges: high costs of required clinical expertise, limited interpretability of model predictions, restricted computational resources and privacy regulations.

Domain Adaptation Few-Shot Learning

Exploring Continual Learning of Compositional Generalization in NLI

no code implementations 7 Mar 2024 Xiyan Fu, Anette Frank

In this paper, we introduce the Continual Compositional Generalization in Inference (C2Gen NLI) challenge, where a model continuously acquires knowledge of constituting primitive inference tasks as a basis for compositional inferences.

Continual Learning Natural Language Inference

Graph Language Models

1 code implementation 13 Jan 2024 Moritz Plenz, Anette Frank

In our work we introduce a novel LM type, the Graph Language Model (GLM), that integrates the strengths of both approaches and mitigates their weaknesses.

Knowledge Graphs Language Modelling +1

On Measuring Faithfulness or Self-consistency of Natural Language Explanations

1 code implementation 13 Nov 2023 Letitia Parcalabescu, Anette Frank

In this work we argue that these faithfulness tests do not measure faithfulness to the models' inner workings -- but rather their self-consistency at output level.

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

no code implementations 13 Nov 2023 Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem

With the ever-increasing popularity of pretrained Video-Language Models (VidLMs), there is a pressing need to develop robust evaluation methodologies that delve deeper into their visio-linguistic capabilities.

counterfactual Language Modelling

Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation

no code implementations 14 Sep 2023 Xiyan Fu, Anette Frank

Hence, we propose a dynamic modularized reasoning model, MORSE, to improve the compositional generalization of neural models.

Explanation Generation

Graecia capta ferum victorem cepit. Detecting Latin Allusions to Ancient Greek Literature

1 code implementation 23 Aug 2023 Frederick Riemenschneider, Anette Frank

In this study, we introduce SPhilBERTa, a trilingual Sentence-RoBERTa model tailored for Classical Philology, which excels at cross-lingual semantic comprehension and identification of identical sentences across Ancient Greek, Latin, and English.

Sentence

AMR4NLI: Interpretable and robust NLI measures from semantic graphs

1 code implementation 1 Jun 2023 Juri Opitz, Shira Wein, Julius Steen, Anette Frank, Nathan Schneider

The task of natural language inference (NLI) asks whether a given premise (expressed in NL) entails a given NL hypothesis.

Natural Language Inference Sentence

SETI: Systematicity Evaluation of Textual Inference

no code implementations 24 May 2023 Xiyan Fu, Anette Frank

We propose SETI (Systematicity Evaluation of Textual Inference), a novel and comprehensive benchmark designed for evaluating pre-trained language models (PLMs) for their systematicity capabilities in the domain of textual inference.

Exploring Large Language Models for Classical Philology

1 code implementation 23 May 2023 Frederick Riemenschneider, Anette Frank

While prior work on Classical languages unanimously uses BERT, in this work we create four language models for Ancient Greek that vary along two dimensions to study their versatility for tasks of interest for Classical languages: we explore (i) encoder-only and encoder-decoder architectures using RoBERTa and T5 as strong model types, and create for each of them (ii) a monolingual Ancient Greek and a multilingual instance that includes Latin and English.

Benchmarking Lemmatization

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

1 code implementation 15 Dec 2022 Letitia Parcalabescu, Anette Frank

We apply MM-SHAP in two ways: (1) to compare models for their average degree of multimodality, and (2) to measure for individual models the contribution of individual modalities for different tasks and datasets.
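MM-SHAP is built on Shapley values. As a generic illustration only (not the paper's implementation), the sketch below computes exact Shapley values for two "players" standing in for the image and text inputs of a hypothetical multimodal model, and converts them into proportional modality contributions; all scores in SCORES are made-up numbers.

```python
# Generic two-player Shapley sketch (hypothetical, not the MM-SHAP code).
# value(S) stands for the model score when only the modalities in S are unmasked;
# the numbers below are invented purely for illustration.
from itertools import combinations
from math import factorial

SCORES = {
    frozenset(): 0.10,                   # everything masked
    frozenset({"image"}): 0.35,          # image only
    frozenset({"text"}): 0.60,           # text only
    frozenset({"image", "text"}): 0.90,  # full input
}

def value(coalition: frozenset) -> float:
    return SCORES[coalition]

def shapley(player: str, players: frozenset) -> float:
    others = players - {player}
    n = len(players)
    total = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(sorted(others), r):
            s = frozenset(subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value(s | {player}) - value(s))
    return total

players = frozenset({"image", "text"})
phi = {p: shapley(p, players) for p in players}
shares = {p: abs(v) / sum(abs(x) for x in phi.values()) for p, v in phi.items()}
print(phi)     # Shapley values per modality
print(shares)  # relative contribution of each modality
```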

Better Smatch = Better Parser? AMR evaluation is not so simple anymore

1 code implementation 12 Oct 2022 Juri Opitz, Anette Frank

Recently, astonishing advances have been observed in AMR parsing, as measured by the structural Smatch metric.

AMR Parsing Sentence

SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable Semantic Features

1 code implementation 14 Jun 2022 Juri Opitz, Anette Frank

Models based on large-pretrained language models, such as S(entence)BERT, provide effective and efficient sentence embeddings that show high correlation to human similarity ratings, but lack interpretability.

Negation Sentence +2
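
The SBERT entry above relies on comparing sentence embeddings via cosine similarity. Below is a minimal sketch of that setup with the sentence-transformers library; the checkpoint name all-MiniLM-L6-v2 is an arbitrary public model picked for illustration, not necessarily the one studied in the paper.

```python
# Minimal sketch: sentence similarity from SBERT-style embeddings.
# Assumes `pip install sentence-transformers`; the checkpoint is illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
]

# Encode both sentences and score them with cosine similarity.
embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {score:.3f}")
```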

A Dynamic, Interpreted CheckList for Meaning-oriented NLG Metric Evaluation -- through the Lens of Semantic Similarity Rating

no code implementations 24 May 2022 Laura Zeidler, Juri Opitz, Anette Frank

Our work aims to support the development and improvement of NLG evaluation metrics that focus on meaning, by developing a dynamic CheckList for NLG metrics that is interpreted by being organized around meaning-relevant linguistic phenomena.

nlg evaluation Semantic Similarity +1

SMARAGD: Learning SMatch for Accurate and Rapid Approximate Graph Distance

1 code implementation 24 Mar 2022 Juri Opitz, Philipp Meier, Anette Frank

The similarity of graph structures, such as Meaning Representations (MRs), is often assessed via structural matching algorithms, such as Smatch (Cai and Knight, 2013).

Clustering Data Augmentation +6
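
Smatch, referenced in the SMARAGD entry above, scores two meaning representation graphs by the triple-overlap F1 under a best variable alignment. The toy sketch below shows only the overlap-F1 step and assumes the two graphs already share variable names (i.e., a fixed alignment); the real algorithm additionally searches over alignments, e.g., by hill climbing.

```python
# Toy illustration of the triple-overlap F1 at the heart of Smatch-style scoring.
# Assumes a fixed variable alignment; real Smatch searches over alignments.
def triple_f1(gold: set, pred: set) -> float:
    matched = len(gold & pred)
    if matched == 0:
        return 0.0
    precision = matched / len(pred)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

# Two tiny AMR-like graphs, written as (source, relation, target) triples.
gold = {("g", "instance", "give-01"), ("g", "ARG0", "b"), ("b", "instance", "boy")}
pred = {("g", "instance", "give-01"), ("g", "ARG0", "b"), ("b", "instance", "girl")}

print(f"Smatch-style F1: {triple_f1(gold, pred):.2f}")  # 2 of 3 triples match -> 0.67
```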

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

1 code implementation ACL 2022 Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt

We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena.

image-sentence alignment valid

Weisfeiler-Leman in the BAMBOO: Novel AMR Graph Metrics and a Benchmark for AMR Graph Similarity

no code implementations 26 Aug 2021 Juri Opitz, Angel Daza, Anette Frank

In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses.

Graph Matching Graph Similarity +2

Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing

1 code implementation ACL (IWPT) 2021 Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank

In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by manifold types of semantic relations.

AMR Parsing NMT +1

Generating Hypothetical Events for Abductive Inference

1 code implementation Joint Conference on Lexical and Computational Semantics 2021 Debjit Paul, Anette Frank

This work offers the first study of how such knowledge impacts the Abductive NLI task -- which consists in choosing the more likely explanation for given observations.

Language Modelling

COINS: Dynamically Generating COntextualized Inference Rules for Narrative Story Completion

1 code implementation ACL 2021 Debjit Paul, Anette Frank

Despite recent successes of large pre-trained language models in solving reasoning tasks, their inference capabilities remain opaque.

Sentence Story Completion

CO-NNECT: A Framework for Revealing Commonsense Knowledge Paths as Explicitations of Implicit Knowledge in Texts

1 code implementation IWCS (ACL) 2021 Maria Becker, Katharina Korfhage, Debjit Paul, Anette Frank

We conduct evaluations on two argumentative datasets and show that a combination of the two model types generates meaningful, high-quality knowledge paths between sentences that reveal implicit knowledge conveyed in text.

Relation

COCO-EX: A Tool for Linking Concepts from Texts to ConceptNet

no code implementations EACL 2021 Maria Becker, Katharina Korfhage, Anette Frank

COCO-EX extracts meaningful concepts from natural language texts and maps them to conjunct concept nodes in ConceptNet, utilizing the maximum of relational information stored in the ConceptNet knowledge graph.

Knowledge Graphs

What is Multimodality?

no code implementations ACL (mmsr, IWCS) 2021 Letitia Parcalabescu, Nils Trost, Anette Frank

The last years have shown rapid developments in the field of multimodal machine learning, combining e.g., vision, text or speech.

BIG-bench Machine Learning Position

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks

no code implementations ACL (mmsr, IWCS) 2021 Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto

We investigate the reasoning ability of pretrained vision and language (V&L) models in two tasks that require multimodal integration: (1) discriminating a correct image-sentence pair from an incorrect one, and (2) counting entities in an image.

Sentence Task 2

Social Commonsense Reasoning with Multi-Head Knowledge Attention

1 code implementation Findings of the Association for Computational Linguistics 2020 Debjit Paul, Anette Frank

Notably we are, to the best of our knowledge, the first to demonstrate that a model that learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task.

counterfactual Counterfactual Reasoning +1

X-SRL: A Parallel Cross-Lingual Semantic Role Labeling Dataset

1 code implementation EMNLP 2020 Angel Daza, Anette Frank

Even though SRL is researched for many languages, major improvements have mostly been obtained for English, for which more resources are available.

Machine Translation Semantic Role Labeling +1

Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR

1 code implementation EACL 2021 Juri Opitz, Anette Frank

Systems that generate natural language text from abstract meaning representations such as AMR are typically evaluated using automatic surface matching metrics that compare the generated texts to reference texts from which the input meaning representations were constructed.

Sentence Text Generation

AMR Similarity Metrics from Principles

3 code implementations 29 Jan 2020 Juri Opitz, Letitia Parcalabescu, Anette Frank

Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.

Computational Efficiency Graph Matching +2

Implicit Knowledge in Argumentative Texts: An Annotated Corpus

no code implementations LREC 2020 Maria Becker, Katharina Korfhage, Anette Frank

When speaking or writing, people omit information that seems clear and evident, such that only part of the message is expressed in words.

Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling

1 code implementation IJCNLP 2019 Angel Daza, Anette Frank

Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations.

Semantic Role Labeling

Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension

1 code implementation IJCNLP 2019 Todor Mihaylov, Anette Frank

In this work, we propose to use linguistic annotations as a basis for a Discourse-Aware Semantic Self-Attention encoder that we employ for reading comprehension on long narrative texts.

Reading Comprehension Sentence

Dissecting Content and Context in Argumentative Relation Analysis

no code implementations WS 2019 Juri Opitz, Anette Frank

When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.).

Relation

Automatic Accuracy Prediction for AMR Parsing

no code implementations SEMEVAL 2019 Juri Opitz, Anette Frank

Secondly, we perform parse selection based on predicted parse accuracies of candidate parses from alternative systems, with the aim of improving overall results.

AMR Parsing

Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs

1 code implementation NAACL 2019 Debjit Paul, Anette Frank

To make machines better understand sentiments, research needs to move from polarity identification to understanding the reasons that underlie the expression of sentiment.

Common Sense Reasoning

An Argument-Marker Model for Syntax-Agnostic Proto-Role Labeling

no code implementations SEMEVAL 2019 Juri Opitz, Anette Frank

Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles.

Semantic Role Labeling

Addressing the Winograd Schema Challenge as a Sequence Ranking Task

no code implementations COLING 2018 Juri Opitz, Anette Frank

The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge.

Coreference Resolution Language Modelling +1

A Sequence-to-Sequence Model for Semantic Role Labeling

no code implementations WS 2018 Angel Daza, Anette Frank

We explore a novel approach for Semantic Role Labeling (SRL) by casting it as a sequence-to-sequence process.

Benchmarking Semantic Role Labeling

Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge

no code implementations ACL 2018 Todor Mihaylov, Anette Frank

We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting.

Reading Comprehension

SRL4ORL: Improving Opinion Role Labeling using Multi-task Learning with Semantic Role Labeling

1 code implementation NAACL 2018 Ana Marasović, Anette Frank

For over a decade, machine learning has been used to extract opinion-holder-target structures from text to answer the question "Who expressed what kind of sentiment towards what?".

Ranked #2 on Fine-Grained Opinion Analysis on MPQA (using extra training data)

Fine-Grained Opinion Analysis Multi-Task Learning

What do we need to know about an unknown word when parsing German

no code implementations WS 2017 Bich-Ngoc Do, Ines Rehbein, Anette Frank

We propose a new type of subword embedding designed to provide more information about unknown compounds, a major source for OOV words in German.

Language Modelling POS +1

A Mention-Ranking Model for Abstract Anaphora Resolution

1 code implementation EMNLP 2017 Ana Marasović, Leo Born, Juri Opitz, Anette Frank

We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors.

Abstract Anaphora Resolution Representation Learning +1

Story Cloze Ending Selection Baselines and Data Examination

no code implementations WS 2017 Todor Mihaylov, Anette Frank

This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016a).

Cloze Test Semantic Similarity +2

Multilingual Modal Sense Classification using a Convolutional Neural Network

no code implementations WS 2016 Ana Marasović, Anette Frank

Modal sense classification (MSC) is a special WSD task that depends on the meaning of the proposition in the modal's scope.

Classification General Classification

Combining Semantic Annotation of Word Sense & Semantic Roles: A Novel Annotation Scheme for VerbNet Roles on German Language Data

no code implementations LREC 2016 Éva Mújdricza-Maydt, Silvana Hartmann, Iryna Gurevych, Anette Frank

We present a VerbNet-based annotation scheme for semantic roles that we explore in an annotation study on German language data that combines word sense and semantic role annotation.
