Search Results for author: Letitia Parcalabescu

Found 9 papers, 5 papers with code

On Measuring Faithfulness or Self-consistency of Natural Language Explanations

1 code implementation • 13 Nov 2023 • Letitia Parcalabescu, Anette Frank

In this work we argue that these faithfulness tests do not measure faithfulness to the models' inner workings, but rather their self-consistency at the output level.
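A minimal sketch of what an output-level self-consistency check can look like (the `attributions_for` helper is a hypothetical interface, and this is not the paper's CC-SHAP implementation): compare the input attributions the model uses when predicting an answer with those it uses when generating its explanation.

```python
import numpy as np

def self_consistency(pred_attr: np.ndarray, expl_attr: np.ndarray) -> float:
    """Cosine similarity between two per-token attribution vectors.

    Near 1.0: the model leans on the same inputs when predicting and when
    explaining; near 0.0: prediction and explanation are driven by
    different inputs.
    """
    a = pred_attr / (np.linalg.norm(pred_attr) + 1e-9)
    b = expl_attr / (np.linalg.norm(expl_attr) + 1e-9)
    return float(a @ b)

# Assumed usage with a hypothetical attribution helper (e.g., SHAP-based):
# pred_attr = attributions_for(model, inputs, target="answer")
# expl_attr = attributions_for(model, inputs, target="explanation")
# score = self_consistency(pred_attr, expl_attr)
```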

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

no code implementations • 13 Nov 2023 • Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem

With the ever-increasing popularity of pretrained Video-Language Models (VidLMs), there is a pressing need to develop robust evaluation methodologies that delve deeper into their visio-linguistic capabilities.

Tasks: Counterfactual, Language Modelling

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

1 code implementation • 15 Dec 2022 • Letitia Parcalabescu, Anette Frank

We apply MM-SHAP in two ways: (1) to compare models by their average degree of multimodality, and (2) to measure, for individual models, the contribution of each modality across different tasks and datasets.
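A rough reading of the metric, sketched below with variable names of my own choosing: because MM-SHAP is performance-agnostic, a prediction's textual degree (T-SHAP) is the share of total absolute Shapley value assigned to text tokens, and the visual degree (V-SHAP) is the remainder.

```python
import numpy as np

def mm_shap(shapley_values: np.ndarray, is_text_token: np.ndarray) -> tuple[float, float]:
    """Share of total absolute Shapley value per modality (T-SHAP, V-SHAP).

    shapley_values: per-token Shapley values for one prediction.
    is_text_token:  boolean mask, True for text tokens, False for image tokens.
    """
    abs_phi = np.abs(shapley_values)
    total = abs_phi.sum() + 1e-9
    t_shap = abs_phi[is_text_token].sum() / total
    return float(t_shap), float(1.0 - t_shap)

# Toy example: 4 text tokens followed by 4 image patches.
phi = np.array([0.4, -0.1, 0.2, 0.05, -0.3, 0.1, 0.05, 0.1])
mask = np.array([True, True, True, True, False, False, False, False])
t_shap, v_shap = mm_shap(phi, mask)  # T-SHAP ~= 0.58, V-SHAP ~= 0.42
```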

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

1 code implementation • ACL 2022 • Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt

We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena.

Tasks: Image-Sentence Alignment, Valid
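A hedged sketch of how such foil-based benchmarks are typically scored (the `alignment_score` interface is an assumption, not VALSE's actual code): a model passes an instance if it assigns the correct caption a higher image-sentence alignment score than a minimally edited foil.

```python
from typing import Callable, Iterable

def pairwise_accuracy(
    instances: Iterable[tuple[str, str, str]],     # (image_path, caption, foil)
    alignment_score: Callable[[str, str], float],  # image-sentence alignment score
) -> float:
    """Fraction of instances where the true caption outscores its foil."""
    correct = total = 0
    for image, caption, foil in instances:
        correct += alignment_score(image, caption) > alignment_score(image, foil)
        total += 1
    return correct / max(total, 1)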

What is Multimodality?

no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Nils Trost, Anette Frank

Recent years have seen rapid developments in the field of multimodal machine learning, combining, e.g., vision, text, or speech.

Tasks: BIG-bench Machine Learning, Position

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks

no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto

We investigate the reasoning ability of pretrained vision and language (V&L) models in two tasks that require multimodal integration: (1) discriminating a correct image-sentence pair from an incorrect one, and (2) counting entities in an image.

Tasks: Sentence, Task 2
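The counting task can be probed through the same alignment interface; a minimal sketch under assumed conditions (the template and `alignment_score` are mine, not the paper's exact setup): score the image against count templates and take the best-scoring count.

```python
from typing import Callable

def predicted_count(image: str, noun: str,
                    alignment_score: Callable[[str, str], float],
                    max_n: int = 10) -> int:
    """Pick the count whose template sentence aligns best with the image."""
    # Singular/plural agreement is ignored here for brevity.
    sentences = {n: f"There are {n} {noun} in the photo." for n in range(1, max_n + 1)}
    return max(sentences, key=lambda n: alignment_score(image, sentences[n]))
```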

AMR Similarity Metrics from Principles

3 code implementations • 29 Jan 2020 • Juri Opitz, Letitia Parcalabescu, Anette Frank

Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.

Tasks: Computational Efficiency, Graph Matching, +2
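For context, Smatch-style metrics in this family reduce to an F1 over matched relation triples once a variable alignment is fixed; the paper's S2match additionally softens concept matching with embedding similarity. A minimal exact-match sketch (not the paper's implementation):

```python
def triple_f1(triples_a: set[tuple[str, str, str]],
              triples_b: set[tuple[str, str, str]]) -> float:
    """F1 over exactly matching (source, relation, target) triples."""
    if not triples_a or not triples_b:
        return 0.0
    matched = len(triples_a & triples_b)
    if matched == 0:
        return 0.0
    precision = matched / len(triples_a)
    recall = matched / len(triples_b)
    return 2 * precision * recall / (precision + recall)

# Two AMRs that differ only in one concept:
a = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "boy")}
b = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "girl")}
print(triple_f1(a, b))  # ~0.67: two of three triples match
```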
