Search Results for author: Letitia Parcalabescu

Found 7 papers, 4 papers with code

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

1 code implementation • 15 Dec 2022 • Letitia Parcalabescu, Anette Frank

But how can we quantify the amount of unimodal collapse reliably, at dataset and instance level, so as to diagnose and combat it in a targeted way?
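As background on the idea behind a Shapley-based multimodal contribution score, here is a minimal illustrative sketch (not the authors' implementation): assuming per-token Shapley values have already been computed for the text and image inputs, each modality's contribution can be taken as its share of the total absolute Shapley mass.

```python
# Illustrative sketch of a Shapley-based modality-contribution share.
# Assumes per-token Shapley values are already computed (e.g. by a
# SHAP-style explainer); this is not the paper's exact implementation.

def modality_shares(text_shap, image_shap):
    """Return each modality's share of the total absolute Shapley mass."""
    text_mass = sum(abs(v) for v in text_shap)
    image_mass = sum(abs(v) for v in image_shap)
    total = text_mass + image_mass
    if total == 0:
        # No attribution mass at all: treat the modalities as balanced.
        return 0.5, 0.5
    return text_mass / total, image_mass / total

# Example: text tokens carry most of the attribution -> high textual share.
t_share, i_share = modality_shares([0.4, -0.3, 0.1], [0.1, -0.1])
print(round(t_share, 2), round(i_share, 2))  # 0.8 0.2
```

A share near 1.0 for one modality on many instances would indicate the kind of unimodal collapse the snippet describes: the model's predictions rely almost entirely on a single input modality.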

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

1 code implementation • ACL 2022 • Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt

We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena.

image-sentence alignment

What is Multimodality?

no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Nils Trost, Anette Frank

Recent years have seen rapid developments in the field of multimodal machine learning, combining e.g. vision, text, or speech.

BIG-bench Machine Learning

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models on counting tasks

no code implementations • ACL (mmsr, IWCS) 2021 • Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto

We investigate the reasoning ability of pretrained vision and language (V&L) models in two tasks that require multimodal integration: (1) discriminating a correct image-sentence pair from an incorrect one, and (2) counting entities in an image.

AMR Similarity Metrics from Principles

3 code implementations • 29 Jan 2020 • Juri Opitz, Letitia Parcalabescu, Anette Frank

Different metrics have been proposed to compare Abstract Meaning Representation (AMR) graphs.

Machine Translation • Translation
