Search Results for author: Emanuele Bugliarello

Found 20 papers, 15 papers with code

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

1 code implementation • 26 Oct 2023 • Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott

We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models (a toy version of such a metric is sketched below).

Fairness · Retrieval
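A minimal sketch of what a bias-amplification measure can look like, in the spirit of Zhao et al.'s (2017) metric; the counts, the `gender_ratio` helper, and the "cooking" example are hypothetical, and the paper's actual measures may differ:

```python
# Minimal sketch of a bias-amplification-style metric, in the spirit of
# Zhao et al. (2017); NOT necessarily the measure used in the paper above.
# All counts are hypothetical co-occurrences of a concept with a gender
# attribute.

def gender_ratio(counts: dict) -> float:
    """Fraction of occurrences that co-occur with the 'female' attribute."""
    total = counts["female"] + counts["male"]
    return counts["female"] / total if total else 0.0

def bias_amplification(train_counts: dict, pred_counts: dict) -> float:
    """Positive values mean the model's skew exceeds the training data's."""
    return gender_ratio(pred_counts) - gender_ratio(train_counts)

# Hypothetical example: "cooking" co-occurs with women in 66% of training
# instances, but in 80% of the model's predictions.
train = {"female": 660, "male": 340}
preds = {"female": 800, "male": 200}
print(f"bias amplification: {bias_amplification(train, preds):+.2f}")  # +0.14
```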

On the Interplay between Fairness and Explainability

no code implementations • 25 Oct 2023 • Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis

To build reliable and trustworthy NLP applications, models need to be both fair across different demographics and explainable.

Fairness · Multi Class Text Classification +2

Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining

1 code implementation • 23 May 2023 • Emanuele Bugliarello, Aida Nematzadeh, Lisa Anne Hendricks

Recent work in vision-and-language pretraining has investigated supervised signals from object detection data to learn better, fine-grained multimodal representations.

Object Detection +2

Measuring Progress in Fine-grained Vision-and-Language Understanding

2 code implementations • 12 May 2023 • Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh

While pretraining on large-scale image-text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images.

Visual Reasoning

Language Modelling with Pixels

1 code implementation • 14 Jul 2022 • Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott

We pretrain the 86M parameter PIXEL model on the same English data as BERT and evaluate on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts (the text-as-pixels rendering idea is sketched below).

Language Modelling · Named Entity Recognition (NER)
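The core idea, rendering text as pixels rather than subword ids, can be sketched in a few lines. This is an illustrative renderer only; the font, patch size, and `render_to_patches` helper are assumptions, not PIXEL's actual pipeline:

```python
# Illustrative sketch of rendering text to greyscale patches for a
# pixel-based language model. The font, patch size and helper name are
# assumptions for illustration, not PIXEL's actual rendering pipeline.
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_to_patches(text: str, height: int = 16, patch: int = 16) -> np.ndarray:
    """Render `text` as an image strip and split it into square patches."""
    font = ImageFont.load_default()
    measure = ImageDraw.Draw(Image.new("L", (1, 1)))
    width = int(measure.textlength(text, font=font)) + patch
    width -= width % patch                               # pad to whole patches
    img = Image.new("L", (width, height), color=255)     # white background
    ImageDraw.Draw(img).text((0, 2), text, fill=0, font=font)
    arr = np.asarray(img, dtype=np.float32) / 255.0      # (height, width)
    # (num_patches, height, patch): the "token" sequence fed to the encoder.
    return arr.reshape(height, width // patch, patch).transpose(1, 0, 2)

patches = render_to_patches("Language modelling with pixels")
print(patches.shape)  # e.g. (11, 16, 16), depending on the rendered width
```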

Ancestor-to-Creole Transfer is Not a Walk in the Park

no code implementations • insights (ACL) 2022 • Heather Lent, Emanuele Bugliarello, Anders Søgaard

We aim to learn language models for Creole languages for which large volumes of data are not readily available, and therefore explore the potential transfer from ancestor languages (the 'Ancestry Transfer Hypothesis').

Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization

no code implementations • 24 May 2022 • Aishwarya Agrawal, Ivana Kajić, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, Aida Nematzadeh

Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks such as image captioning and visual question answering (VQA).

Image Captioning · Out-of-Distribution Generalization +3

Mostra: A Flexible Balancing Framework to Trade-off User, Artist and Platform Objectives for Music Sequencing

no code implementations • 22 Apr 2022 • Emanuele Bugliarello, Rishabh Mehrotra, James Kirk, Mounia Lalmas

We consider the task of sequencing tracks on music streaming platforms, where the goal is to maximise not only user satisfaction but also artist- and platform-centric objectives that are needed to ensure the platform's long-term health and sustainability.

IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages

3 code implementations • 27 Jan 2022 • Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, Ivan Vulić

Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups.

Cross-Modal Retrieval · Few-Shot Learning +5

Visually Grounded Reasoning across Languages and Cultures

3 code implementations • EMNLP 2021 • Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott

The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.

Visual Reasoning · Zero-Shot Learning

On Language Models for Creoles

1 code implementation • CoNLL (EMNLP) 2021 • Heather Lent, Emanuele Bugliarello, Miryam de Lhoneux, Chen Qiu, Anders Søgaard

Creole languages such as Nigerian Pidgin English and Haitian Creole are under-resourced and largely ignored in the NLP literature.

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

4 code implementations • EMNLP 2021 • Stella Frank, Emanuele Bugliarello, Desmond Elliott

Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality (an ablation test sketched below).

Language Modelling
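A minimal sketch of that cross-modal ablation idea: compare masked-text loss with and without the paired image. The `model` interface, its call signature, and `blank_feats` are hypothetical placeholders, not the paper's actual code:

```python
# Minimal sketch of cross-modal input ablation: if a model truly uses both
# modalities, its masked-text loss should rise when the image is removed.
# `model`, its call signature and `blank_feats` are hypothetical
# placeholders, not the paper's actual code.
import torch

def ablation_gap(model, text_ids, mask_labels, image_feats, blank_feats):
    """Loss increase for masked-text prediction when the image is ablated."""
    with torch.no_grad():
        loss_full = model(text_ids, image_feats, labels=mask_labels).loss
        loss_ablated = model(text_ids, blank_feats, labels=mask_labels).loss
    return (loss_ablated - loss_full).item()

# A large positive gap suggests the model grounds text predictions in the
# image; a gap near zero suggests the visual modality is being ignored.
```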

The Role of Syntactic Planning in Compositional Image Captioning

1 code implementation • EACL 2021 • Emanuele Bugliarello, Desmond Elliott

Work on image captioning has focused on generalizing to images drawn from the same distribution as the training set, rather than on the more challenging problem of generalizing to different distributions of images.

Image Captioning

Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs

3 code implementations • 30 Nov 2020 • Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, Desmond Elliott

Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing.

Enhancing Machine Translation with Dependency-Aware Self-Attention

1 code implementation • ACL 2020 • Emanuele Bugliarello, Naoaki Okazaki

Most neural machine translation models rely only on pairs of parallel sentences, assuming that syntactic information is learned automatically by the attention mechanism (one way to inject it explicitly is sketched below).

Machine Translation · Translation
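One simple way to make self-attention dependency-aware is to rescale each token's attention distribution with a Gaussian centred on its dependency parent, then renormalise. This is a minimal sketch of that general idea, not necessarily the paper's exact formulation; `parent_scaled_attention` and `sigma` are illustrative names:

```python
# Sketch of dependency-aware self-attention: rescale each token's attention
# distribution with a Gaussian centred on its dependency parent, then
# renormalise. Illustrates the general idea in the title above; not
# necessarily the paper's exact formulation.
import torch
import torch.nn.functional as F

def parent_scaled_attention(q, k, v, parent_idx, sigma=1.0):
    """q, k, v: (seq, dim); parent_idx: (seq,) dependency head of each token."""
    seq, dim = q.shape
    scores = q @ k.T / dim ** 0.5                          # (seq, seq)
    pos = torch.arange(seq, dtype=torch.float)
    # dist[i, j] = distance of position j from token i's dependency parent.
    dist = pos.unsqueeze(0) - parent_idx.unsqueeze(1).float()
    gauss = torch.exp(-dist.pow(2) / (2 * sigma ** 2))     # peaks at the parent
    weights = F.softmax(scores, dim=-1) * gauss
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalise rows
    return weights @ v

# Toy usage: 4 tokens whose dependency heads are tokens [1, 1, 3, 1].
q = k = v = torch.randn(4, 8)
out = parent_scaled_attention(q, k, v, torch.tensor([1, 1, 3, 1]))
```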

Matrix Completion in the Unit Hypercube via Structured Matrix Factorization

1 code implementation • 30 May 2019 • Emanuele Bugliarello, Swayambhoo Jain, Vineeth Rakesh

We tackle this challenge with a two-fold approach: first, we transform this task into a constrained matrix completion problem with entries bounded in the unit interval [0, 1]; second, we propose two novel matrix factorization models that leverage our knowledge of the VFX environment (a minimal bounded factorization is sketched below).

Matrix Completion
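A minimal sketch of matrix completion with entries bounded in [0, 1]: factorise the matrix through a sigmoid and fit only the observed entries by gradient descent. The `bounded_mf` helper and its hyperparameters are illustrative assumptions, not the paper's two structured factorization models:

```python
# Minimal sketch of [0, 1]-bounded matrix completion: factorise the matrix
# through a sigmoid and fit only the observed entries by gradient descent.
# `bounded_mf` and its hyperparameters are illustrative, not the paper's
# two structured factorization models.
import numpy as np

def bounded_mf(M, observed, rank=5, lr=0.1, epochs=500, seed=0):
    """M: (m, n) target with entries in [0, 1]; observed: boolean mask."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(epochs):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))    # predictions stay in (0, 1)
        E = np.where(observed, P - M, 0.0)      # error on observed entries only
        G = E * P * (1.0 - P)                   # chain rule through the sigmoid
        U, V = U - lr * (G @ V), V - lr * (G.T @ U)   # simultaneous step
    return 1.0 / (1.0 + np.exp(-(U @ V.T)))

# Toy usage: fit a small [0, 1]-valued matrix with ~40% of entries missing.
rng = np.random.default_rng(1)
M = rng.random((20, 15))
completed = bounded_mf(M, rng.random(M.shape) < 0.6)
```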
