Search Results for author: Emanuele Bugliarello

Found 15 papers, 11 papers with code

Language Modelling with Pixels

1 code implementation • 14 Jul 2022 • Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott

Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages.

Language Modelling • Named Entity Recognition (NER)

Ancestor-to-Creole Transfer is Not a Walk in the Park

no code implementations • insights (ACL) 2022 • Heather Lent, Emanuele Bugliarello, Anders Søgaard

We aim to learn language models for Creole languages for which large volumes of data are not readily available, and therefore explore the potential transfer from ancestor languages (the 'Ancestry Transfer Hypothesis').

Rethinking Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization

no code implementations • 24 May 2022 • Aishwarya Agrawal, Ivana Kajić, Emanuele Bugliarello, Elnaz Davoodi, Anita Gergely, Phil Blunsom, Aida Nematzadeh

Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks such as image captioning and visual question answering (VQA).

Image Captioning • Out-of-Distribution Generalization • +4

Mostra: A Flexible Balancing Framework to Trade-off User, Artist and Platform Objectives for Music Sequencing

no code implementations • 22 Apr 2022 • Emanuele Bugliarello, Rishabh Mehrotra, James Kirk, Mounia Lalmas

We consider the task of sequencing tracks on music streaming platforms where the goal is to maximise not only user satisfaction, but also artist- and platform-centric objectives, needed to ensure long-term health and sustainability of the platform.

IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages

2 code implementations • 27 Jan 2022 • Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, Ivan Vulić

Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups.

Cross-Modal Retrieval • Few-Shot Learning • +6

Visually Grounded Reasoning across Languages and Cultures

2 code implementations • EMNLP 2021 • Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott

The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.

Visual Reasoning • Zero-Shot Learning

On Language Models for Creoles

1 code implementation • CoNLL (EMNLP) 2021 • Heather Lent, Emanuele Bugliarello, Miryam de Lhoneux, Chen Qiu, Anders Søgaard

Creole languages such as Nigerian Pidgin English and Haitian Creole are under-resourced and largely ignored in the NLP literature.

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

2 code implementations • EMNLP 2021 • Stella Frank, Emanuele Bugliarello, Desmond Elliott

Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality.

Language Modelling

The Role of Syntactic Planning in Compositional Image Captioning

1 code implementation • EACL 2021 • Emanuele Bugliarello, Desmond Elliott

Image captioning has focused on generalizing to images drawn from the same distribution as the training set, and not to the more challenging problem of generalizing to different distributions of images.

Image Captioning

Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs

2 code implementations • 30 Nov 2020 • Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, Desmond Elliott

Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing.

Enhancing Machine Translation with Dependency-Aware Self-Attention

1 code implementation • ACL 2020 • Emanuele Bugliarello, Naoaki Okazaki

Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism.

Machine Translation • Translation

Matrix Completion in the Unit Hypercube via Structured Matrix Factorization

1 code implementation • 30 May 2019 • Emanuele Bugliarello, Swayambhoo Jain, Vineeth Rakesh

We tackle this challenge by using a two-fold approach: first, we transform this task into a constrained matrix completion problem with entries bounded in the unit interval [0, 1]; second, we propose two novel matrix factorization models that leverage our knowledge of the VFX environment.

Matrix Completion
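The abstract above describes completing a matrix whose entries must stay in the unit interval [0, 1] via matrix factorization. The paper's two proposed models are not reproduced here, but a minimal illustrative sketch of the general idea — not the authors' actual method — is to pass a low-rank factorization through a sigmoid so every completed entry is automatically bounded:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bounded_mf(M, mask, rank=4, lr=0.1, epochs=3000, seed=0):
    """Fit M ≈ sigmoid(U @ V.T) by gradient descent on observed entries.

    Because predictions go through a sigmoid, every completed entry
    lies in (0, 1) by construction. Hypothetical sketch, not the
    models from the paper.

    M    : (m, n) array with observed entries in [0, 1]
    mask : boolean (m, n) array, True where M is observed
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(epochs):
        P = sigmoid(U @ V.T)               # predictions, always in (0, 1)
        E = np.where(mask, P - M, 0.0)     # squared-error gradient, observed only
        G = E * P * (1.0 - P)              # chain rule through the sigmoid
        U, V = U - lr * (G @ V), V - lr * (G.T @ U)
    return sigmoid(U @ V.T)                # completed matrix, entries in (0, 1)
```

The sigmoid reparameterization is one simple way to enforce the [0, 1] constraint without projection steps; the unobserved entries are then read off the reconstructed matrix.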
