Search Results for author: Matthieu Futeral

Found 5 papers, 2 papers with code

Towards Zero-Shot Multimodal Machine Translation

2 code implementations • 18 Jul 2024 • Matthieu Futeral, Cordelia Schmid, Benoît Sagot, Rachel Bawden

Current multimodal machine translation (MMT) systems rely on fully supervised data (i.e. models are trained on sentences paired with their translations and accompanying images).

Language Modelling • Multimodal Machine Translation +1

mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus

no code implementations • 13 Jun 2024 • Matthieu Futeral, Armel Zebaze, Pedro Ortiz Suarez, Julien Abadji, Rémi Lacroix, Cordelia Schmid, Rachel Bawden, Benoît Sagot

We additionally train two types of multilingual models to demonstrate the benefits of mOSCAR: (1) a model trained on a subset of mOSCAR plus captioning data and (2) a model trained on captioning data only.

Few-Shot Learning • In-Context Learning

MAD Speech: Measures of Acoustic Diversity of Speech

no code implementations • 16 Apr 2024 • Matthieu Futeral, Andrea Agostinelli, Marco Tagliasacchi, Neil Zeghidour, Eugene Kharitonov

Using these datasets, we demonstrate that our proposed metrics achieve stronger agreement with ground-truth diversity than baseline metrics.

Diversity

Tackling Ambiguity with Images: Improved Multimodal Machine Translation and Contrastive Evaluation

2 code implementations • 20 Dec 2022 • Matthieu Futeral, Cordelia Schmid, Ivan Laptev, Benoît Sagot, Rachel Bawden

One of the major challenges of machine translation (MT) is ambiguity, which can in some cases be resolved by accompanying context such as images.

Multimodal Machine Translation • Translation
