no code implementations • CLASP 2022 • Claudio Greco, Alberto Testoni, Raffaella Bernardi, Stella Frank
Pre-trained Vision and Language Transformers achieve high performance on downstream tasks due to their ability to transfer representational knowledge accumulated during pretraining on substantial amounts of data.
1 code implementation • 24 Oct 2022 • Chen Qiu, Dan Oneata, Emanuele Bugliarello, Stella Frank, Desmond Elliott
We call this framework TD-MML: Translated Data for Multilingual Multimodal Learning; it can be applied to any multimodal dataset and model (see the sketch below).
Zero-Shot Cross-Lingual Image-to-Text Retrieval
Zero-Shot Cross-Lingual Text-to-Image Retrieval
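The recipe behind TD-MML is simple to prototype: machine-translate a dataset's English captions into the target languages before multimodal training. Below is a minimal sketch using an off-the-shelf MarianMT model from Hugging Face; the model name, language pair, and default decoding are illustrative assumptions, not the paper's exact pipeline:

```python
from transformers import MarianMTModel, MarianTokenizer

# Hedged sketch: translate English captions into German with an off-the-shelf
# MarianMT model, in the spirit of TD-MML's translated training data.
# Model choice and decoding settings are assumptions, not the paper's setup.
model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

captions = ["A dog runs across the beach.", "Two people ride bicycles."]
batch = tokenizer(captions, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```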
no code implementations • ACL 2022 • Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, Anders Søgaard
Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages.
no code implementations • CoNLL (EMNLP) 2021 • Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard
Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g. (Paris, Capital, France).
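A probe of this kind can be approximated with a cloze query against a pretrained masked language model. A minimal sketch follows; the model choice and prompt wording are illustrative assumptions, not the paper's probing protocol:

```python
from transformers import pipeline

# Hedged sketch: query a pretrained LM for the relation (Paris, Capital, France)
# with a fill-in-the-blank prompt. Model and prompt are assumptions.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Paris is the capital of [MASK].")[:3]:
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```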
2 code implementations • EMNLP 2021 • Stella Frank, Emanuele Bugliarello, Desmond Elliott
Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs from one modality are missing.
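The underlying evaluation idea, ablating one modality and measuring how much performance degrades, can be sketched in a few lines. Here `model` stands for a hypothetical multimodal transformer that returns a loss given text and visual inputs; the interface is an assumption, not code from the paper:

```python
import torch

def ablation_gap(model, text_ids, visual_feats):
    """Loss increase when the visual input is replaced by zeros.

    A larger gap suggests the model genuinely relies on cross-modal information.
    """
    with torch.no_grad():
        full_loss = model(text_ids, visual_feats)
        blank = torch.zeros_like(visual_feats)  # ablate the visual modality
        ablated_loss = model(text_ids, blank)
    return (ablated_loss - full_loss).item()
```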
no code implementations • ACL 2020 • Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon
To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes, with three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.
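Assuming the three sub-task scores are simply averaged into one headline number (an equal-weighting assumption for illustration, not necessarily the paper's exact aggregation), a combined score could look like:

```python
# Hedged sketch: combine GROLLA's three sub-task scores into one number.
# Equal weighting is an assumption made for illustration only.
def grolla_score(goal_acc: float, attribute_f1: float, zero_shot_acc: float) -> float:
    """Average of goal-oriented, attribute-prediction, and zero-shot scores."""
    return (goal_acc + attribute_f1 + zero_shot_acc) / 3.0

print(grolla_score(0.55, 0.62, 0.48))  # -> 0.55
```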
no code implementations • 11 Oct 2019 • Shangmin Guo, Yi Ren, Serhii Havrylov, Stella Frank, Ivan Titov, Kenny Smith
Since it was first introduced, computer simulation has become an increasingly important tool in evolutionary linguistics.
1 code implementation • WS 2018 • Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, Stella Frank
In this task, a source sentence in English is supplemented by an image, and participating systems are required to translate the sentence into German, French, or Czech.
no code implementations • WS 2017 • Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, Lucia Specia
The multilingual image description task was changed such that at test time, only the image is given.
no code implementations • WS 2016 • Jan-Thorsten Peter, Tamer Alkhouli, Hermann Ney, Matthias Huck, Fabienne Braune, Alexander Fraser, Aleš Tamchyna, Ondřej Bojar, Barry Haddow, Rico Sennrich, Frédéric Blain, Lucia Specia, Jan Niehues, Alex Waibel, Alexandre Allauzen, Lauriane Aufrant, Franck Burlot, Elena Knyazeva, Thomas Lavergne, François Yvon, Mārcis Pinnis, Stella Frank
Ranked #12 on Machine Translation on WMT2016 English-Romanian
2 code implementations • WS 2016 • Desmond Elliott, Stella Frank, Khalil Sima'an, Lucia Specia
We introduce the Multi30K dataset to stimulate multilingual multimodal research.
1 code implementation • 15 Oct 2015 • Desmond Elliott, Stella Frank, Eva Hasler
In this paper we present an approach to multi-language image description, bringing together insights from neural machine translation and neural image description.
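One way to combine the two strands is to condition a neural translation decoder on image features. A minimal sketch follows; the layer sizes, GRU wiring, and fusion-by-addition are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

# Hedged sketch: an image-conditioned translation decoder, in the spirit of
# combining neural MT with neural image description. All sizes and the
# additive fusion of image features are assumptions, not the paper's model.
class MultimodalCaptionTranslator(nn.Module):
    def __init__(self, vocab_src: int, vocab_tgt: int, d: int = 256, img_dim: int = 2048):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_src, d)
        self.tgt_emb = nn.Embedding(vocab_tgt, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.img_proj = nn.Linear(img_dim, d)  # project CNN features into the GRU state
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab_tgt)

    def forward(self, src_ids, img_feats, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))
        h = h + self.img_proj(img_feats).unsqueeze(0)  # fuse image into the initial state
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # logits over the target vocabulary

model = MultimodalCaptionTranslator(vocab_src=1000, vocab_tgt=1200)
logits = model(torch.randint(0, 1000, (2, 7)),   # source caption token ids
               torch.randn(2, 2048),             # image features
               torch.randint(0, 1200, (2, 9)))   # target-side input tokens
```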
no code implementations • WS 2015 • Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank
Compounding is a highly productive word-formation process in some languages, and it is often problematic for natural language processing applications.
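As a point of reference, the classic baseline splits a compound at the position whose parts have the highest corpus frequency. Here is a minimal sketch of that frequency baseline (in the style of Koehn & Knight 2003), explicitly not the semantic-analogy method of the paper above; the toy vocabulary counts are invented:

```python
# Hedged sketch: a frequency-based compound splitter (Koehn & Knight 2003
# style baseline), NOT the semantic-analogy method of the paper above.
# The toy vocabulary counts below are invented for illustration.
vocab = {"eisen": 50, "bahn": 80, "eisenbahn": 5, "hof": 120, "bahnhof": 60}

def split_compound(word: str, min_len: int = 3) -> list[str]:
    """Return the binary split whose parts have the highest geometric-mean count."""
    best_score, best_split = vocab.get(word, 0), [word]  # default: keep unsplit
    for i in range(min_len, len(word) - min_len + 1):
        left, right = word[:i], word[i:]
        if left in vocab and right in vocab:
            score = (vocab[left] * vocab[right]) ** 0.5
            if score > best_score:
                best_score, best_split = score, [left, right]
    return best_split

print(split_compound("eisenbahn"))  # -> ['eisen', 'bahn']
```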