Search Results for author: Stella Frank

Found 19 papers, 5 papers with code

Findings of the Third Shared Task on Multimodal Machine Translation

1 code implementation WS 2018 Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, Stella Frank

In this task, a source sentence in English is supplemented by an image, and participating systems are required to generate a translation of that sentence into German, French or Czech.

Multimodal Machine Translation, Sentence, +1
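As a rough, hypothetical sketch of the task format described in the entry above (the class and field names below are invented for illustration, not taken from the shared-task data or tools), one instance pairs an English sentence and an image with target-language translations:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MultimodalMTExample:
    """Illustrative container for one multimodal translation instance."""
    source_en: str                 # English source sentence
    image_path: str                # accompanying image
    targets: Dict[str, str] = field(default_factory=dict)  # translations keyed by language code

example = MultimodalMTExample(
    source_en="A dog is running on the beach.",
    image_path="images/0001.jpg",
    targets={
        "de": "Ein Hund rennt am Strand.",
        "fr": "Un chien court sur la plage.",
        "cs": "Pes běží po pláži.",
    },
)
print(example.targets["de"])
```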

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

4 code implementations EMNLP 2021 Stella Frank, Emanuele Bugliarello, Desmond Elliott

Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality.

Language Modelling
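A minimal sketch of the modality-ablation idea mentioned in the entry above, assuming a generic multimodal model that consumes separate text and image features; the tensor shapes and interface are illustrative, not the paper's actual setup:

```python
import torch

def ablate_modality(text_feats, image_feats, drop="image"):
    """Zero out one modality's features to test cross-modal reliance.

    Generic illustration of input ablation, not the exact procedure
    or model interface used in the paper.
    """
    if drop == "image":
        image_feats = torch.zeros_like(image_feats)
    elif drop == "text":
        text_feats = torch.zeros_like(text_feats)
    return text_feats, image_feats

# Usage sketch: compare task performance with full inputs vs. ablated inputs;
# a model that truly fuses both modalities should degrade when one is removed.
text_feats = torch.randn(1, 12, 768)    # dummy token features
image_feats = torch.randn(1, 36, 2048)  # dummy region features
t_abl, v_abl = ablate_modality(text_feats, image_feats, drop="image")
```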

Multilingual Image Description with Neural Sequence Models

1 code implementation 15 Oct 2015 Desmond Elliott, Stella Frank, Eva Hasler

In this paper we present an approach to multi-language image description bringing together insights from neural machine translation and neural image description.

Image Captioning, Translation
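As a hedged illustration of the general neural image description idea referenced above, the toy decoder below conditions a recurrent language model on an image feature vector; it is a generic sketch and does not reproduce the paper's multilingual transfer architecture:

```python
import torch
import torch.nn as nn

class ImageConditionedDecoder(nn.Module):
    """Toy image-conditioned description generator.

    The image features initialise the recurrent state; vocabulary and
    dimensions are placeholders for illustration only.
    """
    def __init__(self, vocab_size, img_dim=2048, emb_dim=256, hid_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid_dim)
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, image_feats, token_ids):
        h0 = torch.tanh(self.img_proj(image_feats)).unsqueeze(0)  # init hidden state from image
        emb = self.embed(token_ids)
        hidden, _ = self.gru(emb, h0)
        return self.out(hidden)  # next-token logits per position

# Dummy forward pass with random image features and token ids
decoder = ImageConditionedDecoder(vocab_size=1000)
logits = decoder(torch.randn(2, 2048), torch.randint(0, 1000, (2, 7)))
```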

Splitting Compounds by Semantic Analogy

no code implementations WS 2015 Joachim Daiber, Lautaro Quiroz, Roger Wechsler, Stella Frank

Compounding is a highly productive word-formation process in some languages that is often problematic for natural language processing applications.

Machine Translation, Translation, +1
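A simplified, hypothetical sketch of embedding-based compound splitting in the spirit of the entry above: candidate splits are scored by how well the component vectors approximate the compound vector (the paper's actual analogy-based method is not reproduced here):

```python
import numpy as np

def score_split(compound, modifier, head, vectors):
    """Score a candidate split of a compound by cosine similarity between
    the sum of its parts and the compound itself.

    Simplified illustration only; the paper scores splits via semantic
    analogies over known compound/part pairs.
    """
    if any(w not in vectors for w in (compound, modifier, head)):
        return -1.0
    parts = vectors[modifier] + vectors[head]
    comp = vectors[compound]
    return float(parts @ comp / (np.linalg.norm(parts) * np.linalg.norm(comp)))

# Toy vectors for the German compound "Apfelbaum" (= Apfel + Baum, "apple tree")
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ["Apfelbaum", "Apfel", "Baum", "Apfelb", "aum"]}
print(score_split("Apfelbaum", "Apfel", "Baum", vecs))
```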

CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language Learning

no code implementations ACL 2020 Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon

To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes comprising three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.

Attribute, Grounded language learning
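A minimal, hypothetical sketch of a multi-task evaluation harness with the three sub-task slots listed above; the evaluator functions, scores, and aggregation are invented placeholders, not GROLLA's actual implementation:

```python
from statistics import mean
from typing import Callable, Dict

def evaluate(model, sub_tasks: Dict[str, Callable]) -> Dict[str, float]:
    """Run each sub-task evaluator and report per-task plus averaged scores.

    Hypothetical harness; the real sub-tasks and metrics are defined in the paper.
    """
    scores = {name: task(model) for name, task in sub_tasks.items()}
    scores["overall"] = mean(scores.values())
    return scores

# Placeholder evaluators standing in for the three sub-tasks named above
sub_tasks = {
    "goal_oriented": lambda model: 0.61,         # e.g. game success rate
    "attribute_prediction": lambda model: 0.54,  # e.g. F1 over object attributes
    "zero_shot": lambda model: 0.48,             # e.g. success on unseen object categories
}
print(evaluate(model=None, sub_tasks=sub_tasks))
```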

Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color

no code implementations CoNLL (EMNLP) 2021 Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, Anders Søgaard

Pretrained language models have been shown to encode relational information, such as the relations between entities or concepts in knowledge bases, e.g. (Paris, Capital, France).
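As a hedged sketch of this kind of probing, the snippet below correlates pairwise distances between color-term vectors in a (here randomly generated) embedding space with distances in a stand-in perceptual space; the models, data, and analysis in the paper itself differ:

```python
import numpy as np
from scipy.stats import spearmanr

def representational_alignment(lm_vecs, perceptual_vecs):
    """Correlate pairwise distances in an embedding space with pairwise
    distances in a perceptual space (generic representational-similarity sketch)."""
    names = list(lm_vecs)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    d_lm = [np.linalg.norm(lm_vecs[a] - lm_vecs[b]) for a, b in pairs]
    d_pc = [np.linalg.norm(perceptual_vecs[a] - perceptual_vecs[b]) for a, b in pairs]
    rho, _pvalue = spearmanr(d_lm, d_pc)
    return float(rho)

# Toy example with random "embeddings" and made-up perceptual coordinates
rng = np.random.default_rng(0)
colors = ["red", "green", "blue", "yellow"]
lm = {c: rng.normal(size=16) for c in colors}
perceptual = {c: rng.normal(size=3) for c in colors}  # stand-in for e.g. CIELAB points
print(representational_alignment(lm, perceptual))
```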

A Small but Informed and Diverse Model: The Case of the Multimodal GuessWhat!? Guessing Game

no code implementations CLASP 2022 Claudio Greco, Alberto Testoni, Raffaella Bernardi, Stella Frank

Pre-trained Vision and Language Transformers achieve high performance on downstream tasks due to their ability to transfer representational knowledge accumulated during pretraining on substantial amounts of data.
