Search Results for author: Roberto Dessì

Found 11 papers, 6 papers with code

Cross-Domain Image Captioning with Discriminative Finetuning

1 code implementation • CVPR 2023 • Roberto Dessì, Michele Bevilacqua, Eleonora Gualdoni, Nathanael Carraz Rakotonirina, Francesca Franzon, Marco Baroni

When the model is used without further tuning to generate captions for out-of-domain datasets, our discriminatively-finetuned captioner generates descriptions that resemble human references more closely than those produced by the same captioner without finetuning.

Descriptive Image Captioning

Can discrete information extraction prompts generalize across language models?

1 code implementation • 20 Feb 2023 • Nathanaël Carraz Rakotonirina, Roberto Dessì, Fabio Petroni, Sebastian Riedel, Marco Baroni

We study whether automatically-induced prompts that effectively extract information from a language model can also be used, out-of-the-box, to probe other language models for the same information.

Language Modelling • Slot Filling • +1
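The transfer setup the abstract describes can be sketched minimally: a cloze-style prompt induced on one model is reused verbatim to probe another. The toy lookup tables below stand in for real language models, and the template form is an assumption for illustration, not taken from the paper:

```python
# Illustrative sketch of cross-model prompt transfer. The "models" are
# toy lookup tables standing in for real LMs; the cloze template form
# is a hypothetical example, not the paper's induced prompt.
def induced_prompt(subject):
    # A prompt automatically induced on a "source" model.
    return f"{subject} is the capital of [MASK]."

# Toy "models": each maps a filled prompt to its top prediction.
model_a = {"Paris is the capital of [MASK].": "France"}
model_b = {"Paris is the capital of [MASK].": "France"}

def probe(model, subject):
    """Reuse the same prompt, out-of-the-box, on any model."""
    return model.get(induced_prompt(subject), "<unk>")

print(probe(model_a, "Paris"))  # France
print(probe(model_b, "Paris"))  # France
```

The question the paper studies is whether the real-world analogue of this transfer succeeds, i.e. whether prompts optimized against one LM still extract the same facts from a different LM.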

Referential communication in heterogeneous communities of pre-trained visual deep networks

1 code implementation • 4 Feb 2023 • Matéo Mahaut, Francesca Franzon, Roberto Dessì, Marco Baroni

As a first step in this direction, we systematically explore the task of "referential communication" in a community of heterogeneous state-of-the-art pre-trained visual networks, showing that they can develop, in a self-supervised way, a shared protocol to refer to a target object among a set of candidates.

Self-Driving Cars

Communication breakdown: On the low mutual intelligibility between human and neural captioning

1 code implementation • 20 Oct 2022 • Roberto Dessì, Eleonora Gualdoni, Francesca Franzon, Gemma Boleda, Marco Baroni

We compare the 0-shot performance of a neural caption-based image retriever when given as input either human-produced captions or captions generated by a neural captioner.

Retrieval
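The caption-based retrieval comparison can be illustrated with a minimal sketch: embed the caption, embed the candidate images, and retrieve the image with the highest cosine similarity. The 2-D toy vectors below stand in for a real vision-language encoder (an assumption for illustration, not the authors' actual model):

```python
import numpy as np

def retrieve(caption_vec, image_vecs):
    """Return the index of the image whose embedding has the highest
    cosine similarity with the caption embedding."""
    caption_vec = caption_vec / np.linalg.norm(caption_vec)
    norms = np.linalg.norm(image_vecs, axis=1, keepdims=True)
    sims = (image_vecs / norms) @ caption_vec
    return int(np.argmax(sims))

# Toy 2-D embeddings standing in for a real encoder's outputs.
images = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
human_caption = np.array([0.9, 0.1])   # closest to image 0
neural_caption = np.array([0.1, 0.9])  # closest to image 1

print(retrieve(human_caption, images))   # 0
print(retrieve(neural_caption, images))  # 1
```

The paper's 0-shot comparison amounts to running this retrieval step once with human-produced captions and once with machine-generated ones, then comparing accuracy.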

Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN

no code implementations • EMNLP (BlackboxNLP) 2021 • Rahma Chaabouni, Roberto Dessì, Eugene Kharitonov

We present several focused modifications of Transformer that greatly improve generalization capabilities on SCAN and select one that remains on par with a vanilla Transformer on a standard machine translation (MT) task.

Machine Translation • Translation

Interpretable agent communication from scratch (with a generic visual processor emerging on the side)

1 code implementation • NeurIPS 2021 • Roberto Dessì, Eugene Kharitonov, Marco Baroni

As deep networks begin to be deployed as autonomous agents, the issue of how they can communicate with each other becomes important.

Self-Supervised Learning

Focus on What's Informative and Ignore What's not: Communication Strategies in a Referential Game

no code implementations • 5 Nov 2019 • Roberto Dessì, Diane Bouchacourt, Davide Crepaldi, Marco Baroni

Research in multi-agent cooperation has shown that artificial agents are able to learn to play a simple referential game while developing a shared lexicon.

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks

no code implementations • ACL 2019 • Roberto Dessì, Marco Baroni

Lake and Baroni (2018) introduced the SCAN dataset, which probes the ability of seq2seq models to capture compositional generalizations, such as inferring the meaning of "jump around" 0-shot from its component words.
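The compositional structure SCAN tests can be sketched with a tiny interpreter for a fragment of its grammar (an illustrative subset only, not the dataset's full grammar): primitives map to single actions, and modifiers like "twice" or "around right" compose them deterministically.

```python
# Minimal interpreter for a fragment of the SCAN command grammar
# (illustrative subset, not the full dataset grammar).
PRIM = {"jump": "JUMP", "walk": "WALK", "run": "RUN", "look": "LOOK"}
TURN = {"left": "LTURN", "right": "RTURN"}

def interpret(cmd):
    """Map a SCAN-style command string to its action sequence."""
    words = cmd.split()
    if words[-1] in ("twice", "thrice"):
        n = 2 if words[-1] == "twice" else 3
        return interpret(" ".join(words[:-1])) * n
    if len(words) == 3 and words[1] == "around" and words[2] in TURN:
        # "jump around right" -> turn-and-act repeated four times
        return [TURN[words[2]], PRIM[words[0]]] * 4
    if len(words) == 1 and words[0] in PRIM:
        return [PRIM[words[0]]]
    raise ValueError(f"unhandled command: {cmd}")

print(interpret("jump twice"))         # ['JUMP', 'JUMP']
print(interpret("jump around right"))  # ['RTURN', 'JUMP'] * 4
```

The generalization challenge is that a model trained on "walk around right" and a bare "jump" must produce the correct sequence for "jump around right" 0-shot, without ever seeing "jump" composed with modifiers.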
