1 code implementation • NAACL (CMCL) 2021 • Mitja Nikolaus, Abdellah Fourtassi
When learning their native language, children acquire the meanings of words and sentences from highly ambiguous input without much explicit supervision.
1 code implementation • CoNLL (EMNLP) 2021 • Mitja Nikolaus, Abdellah Fourtassi
In this work, we propose a model integrating both perception- and production-based learning, using artificial neural networks trained on a large corpus of crowd-sourced images with corresponding descriptions.
no code implementations • 21 Mar 2024 • Mitja Nikolaus, Abhishek Agrawal, Petros Kaklamanis, Alex Warstadt, Abdellah Fourtassi
The acquisition of grammar has been a central question in adjudicating between theories of language acquisition.
no code implementations • 18 Mar 2024 • Mitja Nikolaus, Milad Mozafari, Nicholas Asher, Leila Reddy, Rufin VanRullen
Previous studies have shown that it is possible to map brain activation data of subjects viewing images onto the feature representation space of not only vision models (modality-specific decoding) but also language models (cross-modal decoding).
1 code implementation • 21 Oct 2022 • Mitja Nikolaus, Emmanuelle Salin, Stephane Ayache, Abdellah Fourtassi, Benoit Favre
Recent advances in vision-and-language modeling have seen the development of Transformer architectures that achieve remarkable performance on multimodal reasoning tasks.
1 code implementation • 25 Feb 2022 • Mitja Nikolaus, Afra Alishahi, Grzegorz Chrupała
In the real world, the coupling between the linguistic and visual modalities is loose and often confounded by correlations with non-semantic aspects of the speech signal.
1 code implementation • CoNLL 2019 • Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, Desmond Elliott
Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts.
no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni
We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.