1 code implementation • 14 Mar 2022 • Danny Merkx, Sebastiaan Scholten, Stefan L. Frank, Mirjam Ernestus, Odette Scharenborg
We furthermore investigate whether vector quantisation, a technique for discrete representation learning, aids the model in the discovery and recognition of words.
1 code implementation • CMCL (ACL) 2022 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
In this paper we create visually grounded word embeddings by combining English text and images and compare them to popular text-based methods, to see if visual information allows our model to better capture cognitive aspects of word meaning.
1 code implementation • 16 Jun 2021 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
This study addresses the question of whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge.
no code implementations • 31 May 2020 • Sebastiaan Scholten, Danny Merkx, Odette Scharenborg
We investigated word recognition in a Visually Grounded Speech model.
1 code implementation • NAACL (CMCL) 2021 • Danny Merkx, Stefan L. Frank
Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing.
1 code implementation • 9 Sep 2019 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
Humans learn language by interaction with their environment and listening to other humans.
1 code implementation • 27 Mar 2019 • Danny Merkx, Stefan Frank
The system achieves state-of-the-art results on several of these benchmarks, showing that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence-level semantics.
no code implementations • 14 Feb 2018 • Odette Scharenborg, Laurent Besacier, Alan Black, Mark Hasegawa-Johnson, Florian Metze, Graham Neubig, Sebastian Stueker, Pierre Godard, Markus Mueller, Lucas Ondel, Shruti Palaskar, Philip Arthur, Francesco Ciannella, Mingxing Du, Elin Larsen, Danny Merkx, Rachid Riad, Liming Wang, Emmanuel Dupoux
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding the discovery of linguistic units (subwords and words) in a language without orthography.