Language users process utterances by segmenting them into cognitive units that vary in size and linguistic level.
We furthermore investigate whether vector quantisation, a technique for discrete representation learning, aids the model in the discovery and recognition of words.
In this paper, we create visually grounded word embeddings by combining English text and images, and compare them to popular text-based methods to determine whether visual information allows our model to better capture cognitive aspects of word meaning.
This study addresses the question of whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge.
To our knowledge, this is the first computational cognitive model that aims to simulate code-switched sentence production.
Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the difficulty of the text.
Opinionated Natural Language Generation (ONLG) is a new and challenging task that aims to automatically generate human-like, subjective responses to opinionated articles online.