1 code implementation • 19 Jun 2023 • Maria Heitmeier, Yu-Ying Chuang, Seth D. Axen, R. Harald Baayen
So far, the mappings can be obtained either incrementally via error-driven learning, a computationally expensive process that captures frequency effects, or via an efficient but frequency-agnostic closed-form solution modelling the theoretical endstate of learning (EL), in which all words are learned optimally.
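The contrast between the two ways of obtaining the mappings can be illustrated with a minimal sketch. The toy matrices below are invented for illustration (they are not the paper's data): `C` holds form (cue) vectors and `S` holds semantic vectors, one row per word. The closed-form endstate solution uses the Moore-Penrose pseudoinverse, while the incremental route applies a Widrow-Hoff (delta-rule) update whose outcome depends on how often each word is sampled, which is what makes it frequency-sensitive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 5 words, 4 form dimensions, 3 semantic dimensions.
C = rng.normal(size=(5, 4))  # form (cue) matrix
S = rng.normal(size=(5, 3))  # semantic matrix

# Endstate of learning (EL): frequency-agnostic closed-form least-squares
# solution F = C+ S via the Moore-Penrose pseudoinverse.
F_el = np.linalg.pinv(C) @ S

# Incremental error-driven learning (Widrow-Hoff / delta rule): the mapping
# is updated word by word, so token frequencies shape the solution.
F_inc = np.zeros((4, 3))
eta = 0.01  # learning rate
for _ in range(5000):
    i = rng.integers(5)                   # sample a word (uniform here)
    error = S[i] - C[i] @ F_inc           # prediction error for this word
    F_inc += eta * np.outer(C[i], error)  # delta-rule weight update

# Under uniform sampling the incremental mapping approaches the EL solution;
# with skewed sampling, frequent words would be learned more accurately.
```

Replacing the uniform `rng.integers(5)` draw with frequency-weighted sampling is what lets the incremental route model frequency effects that the EL solution ignores.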
1 code implementation • 1 Jul 2022 • Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen
This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision.
no code implementations • 30 Jun 2022 • Hassan Shahmohammadi, Maria Heitmeier, Elnaz Shafaei-Bajestan, Hendrik P. A. Lensch, Harald Baayen
To what extent does this setup rely on visual information from images?
1 code implementation • 17 Jun 2022 • Hassan Shahmohammadi, Maria Heitmeier, Elnaz Shafaei-Bajestan, Hendrik P. A. Lensch, Harald Baayen
Our model effectively balances the interplay between language and vision by aligning textual embeddings with visual information while simultaneously preserving the distributional statistics that characterize word usage in text corpora.
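The balancing act described above can be sketched as a two-term objective: an alignment term pulling the embeddings of words that have associated images toward their visual vectors, and a preservation term keeping all embeddings close to their original distributional positions. The sketch below is a minimal illustration under invented toy data and a plain linear map trained by gradient descent; it is not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: textual embeddings for 100 words (8-dim); only the
# first 30 words have image-derived visual vectors.
T_all = rng.normal(size=(100, 8))  # textual embeddings for all words
seen = slice(0, 30)                # words with associated images
V = rng.normal(size=(30, 8))       # visual vectors for the "seen" words

lam = 1.0      # weight on preserving distributional statistics
eta = 0.001    # learning rate
M = np.eye(8)  # linear grounding map, initialised at the identity

for _ in range(2000):
    # Alignment term: push mapped embeddings of image-paired words
    # toward their visual vectors.
    g_align = 2 * T_all[seen].T @ (T_all[seen] @ M - V)
    # Preservation term: keep ALL mapped embeddings close to the originals,
    # so corpus-derived word-usage statistics are not washed out.
    g_pres = 2 * lam * T_all.T @ (T_all @ M - T_all)
    M -= eta * (g_align + g_pres)

# The learned map grounds every word, including those never paired with
# an image, because it transfers via the shared textual space.
grounded = T_all @ M
```

The preservation weight `lam` (a made-up hyperparameter here) controls the trade-off: large values keep the grounded space nearly identical to the textual one, small values let vision dominate for the image-paired words.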
no code implementations • 15 Jun 2021 • Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen
This study addresses a series of methodological questions that arise when modeling inflectional morphology with Linear Discriminative Learning.