1 code implementation • 19 Jun 2023 • Maria Heitmeier, Yu-Ying Chuang, Seth D. Axen, R. Harald Baayen
So far, the mappings can be obtained either incrementally via error-driven learning, a computationally expensive process that captures frequency effects, or with an efficient but frequency-agnostic solution that models the theoretical endstate of learning (EL), in which all words are learned optimally.
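The frequency-agnostic endstate-of-learning solution mentioned above can be sketched as a closed-form linear mapping. This is a minimal illustration, assuming the discriminative-lexicon convention of a form matrix C and a semantic matrix S with one word per row; the toy matrices and dimensions are made up, not taken from the paper.

```python
import numpy as np

# Hedged sketch: the EL mapping F from form vectors (rows of C) to
# meaning vectors (rows of S) is estimated in one step with the
# Moore-Penrose pseudoinverse, so that C @ F approximates S.
# Toy data for illustration only.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 6))   # 4 words, 6 form features
S = rng.standard_normal((4, 3))   # 4 words, 3 semantic dimensions

F = np.linalg.pinv(C) @ S         # frequency-agnostic closed-form solution
print(np.allclose(C @ F, S))      # True here: C has full row rank
```

Unlike incremental error-driven updating, this solution is computed once and does not depend on how often each word was encountered, which is why it cannot capture frequency effects.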
no code implementations • 8 Sep 2022 • Wafaa Mohammed, Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen
We obtained visually grounded vector representations for these languages and studied whether visual grounding on one or multiple languages improved the performance of embeddings on word similarity and categorization benchmarks.
no code implementations • 5 Jul 2022 • Elnaz Shafaei-Bajestan, Peter Uhrig, R. Harald Baayen
Our goal is to compare two models for the conceptualization of plurality.
1 code implementation • 1 Jul 2022 • Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen
This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision.
no code implementations • 29 Mar 2022 • Elnaz Shafaei-Bajestan, Masoumeh Moradipour-Tari, Peter Uhrig, R. Harald Baayen
In comparison with our approach, a method from compositional distributional semantics, called FRACSS, predicted plural vectors that were more similar to the corpus-extracted plural vectors in terms of direction but not vector length.
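The distinction drawn above, similarity in direction versus similarity in vector length, can be made concrete with a small sketch. The vectors below are invented for illustration; they only show how cosine similarity isolates direction while the Euclidean norm measures length.

```python
import numpy as np

# Two vectors with identical direction but different length:
# cosine similarity is 1.0, yet their norms differ by a factor of 2.
gold = np.array([1.0, 2.0, 2.0])   # stand-in for a corpus-extracted plural vector
pred = np.array([0.5, 1.0, 1.0])   # stand-in for a predicted plural vector

cos = gold @ pred / (np.linalg.norm(gold) * np.linalg.norm(pred))
print(round(cos, 6))               # 1.0 -> same direction
print(np.linalg.norm(gold))        # 3.0
print(np.linalg.norm(pred))        # 1.5 -> half the length
```

A model can therefore score well on direction-based measures such as cosine similarity while still systematically under- or over-shooting the target vectors' magnitudes.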
no code implementations • 8 Jul 2021 • Yu-Ying Chuang, Mihi Kang, Xuefeng Luo, R. Harald Baayen
This paper presents three case studies of modeling aspects of lexical processing with Linear Discriminative Learning (LDL), the computational engine of the Discriminative Lexicon model (Baayen et al., 2019).
no code implementations • 15 Jun 2021 • Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen
This study addresses a series of methodological questions that arise when modeling inflectional morphology with Linear Discriminative Learning.
1 code implementation • CoNLL (EMNLP) 2021 • Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen
The general approach is to embed both textual and visual information into a common space (the grounded space) confined by an explicit relationship between both modalities.
no code implementations • 8 May 2020 • Manuel Traub, Martin V. Butz, R. Harald Baayen, Sebastian Otte
As a consequence, this limits in principle the inherent advantage of SNNs, that is, the potential to develop codes that rely on precise relative spike timings.
no code implementations • WS 2017 • Kristina Geeraert, R. Harald Baayen, John Newman
This study investigates the processing of idiomatic variants through an eye-tracking experiment.