Search Results for author: R. Harald Baayen

Found 11 papers, 3 papers with code

Word-specific tonal realizations in Mandarin

no code implementations · 11 May 2024 · Yu-Ying Chuang, Melanie J. Bell, Yu-Hsiang Tseng, R. Harald Baayen

We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 30% accuracy.

Word Embeddings

Frequency effects in Linear Discriminative Learning

1 code implementation · 19 Jun 2023 · Maria Heitmeier, Yu-Ying Chuang, Seth D. Axen, R. Harald Baayen

So far, the mappings can be obtained either incrementally via error-driven learning, a computationally expensive process that captures frequency effects, or via an efficient but frequency-agnostic solution modelling the theoretical endstate of learning (EL), in which all words are learned optimally.

Incremental Learning
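The contrast between the two ways of obtaining the mappings can be sketched with toy matrices (the sizes, token frequencies, and learning rate below are invented for illustration, and the delta rule stands in for the paper's error-driven learning):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 words, 8 form cues, 4 semantic dimensions.
C = rng.normal(size=(5, 8))   # cue (form) vectors, one row per word
S = rng.normal(size=(5, 4))   # semantic vectors, one row per word

# Endstate of learning (EL): closed-form least-squares mapping, frequency-agnostic.
F_el = np.linalg.pinv(C) @ S

# Incremental error-driven learning (delta rule): presenting frequent words
# more often shifts the mapping toward them, capturing frequency effects.
freqs = np.array([50, 20, 10, 5, 1])   # hypothetical token frequencies
F_inc = np.zeros((8, 4))
eta = 0.01                             # illustrative learning rate
for _ in range(200):                   # learning epochs
    for i, f in enumerate(freqs):
        for _ in range(f):             # frequency-weighted exposure
            err = S[i] - C[i] @ F_inc
            F_inc += eta * np.outer(C[i], err)
```

The EL mapping treats every row of `C` and `S` identically, whereas the incremental mapping accumulates more updates for high-frequency words.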

Visual Grounding of Inter-lingual Word-Embeddings

no code implementations · 8 Sep 2022 · Wafaa Mohammed, Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen

We obtained visually grounded vector representations for these languages and studied whether visual grounding on one or multiple languages improved the performance of embeddings on word similarity and categorization benchmarks.

Visual Grounding Word Embeddings +1

Making sense of spoken plurals

no code implementations · 5 Jul 2022 · Elnaz Shafaei-Bajestan, Peter Uhrig, R. Harald Baayen

Our goal is to compare two models for the conceptualization of plurality.

How trial-to-trial learning shapes mappings in the mental lexicon: Modelling Lexical Decision with Linear Discriminative Learning

1 code implementation · 1 Jul 2022 · Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen

This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision.

Additive models Incremental Learning

Semantic properties of English nominal pluralization: Insights from word embeddings

no code implementations · 29 Mar 2022 · Elnaz Shafaei-Bajestan, Masoumeh Moradipour-Tari, Peter Uhrig, R. Harald Baayen

In comparison with our approach, a method from compositional distributional semantics, called FRACSS, predicted plural vectors that were more similar to the corpus-extracted plural vectors in terms of direction but not vector length.

Word Embeddings
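The direction-versus-length contrast can be made concrete with ordinary vector arithmetic (the three-dimensional vectors below are invented for illustration): cosine similarity measures agreement in direction, while the Euclidean norm measures length.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction, regardless of length."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical corpus-extracted plural vector and two model predictions.
gold = np.array([1.0, 2.0, 2.0])            # corpus plural vector, norm 3
pred_direction = np.array([2.0, 4.0, 4.0])  # same direction, wrong length (norm 6)
pred_length = np.array([2.0, 2.0, 1.0])     # same length (norm 3), different direction

print(cosine(gold, pred_direction))         # 1.0: perfectly aligned in direction
print(np.linalg.norm(pred_direction))       # 6.0: twice too long
print(cosine(gold, pred_length))            # < 1.0: direction mismatch
print(np.linalg.norm(pred_length))          # 3.0: length matches gold
```

A predicted vector can thus score highly on one criterion while missing the other, which is the pattern reported for FRACSS.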

Vector Space Morphology with Linear Discriminative Learning

no code implementations · 8 Jul 2021 · Yu-Ying Chuang, Mihi Kang, Xuefeng Luo, R. Harald Baayen

This paper presents three case studies of modeling aspects of lexical processing with Linear Discriminative Learning (LDL), the computational engine of the Discriminative Lexicon model (Baayen et al., 2019).

Modeling morphology with Linear Discriminative Learning: considerations and design choices

no code implementations · 15 Jun 2021 · Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen

This study addresses a series of methodological questions that arise when modeling inflectional morphology with Linear Discriminative Learning.

Incremental Learning

Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training

1 code implementation · CoNLL (EMNLP) 2021 · Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen

The general approach is to embed both textual and visual information into a common space (the grounded space) confined by an explicit relationship between both modalities.

Multi-Task Learning Word Embeddings
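A minimal linear sketch of projecting two modalities into a shared grounded space (the dimensions, random data, and least-squares map below are illustrative assumptions, not the paper's multi-task architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy paired data: text embeddings and image features for N items.
N, d_txt, d_img, d_ground = 100, 300, 512, 64
T = rng.normal(size=(N, d_txt))   # textual embeddings, one row per item
V = rng.normal(size=(N, d_img))   # visual features for the same items

# One simple "explicit relationship" between modalities: project images into
# the grounded space, then learn a text-to-grounded map that lands on them.
W_v = rng.normal(size=(d_img, d_ground)) / np.sqrt(d_img)  # fixed image projection
G = V @ W_v                                                # grounded image targets

# Least-squares text-to-grounded mapping (linear stand-in for trained networks).
W_t, *_ = np.linalg.lstsq(T, G, rcond=None)
grounded_text = T @ W_t           # text now lives in the grounded space
```

In the grounded space, each text vector sits near the projection of its paired image, which is the kind of cross-modal constraint the abstract describes.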

Learning Precise Spike Timings with Eligibility Traces

no code implementations · 8 May 2020 · Manuel Traub, Martin V. Butz, R. Harald Baayen, Sebastian Otte

As a consequence, this in principle limits the inherent advantage of SNNs, namely the potential to develop codes that rely on precise relative spike timings.

Understanding Idiomatic Variation

no code implementations · WS 2017 · Kristina Geeraert, R. Harald Baayen, John Newman

This study investigates the processing of idiomatic variants through an eye-tracking experiment.

Semantic Textual Similarity
