Search Results for author: Markus J. Hofmann

Found 4 papers, 0 papers with code

Language Models Explain Word Reading Times Better Than Empirical Predictability

no code implementations • 2 Feb 2022 Markus J. Hofmann, Steffen Remus, Chris Biemann, Ralph Radach, Lars Kuchinke

In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentence.
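
A minimal sketch of that kind of recurrent language model, assuming PyTorch, an LSTM, and a toy two-sentence corpus; the architecture and hyperparameters are illustrative placeholders, not the model used in the paper:

```python
import torch
import torch.nn as nn

# Toy corpus and vocabulary (placeholders)
sentences = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}

class RNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)  # next-word logits at every position

model = RNNLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for sent in sentences:
    ids = torch.tensor([[vocab[w] for w in sent]])
    logits = model(ids[:, :-1])  # predict each word from its preceding words
    loss = loss_fn(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the model's probability for the actually occurring word can serve as a predictability estimate for that word in context.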

Tasks: Additive models, Retrieval (+4 more)

Individual corpora predict fast memory retrieval during reading

no code implementations • COLING (CogALex) 2020 Markus J. Hofmann, Lara Müller, Andre Rölke, Ralph Radach, Chris Biemann

We trained word2vec models on individual corpora and on a 70-million-sentence newspaper corpus to obtain individual and norm-based long-term memory structure.
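
A hedged sketch of that pipeline, assuming gensim's Word2Vec and toy token lists in place of the actual individual and newspaper corpora; all parameters are assumptions, not the authors' settings:

```python
from gensim.models import Word2Vec

def train(corpus):
    # corpus: iterable of tokenized sentences (lists of words)
    return Word2Vec(corpus, vector_size=100, window=5, min_count=1, workers=4)

# Placeholder corpora; the paper uses an individual's texts vs. a
# 70-million-sentence newspaper corpus.
individual_corpus = [["reading", "words", "activates", "memory"]]
newspaper_corpus = [["the", "newspaper", "reported", "the", "daily", "story"]]

individual_model = train(individual_corpus)   # individual memory structure
norm_model = train(newspaper_corpus)          # norm-based memory structure

# Long-term memory structure can then be probed as pairwise similarity.
print(individual_model.wv.similarity("reading", "memory"))
```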

Tasks: Language Modelling, Retrieval (+1 more)

Decomposing predictability: Semantic feature overlap between words and the dynamics of reading for meaning

no code implementations • 6 Dec 2019 Markus J. Hofmann, Mareike A. Kleemann, Andre Roelke, Christian Vorstius, Ralph Radach

Direct associations between stimulus words were controlled, and semantic feature overlap between prime and target words was manipulated by their common associates.
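
One way to quantify that manipulation is to score semantic feature overlap between a prime and a target as the proportion of free associates they share. This is an illustrative sketch with hypothetical associate norms, not the paper's stimulus set or exact measure:

```python
# Toy free-association norms: word -> set of its associates (placeholders)
associates = {
    "cat":  {"dog", "fur", "pet", "meow"},
    "lion": {"roar", "mane", "fur", "pet"},
}

def feature_overlap(prime, target):
    """Jaccard overlap of the two words' associate sets."""
    shared = associates[prime] & associates[target]
    union = associates[prime] | associates[target]
    return len(shared) / len(union)

print(feature_overlap("cat", "lion"))  # higher = more common associates
```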

Tasks: Sentence
