Search Results for author: Lars Kuchinke

Found 1 paper, 0 papers with code

Language Models Explain Word Reading Times Better Than Empirical Predictability

no code implementations · 2 Feb 2022 · Markus J. Hofmann, Steffen Remus, Chris Biemann, Ralph Radach, Lars Kuchinke

(3) In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentence.
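As an illustrative sketch only (not the paper's implementation), the following PyTorch snippet shows the idea described above: an RNN whose hidden units are trained to predict the next word from all preceding words. The toy vocabulary, GRU architecture, dimensions, and hyperparameters are assumptions for demonstration.

```python
# Minimal sketch, assuming a GRU-based RNN language model; all names,
# sizes, and the toy corpus are illustrative, not from the paper.
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) word indices
        hidden_states, _ = self.rnn(self.embed(tokens))
        # Logits over the vocabulary for the next word at each position
        return self.out(hidden_states)

# Toy example: "the cat sat on the mat", predicting each word from its context.
vocab = ["<pad>", "the", "cat", "sat", "on", "mat"]
ids = torch.tensor([[1, 2, 3, 4, 1, 5]])
model = RNNLanguageModel(vocab_size=len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    logits = model(ids[:, :-1])                    # predict from preceding words
    loss = loss_fn(logits.reshape(-1, len(vocab)), # compare to the actual next word
                   ids[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The per-word probabilities produced by such a model are the kind of predictability estimate that can then be related to word reading times.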

Tasks: Additive models · Retrieval · +4
