Search Results for author: Yohei Oseki

Found 16 papers, 7 papers with code

CMCL 2021 Shared Task on Eye-Tracking Prediction

no code implementations • NAACL (CMCL) 2021 • Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus

The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).
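A common baseline for this kind of prediction regresses the eye-tracking measures on surface features of each token. The sketch below is a minimal illustration under that assumption; the toy data, feature set (word length and log frequency), and Ridge model are hypothetical, not the shared task's official baseline.

```python
# Minimal sketch: predict a token-level reading measure (e.g., total
# reading time) from surface features. Toy data and features are
# illustrative assumptions, not the shared task's official baseline.
import numpy as np
from sklearn.linear_model import Ridge

tokens = ["The", "cognitive", "task", "measures", "reading"]
gold_trt = np.array([120.0, 310.0, 150.0, 260.0, 240.0])  # ms, made up
log_freq = np.array([6.5, 3.2, 5.1, 4.0, 4.4])            # made up

# Features: word length and log frequency, two classic predictors
# of reading times in the eye-tracking literature.
X = np.column_stack([[len(t) for t in tokens], log_freq])
model = Ridge(alpha=1.0).fit(X, gold_trt)
print(model.predict(X))  # fitted token-level predictions
```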

Tree-Planted Transformers: Large Language Models with Implicit Syntactic Supervision

no code implementations • 20 Feb 2024 • Ryo Yoshida, Taiga Someya, Yohei Oseki

Large Language Models (LLMs) have achieved remarkable success thanks to scalability on large text corpora, but suffer from drawbacks in training efficiency.

Continual Learning

Emergent Word Order Universals from Cognitively-Motivated Language Models

no code implementations • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

This also showcases the advantage of cognitively-motivated LMs, which are typically employed in cognitive modeling, in the computational simulation of language universals.

Psychometric Predictive Power of Large Language Models

1 code implementation • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.

JCoLA: Japanese Corpus of Linguistic Acceptability

2 code implementations • 22 Sep 2023 • Taiga Someya, Yushi Sugimoto, Yohei Oseki

In this paper, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments.

Linguistic Acceptability
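As a usage illustration: CoLA-style resources pair sentences with 0/1 acceptability labels and are conventionally scored with the Matthews correlation coefficient. The field names and two-example data below are hypothetical, not JCoLA's actual schema.

```python
# Sketch of consuming a CoLA-style acceptability dataset. The "sentence"/
# "label" field names and the two examples are hypothetical, not JCoLA's
# actual schema; MCC follows the CoLA evaluation convention.
from sklearn.metrics import matthews_corrcoef

data = [
    {"sentence": "太郎が本を読んだ。", "label": 1},  # acceptable
    {"sentence": "太郎が本が読んだ。", "label": 0},  # unacceptable
]
gold = [ex["label"] for ex in data]
pred = [1, 0]  # stand-in classifier outputs
print(matthews_corrcoef(gold, pred))
```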

Composition, Attention, or Both?

1 code implementation • 24 Oct 2022 • Ryo Yoshida, Yohei Oseki

In this paper, we propose a novel architecture called Composition Attention Grammars (CAGs) that recursively compose subtrees into a single vector representation with a composition function, and selectively attend to previous structural information with a self-attention mechanism.
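To make the two ingredients concrete, here is a toy sketch under assumptions: an LSTM stands in for the composition function that reduces a subtree's child vectors to one vector, and multi-head self-attention attends over previously composed states. The module choices and sizes are illustrative, not the authors' implementation (see the linked code for that).

```python
# Toy sketch of the two mechanisms the abstract names: composition of
# child vectors into one subtree vector, and self-attention over earlier
# structural states. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ToyCompositionAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Composition: summarize a variable-length span of child vectors.
        self.composer = nn.LSTM(d_model, d_model, batch_first=True)
        # Attention: let the new vector attend to earlier composed states.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def compose(self, children: torch.Tensor) -> torch.Tensor:
        # children: (1, n_children, d_model) -> (1, 1, d_model)
        _, (h, _) = self.composer(children)
        return h.transpose(0, 1)  # final hidden state as the subtree vector

    def attend(self, query: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # query: (1, 1, d_model); history: (1, t, d_model)
        out, _ = self.attn(query, history, history)
        return out

model = ToyCompositionAttention()
children = torch.randn(1, 3, 64)        # three child-node vectors
history = torch.randn(1, 5, 64)         # five previously composed states
subtree = model.compose(children)       # one vector for the subtree
contextual = model.attend(subtree, history)
print(subtree.shape, contextual.shape)  # torch.Size([1, 1, 64]) twice
```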

Context Limitations Make Neural Language Models More Human-Like

1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
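The complexity metric such studies most often use is per-token surprisal, -log2 p(token | context), from an autoregressive LM. Below is a minimal sketch using GPT-2 via Hugging Face transformers; GPT-2 is a stand-in for whichever LM a given study evaluates.

```python
# Minimal sketch of per-token surprisal, the standard information-
# theoretic complexity metric in reading-time modeling.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "The horse raced past the barn fell."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # (1, seq_len, vocab)

# Shift: the logits at position i predict token i+1.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
token_lp = log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
surprisal = -token_lp / torch.log(torch.tensor(2.0))  # nats -> bits

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()), surprisal):
    print(f"{tok:>10s}  {s.item():6.2f} bits")
```

Token-level surprisal values like these are then regressed against human reading measures to quantify how well an LM predicts cognitive load.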

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars

2 code implementations • EMNLP 2021 • Ryo Yoshida, Hiroshi Noji, Yohei Oseki

In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like.

Sentence

Lower Perplexity is Not Always Human-Like

1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.

Language Modelling

Design of BCCWJ-EEG: Balanced Corpus with Human Electroencephalography

no code implementations • LREC 2020 • Yohei Oseki, Masayuki Asahara

Importantly, this cross-fertilization between NLP, on one hand, and the cognitive (neuro)science of language, on the other, has been driven by language resources annotated with human language processing data.

EEG

Inverting and Modeling Morphological Inflection

no code implementations • WS 2019 • Yohei Oseki, Yasutada Sudo, Hiromu Sakai, Alec Marantz

Previous "wug" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to "correct" past tense forms predicted by rules of existent verbs (de Chene, 1982; Vance, 1987, 1991; Klafehn, 2003, 2013), indicating that Japanese verbs are merely stored in the mental lexicon.

Morphological Inflection

Modeling Hierarchical Syntactic Structures in Morphological Processing

no code implementations • WS 2019 • Yohei Oseki, Charles Yang, Alec Marantz

Sentences are represented as hierarchical syntactic structures, which have been successfully modeled in sentence processing.

Sentence
