Search Results for author: Tatsuki Kuribayashi

Found 23 papers, 13 papers with code

Emergent Word Order Universals from Cognitively-Motivated Language Models

no code implementations • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

This also showcases the advantage of cognitively-motivated LMs, which are typically employed in cognitive modeling, in the computational simulation of language universals.

Psychometric Predictive Power of Large Language Models

no code implementations • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

Next-word probabilities from language models have been shown to successfully simulate human reading behavior.
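As a rough illustration of the setup this line of work builds on, the sketch below derives per-token surprisal from an autoregressive LM's next-word probabilities, the quantity typically correlated with human reading times. The model choice (gpt2) and the example sentence are illustrative assumptions, not details taken from the paper.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice; the paper evaluates LLMs more broadly.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    sentence = "The editor that the reporter admired filed the story."
    enc = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        logits = model(**enc).logits  # shape: (1, seq_len, vocab_size)

    # Surprisal of token t is -log2 p(token_t | tokens_<t); shift logits by one position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = enc["input_ids"][0, 1:]
    surprisal_bits = -log_probs[torch.arange(targets.size(0)), targets] / torch.log(torch.tensor(2.0))

    for tok, s in zip(targets, surprisal_bits):
        print(f"{tokenizer.decode(tok):>12s}  {s.item():5.2f} bits")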

Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

1 code implementation • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama

Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting; a minimal prompt sketch follows this entry.

Logical Reasoning, Negation
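For readers unfamiliar with the prompting styles mentioned in the entry above, here is a hypothetical sketch contrasting a direct prompt with a chain-of-thought prompt for a syllogism involving lexical negation; the premises and wording are invented for illustration and are not taken from the paper or its data.

    # Hypothetical prompts illustrating direct vs. step-by-step (CoT) querying.
    premises = (
        "Premise 1: No glimmers are harmless.\n"
        "Premise 2: All wugs are glimmers.\n"
        "Question: Are any wugs harmless? Answer yes, no, or unknown."
    )

    direct_prompt = premises + "\nAnswer:"

    # CoT prompting adds an instruction to reason step by step, so the model is
    # expected to verbalize intermediate steps (e.g., "all wugs are glimmers;
    # no glimmers are harmless; therefore no wugs are harmless") before answering.
    cot_prompt = premises + "\nLet's think step by step."

    print(direct_prompt)
    print(cot_prompt)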

Second Language Acquisition of Neural Language Models

1 code implementation • 5 Jun 2023 • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe

With the success of neural language models (LMs), their language acquisition has gained much attention.

Cross-Lingual Transfer, Language Acquisition

Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?

no code implementations • 1 Feb 2023 • Tatsuki Kuribayashi

Neural language models (LMs) are arguably less data-efficient than humans -- why does this gap occur?

Language Acquisition

Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Map

no code implementations • 1 Feb 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui

Given that Transformers are ubiquitous across a wide range of tasks, interpreting their internals is a pivotal issue.

Context Limitations Make Neural Language Models More Human-Like

1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
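The complexity metric referred to here is typically surprisal; the LaTeX snippet below states the standard definition together with a context-limited variant suggested by the paper's title, where the cutoff k is an illustrative symbol rather than the paper's own notation.

    % Standard surprisal from an LM's next-word probabilities:
    I(w_t) = -\log_2 p(w_t \mid w_1, \dots, w_{t-1})
    % Context-limited variant: condition only on the k most recent words
    % (k is an illustrative parameter name, not the paper's notation):
    I_k(w_t) = -\log_2 p(w_t \mid w_{t-k}, \dots, w_{t-1})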

Lower Perplexity is Not Always Human-Like

1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.

Language Modelling

Langsmith: An Interactive Academic Text Revision System

no code implementations • EMNLP 2020 • Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui

Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.

Attention is Not Only a Weight: Analyzing Transformers with Vector Norms

1 code implementation • EMNLP 2020 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui

Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing; a toy sketch of the norm-based analysis follows this entry.

Machine Translation, Translation +1
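The title refers to a norm-based analysis of attention; the toy sketch below contrasts raw attention weights with the norms of the weighted, transformed input vectors, ||alpha_ij f(x_j)||. The tensor shapes, random inputs, and single-head transform are illustrative stand-ins, not the authors' released code.

    import torch

    torch.manual_seed(0)
    d_model, seq_len = 8, 5

    x = torch.randn(seq_len, d_model)                              # token representations
    alpha = torch.softmax(torch.randn(seq_len, seq_len), dim=-1)   # attention weights
    W_v = torch.randn(d_model, d_model)                            # value projection (toy, single head)
    W_o = torch.randn(d_model, d_model)                            # output projection (toy)

    f_x = x @ W_v @ W_o                                  # f(x_j): transformed inputs
    weighted = alpha.unsqueeze(-1) * f_x.unsqueeze(0)    # alpha_ij * f(x_j)
    norms = weighted.norm(dim=-1)                        # ||alpha_ij * f(x_j)||, shape (i, j)

    # norms[i, j] reflects how much token j actually contributes to position i,
    # which can diverge from alpha[i, j] when ||f(x_j)|| varies across tokens.
    print(alpha)
    print(norms)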

An Empirical Study of Span Representations in Argumentation Structure Parsing

no code implementations • ACL 2019 • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui

For several natural language processing (NLP) tasks, span representation design is attracting considerable attention as a promising new technique; a common basis for an effective design has been established.
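As an indication of what a span representation can look like, the sketch below implements one common design, concatenating the contextualized boundary states of a span; the paper compares several such designs empirically, and this specific variant is shown only as an assumption for illustration.

    import torch

    torch.manual_seed(0)
    seq_len, hidden = 10, 16
    h = torch.randn(seq_len, hidden)   # contextualized token vectors (e.g., from a BiLSTM)

    def span_repr(h, start, end):
        """Represent the span covering tokens start..end by concatenating its boundary states."""
        return torch.cat([h[start], h[end]], dim=-1)

    vec = span_repr(h, 2, 5)           # representation of the span over tokens 2..5
    print(vec.shape)                   # torch.Size([32])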

Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Argument Templates

1 code implementation • WS 2018 • Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui

Most existing work on argument mining casts the problem of argumentative structure identification as classification tasks (e.g., attack-support relations, stance, explicit premise/claim).

Argument Mining, Document Summarization +2
