Search Results for author: Tatsuki Kuribayashi

Found 24 papers, 15 papers with code

Attention is Not Only a Weight: Analyzing Transformers with Vector Norms

1 code implementation EMNLP 2020 Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui

Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.

Tasks: Machine Translation, Translation +1

Lower Perplexity is Not Always Human-Like

1 code implementation ACL 2021 Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.

Tasks: Language Modelling

Context Limitations Make Neural Language Models More Human-Like

1 code implementation, 23 May 2022: Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.

Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Argument Templates

1 code implementation WS 2018 Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui

Most existing work on argument mining casts the problem of argumentative structure identification as a set of classification tasks (e.g., attack-support relations, stance, explicit premise/claim).

Tasks: Argument Mining, Document Summarization +2

An Empirical Study of Span Representations in Argumentation Structure Parsing

no code implementations ACL 2019 Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui

For several natural language processing (NLP) tasks, span representation design is attracting considerable attention as a promising new technique, and a common basis for an effective design has been established.

Langsmith: An Interactive Academic Text Revision System

no code implementations EMNLP 2020 Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui

Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.

Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?

no code implementations, 1 Feb 2023: Tatsuki Kuribayashi

Neural language models (LMs) are arguably less data-efficient than humans -- why does this gap occur?

Tasks: Language Acquisition

Second Language Acquisition of Neural Language Models

1 code implementation, 5 Jun 2023: Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe

With the success of neural language models (LMs), their language acquisition has gained much attention.

Tasks: Cross-Lingual Transfer, Language Acquisition

Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

1 code implementation, 23 Oct 2023: Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama

Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.

Tasks: Logical Reasoning, Negation

Psychometric Predictive Power of Large Language Models

1 code implementation, 13 Nov 2023: Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.

Emergent Word Order Universals from Cognitively-Motivated Language Models

no code implementations, 19 Feb 2024: Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

This also showcases the advantage of cognitively-motivated LMs, which are typically employed in cognitive modeling, in the computational simulation of language universals.

To Drop or Not to Drop? Predicting Argument Ellipsis Judgments: A Case Study in Japanese

no code implementations, 17 Apr 2024: Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano, Kentaro Inui

Speakers sometimes omit certain arguments of a predicate in a sentence; such omission is especially frequent in pro-drop languages such as Japanese.

Tasks: Language Modelling, Sentence
