Search Results for author: Tatsuki Kuribayashi

Found 32 papers, 18 papers with code

Syntactic Learnability of Echo State Neural Language Models at Scale

no code implementations • 3 Mar 2025 • Ryo Ueda, Tatsuki Kuribayashi, Shunsuke Kando, Kentaro Inui

What is a neural model with minimum architectural complexity that exhibits reasonable language learning capability?

Language Modeling

Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases

1 code implementation • 20 Feb 2025 • Rena Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau

This study evaluates the ability of Large Language Models (LLMs) to simulate the non-native-like English use observed in human second language (L2) learners, whose English is influenced by their native first language (L1).

Dialogue Generation

On Representational Dissociation of Language and Arithmetic in Large Language Models

no code implementations • 17 Feb 2025 • Riku Kisako, Tatsuki Kuribayashi, Ryohei Sasano

The association between language and (non-linguistic) thinking ability in humans has long been debated, and recently, neuroscientific evidence of brain activity patterns has been considered.

Arithmetic Reasoning

Can Input Attributions Interpret the Inductive Reasoning Process Elicited in In-Context Learning?

no code implementations • 20 Dec 2024 • Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki

Elucidating the rationale behind neural models' outputs has long been a challenge in machine learning, and it remains so in the age of large language models (LLMs) and in-context learning (ICL).

Diagnostic In-Context Learning

Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Arithmetic Reasoning

no code implementations • 2 Dec 2024 • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Ana Brassard, Keisuke Sakaguchi, Kentaro Inui

This study investigates the internal reasoning process of language models during arithmetic multi-step reasoning, motivated by the question of when they internally form their answers during reasoning.

Arithmetic Reasoning

First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning

1 code implementation • 23 Jun 2024 • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui

Multi-step reasoning instructions, such as chain-of-thought prompting, are widely adopted to elicit better performance from language models (LMs).

Language Modeling

CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark

no code implementations • 10 Jun 2024 • David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abzaliev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio Villa-Cueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan, Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Jocelyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe, Luciana Benotti, Luis Fernando D'Haro, Marcelo Viridiano, Marcos Estecha-Garitagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori, Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi, Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora, Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Teresa Clifford, Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo, Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, Alham Fikri Aji

Visual Question Answering (VQA) is an important task in multimodal AI, and it is often used to test the ability of vision-language models to understand and reason on knowledge present in both visual and textual data.

Diversity Question Answering +1

Emergent Word Order Universals from Cognitively-Motivated Language Models

1 code implementation • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

The world's languages exhibit certain so-called typological or implicational universals; for example, Subject-Object-Verb (SOV) languages typically use postpositions.

Psychometric Predictive Power of Large Language Models

1 code implementation • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.

Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

no code implementations • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama

Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.

Logical Reasoning Negation

Second Language Acquisition of Neural Language Models

1 code implementation • 5 Jun 2023 • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe

With the success of neural language models (LMs), their language acquisition has gained much attention.

Cross-Lingual Transfer Language Acquisition

Does Vision Accelerate Hierarchical Generalization in Neural Language Learners?

no code implementations • 1 Feb 2023 • Tatsuki Kuribayashi, Timothy Baldwin

This study explores the advantage of grounded language acquisition, specifically the impact of visual information -- which humans can usually rely on but LMs largely do not have access to during language acquisition -- on syntactic generalization in LMs.

cross-modal alignment Language Acquisition +1

Context Limitations Make Neural Language Models More Human-Like

1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
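The complexity metric referred to here is typically surprisal: the negative log-probability an LM assigns to each word given its preceding context, with higher surprisal taken as a proxy for greater reading difficulty. A minimal stdlib-only sketch of the computation, using a made-up bigram probability table rather than any model from the paper:

```python
import math

# Toy bigram "language model": P(next word | previous word).
# These probabilities are invented purely for illustration.
BIGRAM_PROBS = {
    ("the", "cat"): 0.2,
    ("cat", "sat"): 0.1,
    ("sat", "down"): 0.4,
}

def surprisal(prev: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | prev)."""
    p = BIGRAM_PROBS.get((prev, word))
    if p is None:
        raise KeyError(f"unseen bigram: {prev} {word}")
    return -math.log2(p)

sentence = ["the", "cat", "sat", "down"]
# Per-word surprisal for each word given its predecessor;
# e.g. surprisal("the", "cat") = -log2(0.2) ≈ 2.32 bits.
loads = [surprisal(p, w) for p, w in zip(sentence, sentence[1:])]
```

In cognitive modeling these probabilities come from an actual trained LM; the paper's finding, per its title, is that limiting how much context the model conditions on makes such estimates more human-like.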

Lower Perplexity is Not Always Human-Like

1 code implementation ACL 2021 Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.

Language Modeling

Langsmith: An Interactive Academic Text Revision System

no code implementations EMNLP 2020 Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui

Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.

Diversity

Attention is Not Only a Weight: Analyzing Transformers with Vector Norms

1 code implementation EMNLP 2020 Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui

Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.
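As the title suggests, the paper's observation is that an attention weight alone ignores the magnitude of the vector it scales, so the analysis instead measures the norm of the weighted vector. A toy illustration of that norm-based view, with invented numbers rather than the paper's model or data:

```python
import math

def norm(v):
    """Euclidean norm of a vector given as a list of floats."""
    return math.sqrt(sum(x * x for x in v))

# Hypothetical attention weights from one query token to three keys,
# and the (already transformed) value vectors for those keys.
attn_weights = [0.6, 0.3, 0.1]
value_vectors = [
    [0.1, 0.1],   # near-zero vector, despite the largest weight
    [2.0, 2.0],   # large-norm vector
    [1.0, 0.0],
]

# Weight-only view: token 0 looks most influential (weight 0.6).
# Norm-based view, ||alpha * v||: token 1 dominates instead.
contributions = [w * norm(v) for w, v in zip(attn_weights, value_vectors)]
```

With these toy values the largest weight (0.6) multiplies a near-zero vector, so the norm-based measure attributes most of the output to the second token, illustrating why weights and actual contributions can disagree.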

Machine Translation Translation +1

An Empirical Study of Span Representations in Argumentation Structure Parsing

no code implementations ACL 2019 Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui

For several natural language processing (NLP) tasks, span representation design is attracting considerable attention as a promising new technique; a common basis for an effective design has been established.

Feasible Annotation Scheme for Capturing Policy Argument Reasoning using Argument Templates

1 code implementation WS 2018 Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui

Most of the existing works on argument mining cast the problem of argumentative structure identification as classification tasks (e.g., attack-support relations, stance, explicit premise/claim).

Argument Mining Document Summarization +2
