no code implementations • COLING 2022 • Riki Fujihara, Tatsuki Kuribayashi, Kaori Abe, Ryoko Tokuhisa, Kentaro Inui
Humans use different wordings depending on the context to facilitate efficient communication.
no code implementations • 3 Mar 2025 • Ryo Ueda, Tatsuki Kuribayashi, Shunsuke Kando, Kentaro Inui
What is a neural model with minimal architectural complexity that still exhibits reasonable language learning capability?
1 code implementation • 20 Feb 2025 • Rena Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau
This study evaluates the ability of Large Language Models (LLMs) to simulate the non-native-like English use observed in human second language (L2) learners whose English shows interference from their native first language (L1).
no code implementations • 17 Feb 2025 • Riku Kisako, Tatsuki Kuribayashi, Ryohei Sasano
The association between language and (non-linguistic) thinking ability in humans has long been debated, and neuroscientific evidence from brain activity patterns has recently been brought to bear on this question.
no code implementations • 24 Dec 2024 • Haonan Li, Xudong Han, Zenan Zhai, Honglin Mu, Hao Wang, Zhenxuan Zhang, Yilin Geng, Shom Lin, Renxi Wang, Artem Shelmanov, Xiangyu Qi, Yuxia Wang, Donghai Hong, Youliang Yuan, Meng Chen, Haoqin Tu, Fajri Koto, Tatsuki Kuribayashi, Cong Zeng, Rishabh Bhardwaj, Bingchen Zhao, Yawen Duan, Yi Liu, Emad A. Alghamdi, Yaodong Yang, Yinpeng Dong, Soujanya Poria, PengFei Liu, Zhengzhong Liu, Xuguang Ren, Eduard Hovy, Iryna Gurevych, Preslav Nakov, Monojit Choudhury, Timothy Baldwin
To address this gap, we introduce Libra-Leaderboard, a comprehensive framework designed to rank LLMs through a balanced evaluation of performance and safety.
no code implementations • 20 Dec 2024 • Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki
Elucidating the rationale behind neural models' outputs has long been a challenge in machine learning, and it remains so in the age of large language models (LLMs) and in-context learning (ICL).
no code implementations • 2 Dec 2024 • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Ana Brassard, Keisuke Sakaguchi, Kentaro Inui
This study investigates the internal reasoning process of language models during arithmetic multi-step reasoning, motivated by the question of when they internally form their answers during reasoning.
1 code implementation • 23 Jun 2024 • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui
Multi-step reasoning instructions, such as chain-of-thought prompting, are widely adopted to elicit better language model (LM) performance.
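To make concrete what such a multi-step reasoning instruction looks like, the minimal sketch below contrasts a direct prompt with a chain-of-thought-style prompt for the same question; the wording and the arithmetic example are illustrative assumptions, not the prompts used in the paper.

```python
# Minimal sketch (illustrative, not the paper's prompts): a direct prompt vs. a
# chain-of-thought (CoT) prompt for the same arithmetic question.
question = "A book costs 7 dollars and a pen costs 2 dollars. How much do 3 books and 4 pens cost?"

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    "Q: A ticket costs 12 dollars. How much do 5 tickets cost?\n"
    "A: Let's think step by step. 5 tickets cost 5 * 12 = 60 dollars. The answer is 60.\n\n"
    f"Q: {question}\n"
    "A: Let's think step by step."
)

# Both strings would be fed to the same causal LM; the CoT version demonstrates
# intermediate steps in an exemplar and asks the model to produce them as well.
print(direct_prompt)
print(cot_prompt)
```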
no code implementations • 10 Jun 2024 • David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abzaliev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio Villa-Cueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan, Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Jocelyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe, Luciana Benotti, Luis Fernando D'Haro, Marcelo Viridiano, Marcos Estecha-Garitagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori, Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi, Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora, Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Teresa Clifford, Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo, Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, Alham Fikri Aji
Visual Question Answering (VQA) is an important task in multimodal AI, and it is often used to test the ability of vision-language models to understand and reason over knowledge present in both visual and textual data.
1 code implementation • 17 Apr 2024 • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano, Kentaro Inui
Speakers sometimes omit certain arguments of a predicate in a sentence; such omission is especially frequent in pro-drop languages.
1 code implementation • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin
The world's languages exhibit certain so-called typological or implicational universals; for example, Subject-Object-Verb (SOV) languages typically use postpositions.
1 code implementation • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.
no code implementations • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama
Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.
1 code implementation • 5 Jun 2023 • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe
With the success of neural language models (LMs), their language acquisition has gained much attention.
no code implementations • 29 May 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The prediction head is a crucial component of Transformer language models.
1 code implementation • 16 Feb 2023 • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Neural reasoning accuracy improves when generating intermediate reasoning steps.
1 code implementation • 15 Feb 2023 • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Compositionality is a pivotal property of symbolic reasoning.
1 code implementation • 1 Feb 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Transformers are ubiquitous across a wide range of tasks.
no code implementations • 1 Feb 2023 • Tatsuki Kuribayashi, Timothy Baldwin
This study explores the advantage of grounded language acquisition, specifically the impact of visual information -- which humans can usually rely on but LMs largely do not have access to during language acquisition -- on syntactic generalization in LMs.
1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
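As a rough illustration of such a complexity metric, the sketch below computes per-token surprisal (negative log probability) under GPT-2 via the Hugging Face transformers library; the model choice and example sentence are assumptions for illustration, not the exact setup of the paper.

```python
# Minimal sketch: per-token surprisal under a causal LM (illustrative model and
# sentence; not necessarily the configuration used in the paper).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

sentence = "The old man the boat."
ids = tokenizer(sentence, return_tensors="pt")["input_ids"]

with torch.no_grad():
    logits = model(input_ids=ids).logits  # (1, seq_len, vocab_size)

log_probs = torch.log_softmax(logits, dim=-1)
# Surprisal of token t is -log2 p(token_t | tokens_<t); the logits at position
# t-1 give the distribution over the next token, so we start from position 1.
for t in range(1, ids.size(1)):
    surprisal = -log_probs[0, t - 1, ids[0, t]] / torch.log(torch.tensor(2.0))
    print(f"{tokenizer.decode(ids[0, t].item())!r}\t{surprisal.item():.2f} bits")
```

Higher surprisal means the word was less predictable from its left context; such values are typically regressed against human reading times.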
no code implementations • 28 Sep 2021 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui
Interpretable rationales for model predictions are crucial in practical applications.
2 code implementations • EMNLP 2021 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The Transformer architecture has become ubiquitous in the natural language processing field.
1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui
Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.
no code implementations • COLING 2020 • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui
Events in a narrative differ in salience: some are more important to the story than others.
no code implementations • EMNLP 2020 • Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui
Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.
1 code implementation • ACL 2020 • Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui
We examine a methodology that uses neural language models (LMs) to analyze the word order of languages.
1 code implementation • ACL 2020 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno, Kentaro Inui
Interpretable rationales for model predictions play a critical role in practical applications.
1 code implementation • EMNLP 2020 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.
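For reference, the sketch below shows the scaled dot-product attention weights that such analyses typically start from; the shapes and random inputs are purely illustrative and are not tied to any model examined in the paper.

```python
# Minimal sketch of scaled dot-product attention (illustrative shapes only).
import numpy as np

def attention_weights(Q, K):
    """Q, K: (seq_len, d). Returns a row-stochastic (seq_len, seq_len) matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
A = attention_weights(Q, K)
output = A @ V  # each output row is a weighted mixture of value vectors
# Analyses of attention can inspect the weights A directly, or, as one
# alternative, the norms of the weighted value vectors A[i, j] * V[j].
print(A.round(2))
```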
1 code implementation • WS 2019 • Takumi Ito, Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui
The writing process consists of several stages such as drafting, revising, editing, and proofreading.
1 code implementation • IJCNLP 2019 • Masato Hagiwara, Takumi Ito, Tatsuki Kuribayashi, Jun Suzuki, Kentaro Inui
Language technologies play a key role in assisting people with their writing.
no code implementations • ACL 2019 • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui
In several natural language processing (NLP) tasks, the design of span representations is attracting considerable attention as a promising technique, and a common basis for effective designs has been established.
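As one concrete instance of such a common basis, the sketch below builds an endpoint-plus-content span representation from token vectors; the exact features and pooling used here are illustrative assumptions rather than the specific designs compared in the paper.

```python
# Minimal sketch of an endpoint-based span representation (one common design;
# shapes and the pooling choice are illustrative, not the paper's exact setup).
import torch

def span_representation(hidden, start, end):
    """hidden: (seq_len, d) token vectors; the span covers tokens [start, end] inclusive."""
    endpoints = torch.cat([hidden[start], hidden[end]])  # boundary token features
    content = hidden[start:end + 1].mean(dim=0)          # pooled span content
    return torch.cat([endpoints, content])               # shape: (3 * d,)

hidden = torch.randn(10, 8)  # e.g., encoder outputs for a 10-token sentence
vec = span_representation(hidden, start=2, end=5)
print(vec.shape)  # torch.Size([24])
```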
1 code implementation • WS 2018 • Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui
Most existing work on argument mining casts the problem of argumentative structure identification as a classification task (e.g., attack-support relations, stance, explicit premise/claim).