no code implementations • COLING 2022 • Riki Fujihara, Tatsuki Kuribayashi, Kaori Abe, Ryoko Tokuhisa, Kentaro Inui
Humans use different wordings depending on the context to facilitate efficient communication.
no code implementations • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin
This also showcases the advantage of cognitively motivated LMs, which are typically employed in cognitive modeling, in the computational simulation of language universals.
no code implementations • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
Next-word probabilities from language models have been shown to successfully simulate human reading behavior.
1 code implementation • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama
Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.
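The CoT prompting mentioned in this entry prepends worked exemplars whose answers spell out intermediate reasoning steps before the final answer. A minimal illustrative sketch in Python (the exemplar and query below are hypothetical, not taken from the paper):

```python
# A hypothetical few-shot chain-of-thought (CoT) prompt: the demonstration
# answer walks through intermediate steps, and the model is asked to
# continue in the same style for a new question.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?\n"
    "A:"  # the LLM is expected to generate reasoning steps before the answer
)
print(cot_prompt)
```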
1 code implementation • 5 Jun 2023 • Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe
With the success of neural language models (LMs), the way these models acquire language has gained much attention.
no code implementations • 29 May 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The prediction head is a crucial component of Transformer language models.
1 code implementation • 16 Feb 2023 • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Neural reasoning accuracy improves when generating intermediate reasoning steps.
1 code implementation • 15 Feb 2023 • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Compositionality is a pivotal property of symbolic reasoning.
no code implementations • 1 Feb 2023 • Tatsuki Kuribayashi
Neural language models (LMs) are arguably less data-efficient than humans -- why does this gap occur?
no code implementations • 1 Feb 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Given that Transformers are ubiquitous across a wide range of tasks, interpreting their internals is a pivotal issue.
1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
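The complexity metric referred to here is typically surprisal, -log2 p(word | context), which correlates with human reading times. As a minimal sketch (not the authors' code), per-token surprisal can be computed from any autoregressive LM via Hugging Face transformers:

```python
# Minimal sketch: per-token surprisal -log2 p(w_t | w_<t) from GPT-2.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("The editor the writer admired left early.",
                return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits            # shape: (1, seq_len, vocab)

# Logits at position t predict token t+1, so shift targets by one.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
surprisal = -log_probs[torch.arange(targets.numel()), targets] / math.log(2)

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets.tolist()), surprisal):
    print(f"{tok!r:>12}  {s.item():5.2f} bits")
```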
no code implementations • 28 Sep 2021 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui
Interpretable rationales for model predictions are crucial in practical applications.
2 code implementations • EMNLP 2021 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The Transformer architecture has become ubiquitous in the field of natural language processing.
1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui
Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.
no code implementations • COLING 2020 • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui
Events in a narrative differ in salience: some are more important to the story than others.
no code implementations • EMNLP 2020 • Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui
Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.
1 code implementation • ACL 2020 • Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui
We examine a methodology using neural language models (LMs) for analyzing the word order of language.
1 code implementation • ACL 2020 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno, Kentaro Inui
Interpretable rationales for model predictions play a critical role in practical applications.
1 code implementation • EMNLP 2020 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.
1 code implementation • WS 2019 • Takumi Ito, Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui
The writing process consists of several stages such as drafting, revising, editing, and proofreading.
1 code implementation • IJCNLP 2019 • Masato Hagiwara, Takumi Ito, Tatsuki Kuribayashi, Jun Suzuki, Kentaro Inui
Language technologies play a key role in assisting people with their writing.
no code implementations • ACL 2019 • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui
For several natural language processing (NLP) tasks, span representation design is attracting considerable attention as a promising new technique, and a common basis for an effective design has been established.
1 code implementation • WS 2018 • Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui
Most existing work on argument mining casts the problem of argumentative structure identification as a set of classification tasks (e.g., attack-support relations, stance, explicit premise/claim).