1 code implementation • EMNLP (BlackboxNLP) 2021 • Hitomi Yanaka, Koji Mineshima
Despite the success of multilingual pre-trained language models, it remains unclear to what extent these models have human-like generalization capacity across languages.
no code implementations • ISA (LREC) 2022 • Kana Koyano, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki
We also construct an inference test set for numerical expressions based on this annotated corpus.
1 code implementation • 3 Apr 2024 • Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, Yutaka Matsuo
Additionally, we tamper with fewer than 1% of the total neurons in each model during inference and demonstrate that tampering with a few language-specific neurons drastically changes the probability of the target language occurring in generated text.
1 code implementation • 27 Jun 2023 • Ryo Sekizawa, Nan Duan, Shuai Lu, Hitomi Yanaka
Code search is the task of finding code snippets that semantically match a given natural language query.
1 code implementation • 19 Jun 2023 • Tomoki Sugimoto, Yasumasa Onoe, Hitomi Yanaka
Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs).
no code implementations • 5 Jun 2023 • Ryo Sekizawa, Hitomi Yanaka
Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships.
1 code implementation • 4 Jun 2023 • Tomoya Kurosawa, Hitomi Yanaka
In the experiments, after testing the contribution of character-level information, we compare F1-scores when the word order is shuffled and when character sequences are randomized.
no code implementations • 28 Feb 2023 • Daisuke Bekki, Hitomi Yanaka
The Japanese CCGBank serves as training and evaluation data for developing Japanese CCG parsers.
1 code implementation • 9 Aug 2022 • Hitomi Yanaka, Koji Mineshima
We also present a stress-test dataset for compositional inference, created by transforming syntactic structures of sentences in JSICK to investigate whether language models are sensitive to word order and case particles.
1 code implementation • ACL 2022 • Tomoki Sugimoto, Hitomi Yanaka
We evaluate our system by experimenting with Japanese NLI datasets that involve temporal order.
1 code implementation • ACL 2022 • Tomoya Kurosawa, Hitomi Yanaka
Recently, the Natural Language Inference (NLI) task has been studied for semi-structured tables that lack a strict format.
1 code implementation • ACL (mmsr, IWCS) 2021 • Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki
This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions.
no code implementations • Findings (ACL) 2021 • Masato Mita, Hitomi Yanaka
There has been an increased interest in data generation approaches to grammatical error correction (GEC) using pseudo data.
1 code implementation • Findings (ACL) 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui
We also find that the generalization performance to unseen combinations is better when the form of meaning representations is simpler.
1 code implementation • EACL 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui
Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear.
1 code implementation • ACL 2020 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui
This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
1 code implementation • WS 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
Monotonicity reasoning is an important skill for any intelligent natural language inference (NLI) model, as it requires the ability to capture the interaction between lexical and syntactic structures.
no code implementations • ACL 2019 • Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki
A large body of research on multimodal inference across text and vision has recently emerged, aiming to obtain visually grounded word and sentence representations.
1 code implementation • SEMEVAL 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena.
no code implementations • WS 2018 • Kana Manome, Masashi Yoshikawa, Hitomi Yanaka, Pascual Martínez-Gómez, Koji Mineshima, Daisuke Bekki
In this paper, we present a sequence-to-sequence model for generating sentences from logical meaning representations based on event semantics.
1 code implementation • NAACL 2018 • Hitomi Yanaka, Koji Mineshima, Pascual Martinez-Gomez, Daisuke Bekki
How to identify, extract, and use phrasal knowledge is a crucial problem for the task of Recognizing Textual Entailment (RTE).
1 code implementation • EMNLP 2017 • Hitomi Yanaka, Koji Mineshima, Pascual Martinez-Gomez, Daisuke Bekki
Determining semantic textual similarity is a core research subject in natural language processing.