Search Results for author: Hitomi Yanaka

Found 22 papers, 16 papers with code

On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons

1 code implementation • 3 Apr 2024 • Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, Yutaka Matsuo

Additionally, we tamper with less than 1% of the total neurons in each model during inference and demonstrate that tampering with a few language-specific neurons drastically changes the probability of target language occurrence in text generation.

Text Generation
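The intervention described in the abstract can be sketched with a toy forward pass: scale the activations of a chosen set of neuron indices before projecting to the vocabulary, and observe how the output distribution shifts. This is a minimal numpy illustration under invented dimensions and indices, not the paper's actual models or neuron-selection method:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_token_probs(hidden, w_out, tamper_idx=None, scale=0.0):
    """Project a hidden state to vocabulary probabilities, optionally
    scaling a small set of 'language-specific' neurons first."""
    h = hidden.copy()
    if tamper_idx is not None:
        h[tamper_idx] *= scale  # suppress (scale=0) or amplify selected neurons
    return softmax(w_out @ h)

rng = np.random.default_rng(0)
hidden = rng.normal(size=64)          # toy hidden state
w_out = rng.normal(size=(10, 64))     # toy output projection (10-word vocab)

baseline = next_token_probs(hidden, w_out)
tampered = next_token_probs(hidden, w_out, tamper_idx=[3, 17], scale=0.0)
```

Even zeroing two of 64 toy neurons visibly shifts the distribution; the paper reports the analogous effect at real model scale, touching under 1% of neurons.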

Constructing Multilingual Code Search Dataset Using Neural Machine Translation

1 code implementation • 27 Jun 2023 • Ryo Sekizawa, Nan Duan, Shuai Lu, Hitomi Yanaka

Code search is the task of finding program code that semantically matches a given natural language query.

Code Search • Machine Translation • +2
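As an illustration of the task itself, a minimal retrieval loop scores candidate snippets against a query by embedding similarity. This is a toy bag-of-words sketch with invented snippets; real code search systems use neural encoders:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; splits on non-alphanumerics so that
    # identifiers like read_file share tokens with queries mentioning "read".
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, snippets):
    # Return the snippet most similar to the natural language query.
    q = embed(query)
    return max(snippets, key=lambda s: cosine(q, embed(s)))

snippets = [
    "def add(a, b): return a + b",
    "def read_file(path): return open(path).read()",
]
best = search("read the contents of a file", snippets)
```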

Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models

1 code implementation • 19 Jun 2023 • Tomoki Sugimoto, Yasumasa Onoe, Hitomi Yanaka

Natural Language Inference (NLI) tasks involving temporal inference remain challenging for pre-trained language models (LMs).

Natural Language Inference
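A toy version of temporal inference grounds events on a timeline and checks an ordering claim. The events and dates below are invented for illustration; Jamp's examples are controlled Japanese NLI sentence pairs, not date lookups:

```python
from datetime import date

def entails_before(earlier, later, timeline):
    """The claim 'earlier happened before later' is entailed
    iff the timeline orders the two events that way."""
    return timeline[earlier] < timeline[later]

timeline = {
    "graduation": date(2020, 3, 25),
    "employment": date(2020, 4, 1),
}
# Hypothesis: "graduation happened before employment"
entailed = entails_before("graduation", "employment", timeline)
```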

Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion

no code implementations • 5 Jun 2023 • Ryo Sekizawa, Hitomi Yanaka

Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships.


Does Character-level Information Always Improve DRS-based Semantic Parsing?

1 code implementation • 4 Jun 2023 • Tomoya Kurosawa, Hitomi Yanaka

In the experiments, after measuring the performance contribution of character-level information, we compare F1 scores obtained by shuffling character order and by randomizing character sequences.

Semantic Parsing

Is Japanese CCGBank empirically correct? A case study of passive and causative constructions

no code implementations • 28 Feb 2023 • Daisuke Bekki, Hitomi Yanaka

The Japanese CCGBank serves as training and evaluation data for developing Japanese CCG parsers.

Semantic Parsing

Compositional Evaluation on Japanese Textual Entailment and Similarity

1 code implementation • 9 Aug 2022 • Hitomi Yanaka, Koji Mineshima

We also present a stress-test dataset for compositional inference, created by transforming syntactic structures of sentences in JSICK to investigate whether language models are sensitive to word order and case particles.

Natural Language Inference • Semantic Textual Similarity • +1
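Sensitivity to case particles, one of the phenomena probed above, can be illustrated by swapping the subject marker が and the object marker を: the vocabulary stays identical while who-did-what-to-whom flips. This is a toy sketch; the actual JSICK stress-test transformations are linguistically controlled:

```python
def swap_case_particles(tokens):
    """Swap subject marker が (ga) and object marker を (o) to build a
    meaning-changed variant with exactly the same words."""
    mapping = {"が": "を", "を": "が"}
    return [mapping.get(t, t) for t in tokens]

tokens = ["太郎", "が", "花子", "を", "見た"]   # "Taro saw Hanako"
variant = swap_case_particles(tokens)           # reads as "Hanako saw Taro"
```

A model that ignores case particles will assign the same representation to both sequences, which is exactly what such a stress test is designed to expose.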

Logical Inference for Counting on Semi-structured Tables

1 code implementation • ACL 2022 • Tomoya Kurosawa, Hitomi Yanaka

Recently, the Natural Language Inference (NLI) task has been studied for semi-structured tables that do not have a strict format.

Natural Language Inference
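At its simplest, verifying a counting claim over a semi-structured table reduces to filtering and counting rows. The table and hypothesis below are invented for illustration; the paper's system performs logical inference rather than this direct lookup:

```python
def count_rows(table, column, value):
    """Count rows whose given column equals the value."""
    return sum(1 for row in table if row.get(column) == value)

table = [
    {"name": "Alice", "country": "Japan"},
    {"name": "Bob",   "country": "UK"},
    {"name": "Chika", "country": "Japan"},
]
# Hypothesis: "Two people are from Japan" -- entailed iff the count is 2.
entailed = count_rows(table, "country", "Japan") == 2
```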

Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

1 code implementation • ACL (mmsr, IWCS) 2021 • Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki

This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions.


Do Grammatical Error Correction Models Realize Grammatical Generalization?

no code implementations • Findings (ACL) 2021 • Masato Mita, Hitomi Yanaka

There has been an increased interest in data generation approaches to grammatical error correction (GEC) using pseudo data.

Grammatical Error Correction

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

1 code implementation • Findings (ACL) 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui

We also find that the generalization performance to unseen combinations is better when the form of meaning representations is simpler.

Negation • Systematic Generalization

Exploring Transitivity in Neural NLI Models through Veridicality

1 code implementation • EACL 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui

Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear.

Natural Language Inference • Natural Language Understanding

Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?

1 code implementation • ACL 2020 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui

This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.

Can neural networks understand monotonicity reasoning?

1 code implementation • WS 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos

Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures.

Data Augmentation • Natural Language Inference
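Monotonicity reasoning can be illustrated with a toy entailment check over the lexicon: "some" licenses replacing a noun with its hypernym (upward monotone), while "no" licenses the reverse (downward monotone). The lexicon and function below are hypothetical illustrations, not the paper's method:

```python
HYPERNYMS = {"dog": "animal"}  # toy lexical knowledge: dog is a kind of animal

def entails(quantifier, premise_noun, hypothesis_noun):
    """Toy monotonicity check for a noun substitution under a quantifier."""
    if quantifier == "some":
        # upward: "some dogs run" entails "some animals run"
        return HYPERNYMS.get(premise_noun) == hypothesis_noun
    if quantifier == "no":
        # downward: "no animals run" entails "no dogs run"
        return HYPERNYMS.get(hypothesis_noun) == premise_noun
    return False
```

The systematicity question the paper asks is whether a neural model applies this kind of rule to quantifier-noun combinations it never saw in training.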

Multimodal Logical Inference System for Visual-Textual Entailment

no code implementations • ACL 2019 • Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki

A large amount of research about multimodal inference across text and vision has been recently developed to obtain visually grounded word and sentence representations.

Automated Theorem Proving • Natural Language Inference • +2
