Search Results for author: Koji Mineshima

Found 33 papers, 18 papers with code

Computational Semantics and Evaluation Benchmark for Interrogative Sentences via Combinatory Categorial Grammar

no code implementations 22 Dec 2023 Hayate Funakura, Koji Mineshima

We present a compositional semantics for various types of polar questions and wh-questions within the framework of Combinatory Categorial Grammar (CCG).
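
As a rough illustration of the general approach (a toy grammar, not the paper's analysis; the category Sq for polar questions and all lexicon entries are invented for this sketch), a polar question can be parsed with NLTK's CCG module:

    from nltk.ccg import chart, lexicon

    # Toy lexicon: Sq is a made-up category for polar questions.
    lex = lexicon.fromstring('''
        :- Sq, S, NP
        does => (Sq/(S\\NP))/NP
        John => NP
        walk => S\\NP
    ''')

    parser = chart.CCGChartParser(lex, chart.DefaultRuleSet)
    for parse in parser.parse('does John walk'.split()):
        chart.printCCGDerivation(parse)  # derives category Sq for the question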

Question Answering

Evaluating Large Language Models with NeuBAROCO: Syllogistic Reasoning Ability and Human-like Biases

no code implementations 21 Jun 2023 Risako Ando, Takanobu Morishita, Hirohiko Abe, Koji Mineshima, Mitsuhiro Okada

Our findings demonstrate that current large language models struggle more with problems involving the three types of human-like biases examined in the study.
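
For illustration, a made-up item in the style of the benchmark (not actual NeuBAROCO data) showing belief bias, where a believable conclusion fails to follow logically:

    # Hypothetical syllogism item: the conclusion is believable but invalid,
    # so a belief-biased reasoner tends to answer "entailment".
    item = {
        "premise_1": "All dogs are animals.",
        "premise_2": "Some animals are not pets.",
        "conclusion": "Some dogs are not pets.",
        "gold_label": "neutral",  # the conclusion does not follow logically
        "bias_type": "belief",
    }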

Logical Reasoning

Compositional Evaluation on Japanese Textual Entailment and Similarity

1 code implementation 9 Aug 2022 Hitomi Yanaka, Koji Mineshima

We also present a stress-test dataset for compositional inference, created by transforming syntactic structures of sentences in JSICK to investigate whether language models are sensitive to word order and case particles.
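
For instance (an invented pair, not actual JSICK data), Japanese case particles mark grammatical roles, so scrambling word order can preserve meaning:

    premise    = "太郎が花子を呼んだ"  # Taro-NOM Hanako-ACC called
    hypothesis = "花子を太郎が呼んだ"  # same roles, scrambled order
    # Gold label: entailment. A model relying on surface word order rather
    # than case particles may wrongly predict non-entailment here.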

Natural Language Inference · Semantic Textual Similarity +1

Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

1 code implementation ACL (mmsr, IWCS) 2021 Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki

This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions.
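
An invented example of the aspectual inferences such a dataset targets (not an actual item):

    description = "A person is opening a box."  # progressive: action in progress
    hypothesis  = "The person opened the box."  # perfective: action completed
    # Gold label: non-entailment -- an ongoing action does not guarantee its
    # completion (the imperfective paradox).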

Negation

SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics

1 code implementation Findings (ACL) 2021 Hitomi Yanaka, Koji Mineshima, Kentaro Inui

We also find that the generalization performance to unseen combinations is better when the form of meaning representations is simpler.
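
As a sketch of what "simpler" means here (illustrative formats, not necessarily SyGNS's exact notation):

    sentence      = "every dog ran"
    fol           = "all x.(dog(x) -> run(x))"  # first-order logic, with variables
    variable_free = "every(dog, run)"           # simpler variable-free form
    # The reported finding: generalization to unseen combinations is better
    # when models are trained to emit the simpler representation.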

Negation · Systematic Generalization

Visual representation of negation: Real world data analysis on comic image design

no code implementations 21 May 2021 Yuri Sato, Koji Mineshima, Kazuhiro Ueda

There is a widely held view that visual representations (e.g., photographs and illustrations) cannot depict negation, such as the negation expressed by the sentence "the train is not coming".

Image Captioning · Image Classification +2

Exploring Transitivity in Neural NLI Models through Veridicality

1 code implementation EACL 2021 Hitomi Yanaka, Koji Mineshima, Kentaro Inui

Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear.
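
An invented example of the composed inferences the paper probes: veridical verbs like "know" entail their complement, and such steps should chain transitively:

    p1 = "John knows that Ann bought a dog."
    p2 = "Ann bought a dog."      # entailed by p1 (veridicality of "know")
    p3 = "Ann bought an animal."  # entailed by p2 (dog -> animal)
    # Transitivity: a consistent NLI model accepting p1 -> p2 and p2 -> p3
    # should also accept p1 -> p3.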

Natural Language Inference · Natural Language Understanding

Combining Event Semantics and Degree Semantics for Natural Language Inference

1 code implementation COLING 2020 Izumi Haruta, Koji Mineshima, Daisuke Bekki

In formal semantics, there are two well-developed semantic frameworks: event semantics, which treats verbs and adverbial modifiers using the notion of event, and degree semantics, which analyzes adjectives and comparatives using the notion of degree.
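
Roughly, the two frameworks assign representations like the following (simplified textbook notation, not necessarily the paper's):

    % Event semantics for "John ran quickly":
    \exists e\,(\mathrm{run}(e) \land \mathrm{agent}(e,\mathrm{john}) \land \mathrm{quick}(e))
    % Degree semantics for "John is taller than Bob":
    \exists d\,(\mathrm{tall}(\mathrm{john}, d) \land \forall d'\,(\mathrm{tall}(\mathrm{bob}, d') \to d > d'))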

Natural Language Inference

Logical Inferences with Comparatives and Generalized Quantifiers

1 code implementation ACL 2020 Izumi Haruta, Koji Mineshima, Daisuke Bekki

Comparative constructions pose a challenge in Natural Language Inference (NLI), which is the task of determining whether a text entails a hypothesis.
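
An invented example of the comparative inferences at issue (not an actual dataset item):

    premises   = ["Ann is taller than Bob.", "Bob is taller than Chris."]
    hypothesis = "Ann is taller than Chris."
    # Gold label: entailment, via the transitivity of the underlying degree
    # ordering -- the kind of step an automated theorem prover can verify.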

Automated Theorem Proving · Natural Language Inference

Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?

1 code implementation ACL 2020 Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui

This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.

Can neural networks understand monotonicity reasoning?

1 code implementation WS 2019 Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos

Monotonicity reasoning is an important reasoning skill for any intelligent natural language inference (NLI) model, in that it requires the ability to capture the interaction between lexical and syntactic structures.
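
Two invented examples: the quantifier "every" licenses downward inferences in its first argument and upward inferences in its second:

    premise         = "Every dog ran."
    hypothesis_down = "Every small dog ran."  # entailed: restrictor narrowed
    hypothesis_up   = "Every dog moved."      # entailed: predicate broadened
    # A systematic model should handle both directions across varied syntax.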

Data Augmentation · Natural Language Inference

Multimodal Logical Inference System for Visual-Textual Entailment

no code implementations ACL 2019 Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki

A large body of research on multimodal inference across text and vision has recently been developed to obtain visually grounded word and sentence representations.

Automated Theorem Proving · Natural Language Inference +2

Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation

no code implementations ACL 2019 Masashi Yoshikawa, Hiroshi Noji, Koji Mineshima, Daisuke Bekki

We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees.

Domain Adaptation · Math +1

Questions in Dependent Type Semantics

no code implementations WS 2019 Kazuki Watanabe, Koji Mineshima, Daisuke Bekki

The basic idea is to assign the same type to both declarative sentences and interrogative sentences, partly building on the recent proposal in Inquisitive Semantics.
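
A rough type-theoretic sketch of this idea (simplified; not the paper's exact analysis):

    % In dependent type semantics, a proposition is a type whose proofs are
    % its evidence; an answer to a question is then a proof of its type.
    % Declarative "John came":   \mathrm{came}(\mathrm{john})
    % Wh-question "Who came?":   \Sigma x{:}\mathrm{entity}.\;\mathrm{came}(x)
    % (a proof pairs a witness x with evidence that x came, i.e. an answer)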

Natural Language Inference · RTE +2

Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference

1 code implementation 15 Nov 2018 Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.
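
A minimal sketch of the axiom-injection half of that idea, using WordNet via NLTK (the paper additionally completes missing relations with knowledge base embeddings, which this sketch omits):

    # Requires: nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    def hypernym_axioms(word):
        """Emit first-order axioms such as: all x.(dog(x) -> canine(x))."""
        axioms = []
        for synset in wn.synsets(word, pos=wn.NOUN):
            for hyper in synset.hypernyms():
                hyper_word = hyper.lemmas()[0].name()
                axioms.append(f"all x.({word}(x) -> {hyper_word}(x))")
        return axioms

    print(hypernym_axioms("dog"))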

Knowledge Base Completion · Natural Language Inference +1

Consistent CCG Parsing over Multiple Sentences for Improved Logical Reasoning

no code implementations NAACL 2018 Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses to obtain their logical formulas.
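
A minimal sketch of the downstream proving step such a pipeline feeds into, using NLTK's resolution prover on hand-written formulas (the pipeline itself would produce these formulas from CCG parses):

    from nltk.sem import Expression
    from nltk.inference import ResolutionProver

    read = Expression.fromstring
    premises   = [read('dog(fido)'), read('all x.(dog(x) -> animal(x))')]
    hypothesis = read('animal(fido)')

    # True means the hypothesis is provable from the premises -> entailment.
    print(ResolutionProver().prove(hypothesis, premises))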

Automated Theorem Proving · Formal Logic +4

Visual Denotations for Recognizing Textual Entailment

no code implementations EMNLP 2017 Dan Han, Pascual Martínez-Gómez, Koji Mineshima

In the logic approach to Recognizing Textual Entailment, identifying phrase-to-phrase semantic relations is still an unsolved problem.

Natural Language Inference · Semantic Composition
