Search Results for author: Daisuke Bekki

Found 26 papers, 14 papers with code

Is Japanese CCGBank empirically correct? A case study of passive and causative constructions

no code implementations · 28 Feb 2023 · Daisuke Bekki, Hitomi Yanaka

The Japanese CCGBank serves as training and evaluation data for developing Japanese CCG parsers.

Semantic Parsing

Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference

1 code implementation ACL (mmsr, IWCS) 2021 Riko Suzuki, Hitomi Yanaka, Koji Mineshima, Daisuke Bekki

This paper introduces a new video-and-language dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions.

Negation

Combining Event Semantics and Degree Semantics for Natural Language Inference

1 code implementation COLING 2020 Izumi Haruta, Koji Mineshima, Daisuke Bekki

In formal semantics, there are two well-developed semantic frameworks: event semantics, which treats verbs and adverbial modifiers using the notion of event, and degree semantics, which analyzes adjectives and comparatives using the notion of degree.

Natural Language Inference
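
As a quick illustration of how the two frameworks divide the labor, here is a toy Python sketch (not the paper's combined system; every predicate and degree value below is invented):

    # Event semantics: "John ran quickly" denotes a conjunction of atoms
    # about an event e, so dropping the modifier conjunct is a valid
    # inference, while adding one is not.
    def entails(premise, hypothesis):
        return hypothesis <= premise  # every hypothesis conjunct must hold

    ran_quickly = {("run", "e"), ("agent", "e", "john"), ("quick", "e")}
    ran = {("run", "e"), ("agent", "e", "john")}
    print(entails(ran_quickly, ran))  # True: "John ran quickly" |= "John ran"
    print(entails(ran, ran_quickly))  # False: the reverse fails

    # Degree semantics: a gradable adjective maps entities to degrees, so
    # comparatives reduce to arithmetic, and transitivity of ">" validates
    # "Ann is taller than Bob" + "Bob is taller than Cal" |= "Ann ... Cal".
    height = {"ann": 170, "bob": 165, "cal": 160}
    def taller(x, y):
        return height[x] > height[y]
    print(taller("ann", "bob"), taller("bob", "cal"), taller("ann", "cal"))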

Logical Inferences with Comparatives and Generalized Quantifiers

1 code implementation ACL 2020 Izumi Haruta, Koji Mineshima, Daisuke Bekki

Comparative constructions pose a challenge in Natural Language Inference (NLI), which is the task of determining whether a text entails a hypothesis.

Automated Theorem Proving · Natural Language Inference
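
For a feel of how such inferences can be checked on a finite model, here is a hedged sketch (not the paper's prover; the sets and names are invented): generalized quantifiers become relations between sets, so quantified entailments reduce to set computations.

    # Standard relational meanings of three generalized quantifiers.
    def every(A, B): return A <= B           # all A are B
    def some(A, B):  return bool(A & B)      # some A are B
    def most(A, B):  return len(A & B) > len(A - B)

    dogs = {"rex", "fido", "spot"}
    animals = dogs | {"tweety"}
    runners = {"rex", "fido"}

    print(every(dogs, animals))  # True: every dog is an animal here
    print(most(dogs, runners))   # True: 2 of 3 dogs run
    print(some(runners, dogs))   # True: some runner is a dog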

Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?

1 code implementation ACL 2020 Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui

Evaluating systematic generalization on monotonicity inference, the authors find that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.

Can neural networks understand monotonicity reasoning?

1 code implementation WS 2019 Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos

Monotonicity reasoning is an important reasoning skill for any intelligent natural language inference (NLI) model, as it requires the ability to capture the interaction between lexical and syntactic structures.

Data Augmentation · Natural Language Inference
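
A minimal sketch of the reasoning pattern being probed (illustrative only; the hypernym table and polarity marks are assumptions, not the paper's dataset or models):

    # A quantifier marks each argument position as upward (+) or downward
    # (-) monotone; substitution with a hypernym is sound in a + position,
    # and with a hyponym in a - position.
    HYPER = {"poodle": {"dog", "animal"}, "dog": {"animal"}}  # transitively closed

    def sound_substitution(old, new, polarity):
        if polarity == "+":                      # may generalize
            return new in HYPER.get(old, set())
        return old in HYPER.get(new, set())      # "-": may specialize

    # "some" is upward monotone, "no" downward monotone, in its restrictor:
    print(sound_substitution("dog", "animal", "+"))  # "Some dogs run" |= "Some animals run"
    print(sound_substitution("dog", "poodle", "-"))  # "No dogs run" |= "No poodles run"
    print(sound_substitution("dog", "poodle", "+"))  # False: unsound direction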

Multimodal Logical Inference System for Visual-Textual Entailment

no code implementations ACL 2019 Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki

A large body of recent research on multimodal inference across text and vision aims to obtain visually grounded word and sentence representations.

Automated Theorem Proving · Natural Language Inference +2

Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation

no code implementations ACL 2019 Masashi Yoshikawa, Hiroshi Noji, Koji Mineshima, Daisuke Bekki

We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatically generating CCG corpora by exploiting cheaper dependency-tree resources.

Domain Adaptation · Math +1

Questions in Dependent Type Semantics

no code implementations WS 2019 Kazuki Watanabe, Koji Mineshima, Daisuke Bekki

The basic idea is to assign the same type to both declarative sentences and interrogative sentences, partly building on the recent proposal in Inquisitive Semantics.

Natural Language Inference · RTE +2
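
A loose Python rendering of that idea (purely illustrative; Dependent Type Semantics is a genuine dependent type theory, far richer than this): a declarative denotes a type whose proofs verify it, and a polar question denotes the sum type p + (p -> absurd), so answering a question means constructing a proof term of the same kind of semantic object.

    from dataclasses import dataclass

    @dataclass
    class Yes:
        proof: object        # an inhabitant of the type p

    @dataclass
    class No:
        refutation: object   # an inhabitant of p -> absurd

    def answer_polar(evidence_for_p):
        # An answer to "Does p hold?" is a proof of p + (p -> absurd).
        if evidence_for_p is not None:
            return Yes(evidence_for_p)
        return No(lambda proof: None)  # stands in for a real refutation

    print(answer_polar("john-ran-at-noon"))  # Yes(proof='john-ran-at-noon')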

Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference

1 code implementation · 15 Nov 2018 · Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.

Knowledge Base Completion · Natural Language Inference +1
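
The general recipe, sketched with invented details (the toy embeddings and threshold are assumptions, not the paper's trained model): when the prover lacks a lexical axiom, score candidate word relations with a knowledge-base-completion-style model and inject the high-scoring ones as axioms.

    import numpy as np

    # Toy vectors standing in for embeddings learned by a KBC model.
    emb = {"puppy": np.array([1.0, 0.1]),
           "dog":   np.array([0.9, 0.2]),
           "car":   np.array([0.0, 1.0])}

    def plausibility(x, y):
        # Cosine similarity as a stand-in for a learned relation score.
        a, b = emb[x], emb[y]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    axioms = set()
    for x, y in [("puppy", "dog"), ("puppy", "car")]:
        if plausibility(x, y) > 0.8:  # the threshold is an assumed knob
            axioms.add(f"forall x. {x}(x) -> {y}(x)")
    print(axioms)  # only the puppy -> dog axiom survives in this toy model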

Consistent CCG Parsing over Multiple Sentences for Improved Logical Reasoning

no code implementations NAACL 2018 Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses to obtain their logical formulas.

Automated Theorem Proving · Formal Logic +4
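
In miniature, that pipeline looks like this (a hand-rolled sketch; the two-word lexicon and single combinator are illustrative, not the paper's parser): each word pairs a CCG category with a lambda term, and the application combinators build a logical formula alongside the syntactic derivation.

    # Lexicon: category plus semantics for each word.
    LEXICON = {
        "John": ("NP", "john"),
        "runs": (r"S\NP", lambda subj: f"run({subj})"),
    }

    def backward_apply(arg, fn):
        # X  Y\X  =>  Y   (backward application)
        (xcat, xsem), (fcat, fsem) = arg, fn
        result, _, wanted = fcat.partition("\\")
        assert wanted == xcat, "category mismatch"
        return (result, fsem(xsem))

    cat, sem = backward_apply(LEXICON["John"], LEXICON["runs"])
    print(cat, sem)  # S run(john)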
