Search Results for author: Hiroshi Noji

Found 27 papers, 12 papers with code

Generating Racing Game Commentary from Vision, Language, and Structured Data

no code implementations INLG (ACL) 2021 Tatsuya Ishigaki, Goran Topic, Yumi Hamazono, Hiroshi Noji, Ichiro Kobayashi, Yusuke Miyao, Hiroya Takamura

In this study, we introduce a new large-scale dataset that contains aligned video data, structured numerical data, and transcribed commentaries that consist of 129,226 utterances in 1,389 races in a game.

Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion

no code implementations spnlp (ACL) 2022 Shunsuke Kando, Hiroshi Noji, Yusuke Miyao

On average, the performance of our best model represents a 19% increase in accuracy over the worst choice across all languages.

Language Modelling

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars

2 code implementations EMNLP 2021 Ryo Yoshida, Hiroshi Noji, Yohei Oseki

In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like.


An empirical analysis of existing systems and datasets toward general simple question answering

1 code implementation COLING 2020 Namgi Han, Goran Topic, Hiroshi Noji, Hiroya Takamura, Yusuke Miyao

Our analysis, including shifting of training and test datasets and training on a union of the datasets, suggests that our progress in solving the SimpleQuestions dataset does not indicate the success of more general simple question answering.

Natural Language Understanding, Question Answering

Learning with Contrastive Examples for Data-to-Text Generation

1 code implementation COLING 2020 Yui Uehara, Tatsuya Ishigaki, Kasumi Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao

Existing models for data-to-text tasks generate fluent but sometimes incorrect sentences, e.g., "Nikkei gains" is generated when "Nikkei drops" is expected.

Comment Generation, Data-to-Text Generation

CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters

2 code implementations COLING 2020 Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, Pierre Zweigenbaum, Junichi Tsujii

Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers.

Clinical Concept Extraction, Drug–drug Interaction Extraction +3

An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models

1 code implementation ACL 2020 Hiroshi Noji, Hiroya Takamura

Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement.


Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation

no code implementations ACL 2019 Masashi Yoshikawa, Hiroshi Noji, Koji Mineshima, Daisuke Bekki

We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees.

Domain Adaptation, Math +1

Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference

1 code implementation 15 Nov 2018 Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.

Knowledge Base Completion, Natural Language Inference +1

Dynamic Feature Selection with Attention in Incremental Parsing

no code implementations COLING 2018 Ryosuke Kohita, Hiroshi Noji, Yuji Matsumoto

One main challenge for incremental transition-based parsers, when future inputs are invisible, is to extract good features from a limited local context.

Dependency Parsing, Dialogue Generation +4

Consistent CCG Parsing over Multiple Sentences for Improved Logical Reasoning

no code implementations NAACL 2018 Masashi Yoshikawa, Koji Mineshima, Hiroshi Noji, Daisuke Bekki

In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses to obtain their logical formulas.

Automated Theorem Proving, Formal Logic +4

Can Discourse Relations be Identified Incrementally?

no code implementations IJCNLP 2017 Frances Yung, Hiroshi Noji, Yuji Matsumoto

Humans process language word by word and construct partial linguistic structures on the fly before the end of the sentence is perceived.

Discourse Parsing, Language Modelling +3

Effective Online Reordering with Arc-Eager Transitions

no code implementations WS 2017 Ryosuke Kohita, Hiroshi Noji, Yuji Matsumoto

We present a new transition system with word reordering for unrestricted non-projective dependency parsing.

Transition-Based Dependency Parsing

Adversarial Training for Cross-Domain Universal Dependency Parsing

no code implementations CoNLL 2017 Motoki Sato, Hitoshi Manabe, Hiroshi Noji, Yuji Matsumoto

We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique.

Dependency Parsing, Domain Adaptation

Multilingual Back-and-Forth Conversion between Content and Function Head for Easy Dependency Parsing

1 code implementation EACL 2017 Ryosuke Kohita, Hiroshi Noji, Yuji Matsumoto

Universal Dependencies (UD) is becoming a standard annotation scheme cross-linguistically, but it is argued that this scheme centering on content words is harder to parse than the conventional one centering on function words.

Dependency Parsing

Left-corner Methods for Syntactic Modeling with Universal Structural Constraints

no code implementations 1 Aug 2016 Hiroshi Noji

This connection suggests left-corner methods can be a tool to exploit the universal syntactic constraint that people avoid generating center-embedded structures.

