Search Results for author: Ichiro Kobayashi

Found 29 papers, 7 papers with code

Multi-Layer Random Perturbation Training for improving Model Generalization Efficiently

no code implementations EMNLP (BlackboxNLP) 2021 Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi

We propose a simple yet effective Multi-Layer RAndom Perturbation Training algorithm (RAPT) to enhance model robustness and generalization.

Generating Racing Game Commentary from Vision, Language, and Structured Data

no code implementations INLG (ACL) 2021 Tatsuya Ishigaki, Goran Topic, Yumi Hamazono, Hiroshi Noji, Ichiro Kobayashi, Yusuke Miyao, Hiroya Takamura

In this study, we introduce a new large-scale dataset that contains aligned video data, structured numerical data, and transcribed commentaries that consist of 129,226 utterances in 1,389 races in a game.

Construction and Validation of a Japanese Honorific Corpus Based on Systemic Functional Linguistics

no code implementations DCLRL (LREC) 2022 Muxuan Liu, Ichiro Kobayashi

In Japanese, there are different expressions used in speech depending on the speaker’s and listener’s social status, called honorifics.

Machine Translation

Towards a Language Model for Temporal Commonsense Reasoning

no code implementations RANLP 2021 Mayuko Kimura, Lis Kanashiro Pereira, Ichiro Kobayashi

Temporal commonsense reasoning is a challenging task as it requires temporal knowledge usually not explicit in text.

Language Modelling

Dialogue over Context and Structured Knowledge using a Neural Network Model with External Memories

no code implementations AACL (knlp) 2020 Yuri Murayama, Lis Kanashiro Pereira, Ichiro Kobayashi

The Differentiable Neural Computer (DNC), a neural network model with an addressable external memory, can solve algorithmic and question answering tasks.

Question Answering

AcTED: Automatic Acquisition of Typical Event Duration for Semi-supervised Temporal Commonsense QA

no code implementations 27 Mar 2024 Felix Virgo, Fei Cheng, Lis Kanashiro Pereira, Masayuki Asahara, Ichiro Kobayashi, Sadao Kurohashi

We propose a voting-driven semi-supervised approach to automatically acquire the typical duration of an event and use it as pseudo-labeled data.

Towards Parameter-Efficient Integration of Pre-Trained Language Models In Temporal Video Grounding

1 code implementation 26 Sep 2022 Erica K. Shimomoto, Edison Marrese-Taylor, Hiroya Takamura, Ichiro Kobayashi, Hideki Nakayama, Yusuke Miyao

This paper explores the task of Temporal Video Grounding (TVG): given an untrimmed video and a natural language sentence query, the goal is to recognize the action instances described by the query and determine their temporal boundaries in the video.

Benchmarking, Natural Language Queries, +2

Targeted Adversarial Training for Natural Language Understanding

1 code implementation NAACL 2021 Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, Ichiro Kobayashi

We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding.

Natural Language Understanding

Learning with Contrastive Examples for Data-to-Text Generation

1 code implementation COLING 2020 Yui Uehara, Tatsuya Ishigaki, Kasumi Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao

Existing models for data-to-text tasks generate fluent but sometimes incorrect sentences, e.g., "Nikkei gains" is generated when "Nikkei drops" is expected.

Comment Generation, Data-to-Text Generation

Generating Market Comments Referring to External Resources

1 code implementation WS 2018 Tatsuya Aoki, Akira Miyazawa, Tatsuya Ishigaki, Keiichi Goshima, Kasumi Aoki, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao

Comments on a stock market often include the reason or cause of changes in stock prices, such as "Nikkei turns lower as yen's rise hits exporters."

Text Generation

Describing Semantic Representations of Brain Activity Evoked by Visual Stimuli

no code implementations 19 Jan 2018 Eri Matsuo, Ichiro Kobayashi, Shinji Nishimoto, Satoshi Nishida, Hideki Asoh

The results demonstrate that the proposed model can decode brain activity and generate descriptions using natural language sentences.

Image Captioning, Sentence
