Search Results for author: Atsumoto Ohashi

Found 7 papers, 4 papers with code

JMultiWOZ: A Large-Scale Japanese Multi-Domain Task-Oriented Dialogue Dataset

1 code implementation · 26 Mar 2024 · Atsumoto Ohashi, Ryu Hirai, Shinya Iizuka, Ryuichiro Higashinaka

In this study, toward advancing research and development of task-oriented dialogue systems in Japanese, we constructed JMultiWOZ, the first large-scale Japanese multi-domain task-oriented dialogue dataset.

Dialogue State Tracking · Language Modelling · +3

Team Flow at DRC2023: Building Common Ground and Text-based Turn-taking in a Travel Agent Spoken Dialogue System

no code implementations · 21 Dec 2023 · Ryu Hirai, Shinya Iizuka, Haruhisa Iseno, Ao Guo, Jingjing Jiang, Atsumoto Ohashi, Ryuichiro Higashinaka

At the Dialogue Robot Competition 2023 (DRC2023), which was held to improve the capability of dialogue robots, our team developed a system that could build common ground and take more natural turns based on user utterance texts.

Team Flow at DRC2022: Pipeline System for Travel Destination Recommendation Task in Spoken Dialogue

no code implementations · 18 Oct 2022 · Ryu Hirai, Atsumoto Ohashi, Ao Guo, Hideki Shiroma, Xulin Zhou, Yukihiko Tone, Shinya Iizuka, Ryuichiro Higashinaka

After the preliminary round of the competition, we found that the low variation in training examples for the NLU module and failed recommendations caused by the dialogue policy were probably the main reasons for the system's limited performance.

Dialogue State Tracking · Natural Language Understanding · +1

Adaptive Natural Language Generation for Task-oriented Dialogue via Reinforcement Learning

1 code implementation · COLING 2022 · Atsumoto Ohashi, Ryuichiro Higashinaka

When a natural language generation (NLG) component is deployed in a real-world task-oriented dialogue system, it must generate not only natural utterances as learned from training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and to the user (e.g., users with low levels of understanding ability).

Natural Language Understanding · reinforcement-learning · +4

Post-processing Networks: Method for Optimizing Pipeline Task-oriented Dialogue Systems using Reinforcement Learning

1 code implementation · SIGDIAL (ACL) 2022 · Atsumoto Ohashi, Ryuichiro Higashinaka

Many studies have proposed methods for optimizing the dialogue performance of an entire pipeline task-oriented dialogue system by jointly training its modules using reinforcement learning.

reinforcement-learning · Reinforcement Learning (RL) · +1
