no code implementations • EMNLP (NLP4ConvAI) 2021 • Ao Guo, Atsumoto Ohashi, Ryu Hirai, Yuya Chiba, Yuiko Tsunomori, Ryuichiro Higashinaka
Endowing a task-oriented dialogue system with adaptiveness to user personality can greatly improve its performance on the dialogue task.
1 code implementation • 31 Mar 2024 • Atsumoto Ohashi, Ukyo Honda, Tetsuro Morimura, Yuu Jinnai
Minimum Bayes-risk (MBR) decoding has recently gained renewed attention in text generation.
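As general background for this entry (not the paper's specific contribution): MBR decoding samples a set of candidate outputs and selects the one with the highest expected utility, treating the other candidates as pseudo-references. A minimal sketch, assuming a simple token-overlap F1 as the utility (real systems typically use metrics such as BLEU or neural utilities; `unigram_f1` and `mbr_decode` are illustrative names, not from the paper):

```python
from collections import Counter

def unigram_f1(hyp: str, ref: str) -> float:
    """Token-overlap F1 as a stand-in utility function."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec = overlap / sum(h.values())
    rec = overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def mbr_decode(candidates, utility=unigram_f1):
    """Return the candidate with the highest expected utility,
    using the other candidates as pseudo-references."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        if score > best_score:
            best, best_score = hyp, score
    return best

cands = [
    "the cat sat on the mat",
    "a cat sat on the mat",
    "dogs bark loudly",
]
print(mbr_decode(cands))  # the consensus-like hypothesis wins
```

Intuitively, an outlier sample (here, the unrelated third sentence) scores low against the rest, so MBR favors a consensus output rather than the single highest-probability sample.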
1 code implementation • 26 Mar 2024 • Atsumoto Ohashi, Ryu Hirai, Shinya Iizuka, Ryuichiro Higashinaka
In this study, to advance research and development of task-oriented dialogue systems in Japanese, we constructed JMultiWOZ, the first large-scale, multi-domain, Japanese task-oriented dialogue dataset.
no code implementations • 21 Dec 2023 • Ryu Hirai, Shinya Iizuka, Haruhisa Iseno, Ao Guo, Jingjing Jiang, Atsumoto Ohashi, Ryuichiro Higashinaka
At the Dialogue Robot Competition 2023 (DRC2023), which was held to improve the capability of dialogue robots, our team developed a system that could build common ground and take more natural turns based on user utterance texts.
no code implementations • 18 Oct 2022 • Ryu Hirai, Atsumoto Ohashi, Ao Guo, Hideki Shiroma, Xulin Zhou, Yukihiko Tone, Shinya Iizuka, Ryuichiro Higashinaka
After the preliminary round of the competition, we found that the low variation in training examples for the NLU module and failed recommendations caused by the policy were probably the main reasons for the system's limited performance.
1 code implementation • COLING 2022 • Atsumoto Ohashi, Ryuichiro Higashinaka
When a natural language generation (NLG) component is implemented in a real-world task-oriented dialogue system, it must generate not only natural utterances as learned from training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and to the user (e.g., users with limited comprehension ability).
1 code implementation • SIGDIAL (ACL) 2022 • Atsumoto Ohashi, Ryuichiro Higashinaka
Many studies have proposed methods for optimizing the dialogue performance of an entire pipeline task-oriented dialogue system by jointly training modules in the system using reinforcement learning.