no code implementations • SIGDIAL (ACL) 2020 • Tsunehiro Arimoto, Ryuichiro Higashinaka, Kou Tanaka, Takahito Kawanishi, Hiroaki Sugiyama, Hiroshi Sawada, Hiroshi Ishiguro
We are studying a cooperation style in which multiple speakers can provide both advanced dialogue services and operator education.
no code implementations • EMNLP (NLP4ConvAI) 2021 • Ao Guo, Atsumoto Ohashi, Ryu Hirai, Yuya Chiba, Yuiko Tsunomori, Ryuichiro Higashinaka
Endowing a task-oriented dialogue system with adaptiveness to user personality can greatly help improve the performance of a dialogue task.
1 code implementation • NAACL (sdp) 2021 • Hiromi Narimatsu, Kohei Koyama, Kohji Dohsaka, Ryuichiro Higashinaka, Yasuhiro Minami, Hirotoshi Taira
Then, we create a dataset of academic papers that can be used for the evaluation of each task as well as of the series of tasks as a whole.
no code implementations • LREC 2022 • Michimasa Inaba, Yuya Chiba, Ryuichiro Higashinaka, Kazunori Komatani, Yusuke Miyao, Takayuki Nagai
This paper provides details of the dialogue task, the collection procedure and annotations, and an analysis of the characteristics of the dialogues and facial expressions, focusing on the age of the speakers.
no code implementations • LREC 2022 • Koh Mitsuda, Ryuichiro Higashinaka, Yuhei Oga, Sen Yoshida
To develop a dialogue system that can build common ground with users, the process of building common ground through dialogue needs to be clarified.
1 code implementation • SIGDIAL (ACL) 2021 • Ryuichiro Higashinaka, Masahiro Araki, Hiroshi Tsukahara, Masahiro Mizukami
This paper proposes a taxonomy of errors in chat-oriented dialogue systems.
no code implementations • LREC 2022 • Yuki Furuya, Koki Saito, Kosuke Ogura, Koh Mitsuda, Ryuichiro Higashinaka, Kazunori Takashio
Building common ground with users is essential for dialogue agent systems and robots to interact naturally with people.
no code implementations • LREC 2022 • Sanae Yamashita, Ryuichiro Higashinaka
In this study, we conducted a data-collection experiment in which one of two operators talked with a user, periodically switching with the other operator and exchanging notes at each handover.
no code implementations • LREC 2022 • Takuma Ichikawa, Ryuichiro Higashinaka
We also collected third-person evaluations of the gardens and, to identify dialogic factors behind high-quality collaborative work, analyzed the dialogues of the collaborations that scored highly on both the subjective and third-person evaluations.
no code implementations • LREC 2022 • Saki Sudo, Kyoshiro Asano, Koh Mitsuda, Ryuichiro Higashinaka, Yugo Takeuchi
This study investigates how the grounding process is composed and explores new interaction approaches that adapt to human cognitive processes, an area that has not yet been studied in depth.
no code implementations • ACL 2022 • Koh Mitsuda, Ryuichiro Higashinaka, Tingxuan Li, Sen Yoshida
Creating chatbots to behave like real people is important in terms of believability.
1 code implementation • 26 Mar 2024 • Atsumoto Ohashi, Ryu Hirai, Shinya Iizuka, Ryuichiro Higashinaka
In this study, to advance research and development of task-oriented dialogue systems in Japanese, we constructed JMultiWOZ, the first large-scale Japanese multi-domain task-oriented dialogue dataset.
no code implementations • 21 Dec 2023 • Ryu Hirai, Shinya Iizuka, Haruhisa Iseno, Ao Guo, Jingjing Jiang, Atsumoto Ohashi, Ryuichiro Higashinaka
At the Dialogue Robot Competition 2023 (DRC2023), which was held to improve the capability of dialogue robots, our team developed a system that could build common ground and take more natural turns based on user utterance texts.
no code implementations • 10 Nov 2023 • Junya Morita, Tatsuya Yui, Takeru Amaya, Ryuichiro Higashinaka, Yugo Takeuchi
For generative AIs to be trustworthy, establishing transparent common grounding with humans is essential.
no code implementations • 18 Oct 2022 • Ryu Hirai, Atsumoto Ohashi, Ao Guo, Hideki Shiroma, Xulin Zhou, Yukihiko Tone, Shinya Iizuka, Ryuichiro Higashinaka
After the preliminary round of the competition, we found that the low variation in training examples for the NLU module and failed recommendations caused by the dialogue policy were probably the main reasons for the system's limited performance.
1 code implementation • COLING 2022 • Atsumoto Ohashi, Ryuichiro Higashinaka
When a natural language generation (NLG) component is implemented in a real-world task-oriented dialogue system, it is necessary to generate not only natural utterances as learned on training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and the user (e.g., users with low levels of understanding ability).
1 code implementation • SIGDIAL (ACL) 2022 • Atsumoto Ohashi, Ryuichiro Higashinaka
Many studies have proposed methods for optimizing the dialogue performance of an entire pipeline task-oriented dialogue system by jointly training modules in the system using reinforcement learning.
no code implementations • LREC 2020 • Takashi Kodama, Ryuichiro Higashinaka, Koh Mitsuda, Ryo Masumura, Yushi Aono, Ryuta Nakamura, Noritake Adachi, Hidetoshi Kawabata
This paper concerns the problem of realizing consistent personalities in neural conversational modeling by using user-generated question-answer pairs as training data.
no code implementations • EMNLP 2018 • Ryo Masumura, Yusuke Shinohara, Ryuichiro Higashinaka, Yushi Aono
This is achieved by introducing both language-specific networks shared among different tasks and task-specific networks shared among different languages.
no code implementations • COLING 2018 • Ryo Masumura, Tomohiro Tanaka, Ryuichiro Higashinaka, Hirokazu Masataki, Yushi Aono
In addition, to effectively transfer knowledge between different task data sets and different language data sets, this paper proposes a partially shared modeling method that possesses both shared components and components specific to individual data sets.
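The partially shared idea can be illustrated with a toy sketch: one component whose parameters are reused across every data set, plus a separate head per data set. All names and dimensions below are our own illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class PartiallySharedModel:
    """Toy sketch of partially shared modeling (hypothetical architecture).

    One shared projection transfers knowledge across data sets; each
    data set additionally gets its own output head.
    """

    def __init__(self, input_dim, hidden_dim, dataset_classes):
        # Shared component: reused for every data set.
        self.W_shared = rng.normal(size=(input_dim, hidden_dim))
        # Data-set-specific components: one classification head each.
        self.heads = {name: rng.normal(size=(hidden_dim, n_cls))
                      for name, n_cls in dataset_classes.items()}

    def forward(self, x, dataset):
        h = np.tanh(x @ self.W_shared)    # knowledge shared across data sets
        return h @ self.heads[dataset]    # knowledge specific to this data set

# Hypothetical data sets differing in language and task.
model = PartiallySharedModel(8, 16, {"ja_dialogue": 4,
                                     "en_dialogue": 4,
                                     "ja_sentiment": 2})
x = rng.normal(size=(3, 8))               # a batch of 3 feature vectors
print(model.forward(x, "ja_sentiment").shape)  # (3, 2)
```

During training, gradients from every data set would update `W_shared`, while each head sees only its own data; that is the transfer mechanism the snippet alludes to.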
no code implementations • WS 2018 • Ryuichiro Higashinaka, Masahiro Mizukami, Hidetoshi Kawabata, Emi Yamaguchi, Noritake Adachi, Junji Tomita
Having consistent personalities is important for chatbots if we want them to be believable.
no code implementations • WS 2018 • Ryo Masumura, Tomohiro Tanaka, Atsushi Ando, Ryo Ishii, Ryuichiro Higashinaka, Yushi Aono
This paper proposes a fully neural-network-based dialogue-context online end-of-turn detection method that can utilize long-range interactive information extracted from both the speaker's utterances and the collocutor's utterances.
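The online setting described here can be sketched as a recurrent scorer that consumes one frame of features at a time and emits an end-of-turn probability at each step. The featurization and weights below are hypothetical placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_eot_scores(frames, W_x, W_h, w_out):
    """Score each incoming frame with an end-of-turn probability.

    `frames` stands in for feature vectors built from BOTH the speaker's
    and the collocutor's recent utterances (hypothetical featurization);
    the recurrent state `h` carries the long-range context.
    """
    h = np.zeros(W_h.shape[0])
    scores = []
    for x in frames:                          # online: one frame at a time
        h = np.tanh(W_x @ x + W_h @ h)        # update dialogue context
        scores.append(sigmoid(w_out @ h))     # P(end of turn | history so far)
    return scores

dim, hidden = 6, 12
frames = rng.normal(size=(5, dim))            # 5 toy frames
scores = online_eot_scores(frames,
                           rng.normal(size=(hidden, dim)),
                           rng.normal(size=(hidden, hidden)) * 0.1,
                           rng.normal(size=hidden))
print([round(float(s), 3) for s in scores])
```

A deployed detector would threshold these scores to decide, frame by frame, whether the system may take the turn.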
no code implementations • WS 2018 • Kazuki Sakai, Ryuichiro Higashinaka, Yuichiro Yoshikawa, Hiroshi Ishiguro, Junji Tomita
The results suggest that inserting the question-answer dialogue enhances familiarity and naturalness.
no code implementations • IJCNLP 2017 • Koh Mitsuda, Ryuichiro Higashinaka, Junji Tomita
In this paper, we explored the effect of conveying the understanding results of user utterances in a chat-oriented dialogue system through an experiment with human subjects.
no code implementations • IJCNLP 2017 • Ryo Masumura, Taichi Asami, Hirokazu Masataki, Kugatsu Sadamitsu, Kyosuke Nishida, Ryuichiro Higashinaka
In addition, this paper reveals relationships between hyperspherical QLMs and conventional QLMs.
no code implementations • WS 2016 • Yukinori Homma, Kugatsu Sadamitsu, Kyosuke Nishida, Ryuichiro Higashinaka, Hisako Asano, Yoshihiro Matsuo
This paper describes a hierarchical neural network that we propose for sentence classification, used to extract product information from product documents.
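The word-to-sentence hierarchy such a classifier implies can be sketched in a few lines: embed words, pool them into a sentence vector, then classify. The vocabulary, embeddings, and labels below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tiny vocabulary with random embeddings standing in for a
# learned word-level layer.
vocab = {"battery": 0, "lasts": 1, "ten": 2, "hours": 3, "great": 4, "design": 5}
emb = rng.normal(size=(len(vocab), 8))

def sentence_vector(words):
    """Word level -> sentence level: embed each word, then mean-pool."""
    idxs = [vocab[w] for w in words]
    return emb[idxs].mean(axis=0)

# Sentence-level classifier over 3 hypothetical labels,
# e.g. product spec / opinion / other.
W_cls = rng.normal(size=(8, 3))

def classify(words):
    logits = sentence_vector(words) @ W_cls
    return int(np.argmax(logits))

label = classify(["battery", "lasts", "ten", "hours"])
print(label in (0, 1, 2))  # True
```

The hierarchy matters because product documents mix specification sentences with boilerplate; classifying at the sentence level lets extraction keep only the informative ones.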
no code implementations • LREC 2016 • Ryuichiro Higashinaka, Kotaro Funakoshi, Yuka Kobayashi, Michimasa Inaba
Dialogue breakdown detection is a promising technique in dialogue systems.
no code implementations • COLING 2014 • Ryuichiro Higashinaka, Kenji Imamura, Toyomi Meguro, Chiaki Miyazaki, Nozomi Kobayashi, Hiroaki Sugiyama, Toru Hirano, Toshiro Makino, Yoshihiro Matsuo
Open-Domain Question Answering • Task-Oriented Dialogue Systems
no code implementations • LREC 2014 • Kugatsu Sadamitsu, Ryuichiro Higashinaka, Yoshihiro Matsuo
This paper proposes a method for extracting Daily Changing Words (DCWs), words that indicate which questions are real-time-dependent.
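The intuition behind DCWs can be sketched with a toy frequency test: a word whose occurrence swings sharply across time slices is a candidate, while a word that appears uniformly every day is not. This is our illustration of the intuition only, not the paper's extraction method; the corpus and threshold are invented.

```python
# Toy corpus: a few documents per day (hypothetical data).
daily_docs = {
    "day1": ["weather sunny today", "trending topic today"],
    "day2": ["weather rainy today", "stock price falls"],
    "day3": ["weather cloudy today", "election results announced"],
}

def frequency_by_day(word):
    """Count occurrences of `word` in each day's documents."""
    return [sum(doc.split().count(word) for doc in docs)
            for docs in daily_docs.values()]

def is_dcw_candidate(word, min_spread=1):
    """Flag words whose daily frequency varies by at least `min_spread`."""
    counts = frequency_by_day(word)
    return max(counts) - min(counts) >= min_spread

print(is_dcw_candidate("sunny"))    # True: appears only on day1
print(is_dcw_candidate("weather"))  # False: appears once every day
```

Words flagged this way would mark a question such as "Is it sunny?" as one whose answer cannot be served from a static knowledge base.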