Search Results for author: Seonghan Ryu

Found 7 papers, 1 paper with code

Deep Reinforcement Learning for Chatbots Using Clustered Actions and Human-Likeness Rewards

no code implementations • 27 Aug 2019 • Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Sungja Choi, Inchul Hwang, Jihie Kim

Training chatbots using the reinforcement learning paradigm is challenging due to high-dimensional states, infinite action spaces and the difficulty in specifying the reward function.

Reinforcement Learning (RL) +3

Ensemble-Based Deep Reinforcement Learning for Chatbots

no code implementations • 27 Aug 2019 • Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Yongjin Cho, Sungja Choi, Satish Indurthi, Seunghak Yu, Hyungtak Choi, Inchul Hwang, Jihie Kim

Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent.

Chatbot • Clustering +4

A Study on Dialogue Reward Prediction for Open-Ended Conversational Agents

no code implementations • 2 Dec 2018 • Heriberto Cuayáhuitl, Seonghan Ryu, Donghyeon Lee, Jihie Kim

The amount of dialogue history to include in a conversational agent is often underestimated and/or set in an empirical and thus possibly naive way.
