Agent-Aware Dropout DQN for Safe and Efficient On-line Dialogue Policy Learning

EMNLP 2017  ·  Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, Kai Yu

Hand-crafted rules and reinforcement learning (RL) are two popular choices for obtaining a dialogue policy. A rule-based policy is often reliable within its predefined scope but not self-adaptable, whereas RL evolves with data but often suffers from poor initial performance. We employ a companion learning framework to integrate the two approaches for on-line dialogue policy learning, in which a pre-defined rule-based policy acts as a "teacher" and guides a data-driven RL system by giving example actions as well as additional rewards. A novel agent-aware dropout Deep Q-Network (AAD-DQN) is proposed to address the problem of when to consult the teacher and how to learn from the teacher's experiences. AAD-DQN, as a data-driven student policy, provides (1) two separate experience memories for the student and the teacher, and (2) an uncertainty estimate obtained via dropout to control the timing of consultation and learning. Simulation experiments showed that the proposed approach can significantly improve both the safety and efficiency of on-line policy optimization compared to other companion learning approaches as well as supervised pre-training on a static dialogue corpus.
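To make the two core ideas concrete, here is a minimal sketch of dropout-based uncertainty guiding teacher consultation, with separate replay memories for student and teacher transitions. It assumes a PyTorch-style Q-network; all names (QNet, q_uncertainty, choose_action, buffer sizes, the uncertainty threshold) are illustrative and not taken from the paper.

import random
from collections import deque

import torch
import torch.nn as nn


class QNet(nn.Module):
    """Q-network whose dropout layers are kept active at decision time (MC dropout)."""

    def __init__(self, state_dim, n_actions, hidden=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return self.net(state)


def q_uncertainty(qnet, state, n_samples=10):
    """Estimate Q-value uncertainty from repeated stochastic forward passes."""
    qnet.train()  # keep dropout active during sampling
    with torch.no_grad():
        samples = torch.stack([qnet(state) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)


# Two separate experience memories: one for the student's own transitions,
# one for transitions generated by following the rule-based teacher.
student_memory = deque(maxlen=10_000)
teacher_memory = deque(maxlen=10_000)


def choose_action(qnet, state, teacher_policy, threshold=0.5):
    """Consult the teacher only when the student's Q-estimates are uncertain."""
    q_mean, q_std = q_uncertainty(qnet, state)
    if q_std.max().item() > threshold:
        # Student is unsure: take the teacher's action and store it in the teacher memory.
        return teacher_policy(state), teacher_memory
    # Student is confident: act greedily and store the transition in its own memory.
    return int(q_mean.argmax().item()), student_memory

In this sketch the returned memory indicates where the resulting transition should be stored, so the two buffers can later be sampled and weighted differently during Q-learning updates; the exact consultation and learning schedule in the paper may differ.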
