Incorporating Joint Embeddings into Goal-Oriented Dialogues with Multi-Task Learning

28 Jan 2020  ·  Firas Kassawat, Debanjan Chaudhuri, Jens Lehmann ·

Attention-based encoder-decoder neural network models have recently shown promising results in goal-oriented dialogue systems. However, these models struggle to reason over and incorporate stateful knowledge while preserving their end-to-end text generation functionality. Since such models can benefit greatly from user intent and knowledge graph integration, in this paper we propose an RNN-based end-to-end encoder-decoder architecture that is trained on joint embeddings of the knowledge graph and the corpus as input. The model learns intent classification alongside text generation within a multi-task learning paradigm, together with an additional regularization technique that penalizes generating the wrong entity as output. The model further incorporates a knowledge graph entity lookup during inference to guarantee that the generated output is consistent with the local knowledge graph provided. We evaluate the model using the BLEU score; the empirical evaluation shows that our proposed architecture can improve the performance of task-oriented dialogue systems.
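The multi-task objective described above can be illustrated with a minimal sketch. This is not the authors' implementation: the weighting coefficients (`alpha`, `beta`), the function names, and the form of the entity penalty (summing probability mass assigned to wrong entities) are assumptions for illustration; the paper combines a generation loss, an intent classification loss, and an entity regularizer in some weighted fashion.

```python
import math

def cross_entropy(probs, target_idx):
    # Negative log-likelihood of the target class.
    return -math.log(probs[target_idx])

def multitask_loss(gen_probs, gen_targets,
                   intent_probs, intent_target,
                   wrong_entity_probs, alpha=1.0, beta=0.5):
    """Hypothetical combined loss: generation + intent + entity penalty.

    gen_probs:          per-decoder-step probability distributions over tokens
    gen_targets:        gold token index at each decoder step
    intent_probs:       probability distribution over user intents
    intent_target:      gold intent index
    wrong_entity_probs: probability mass the decoder puts on incorrect
                        KG entities (the quantity the regularizer penalizes)
    alpha, beta:        assumed task-weighting hyperparameters
    """
    # Text generation loss: mean token-level cross-entropy.
    gen_loss = sum(cross_entropy(p, t)
                   for p, t in zip(gen_probs, gen_targets)) / len(gen_targets)
    # Auxiliary intent classification loss (the multi-task component).
    intent_loss = cross_entropy(intent_probs, intent_target)
    # Regularizer: penalize probability assigned to wrong entities.
    entity_penalty = sum(wrong_entity_probs)
    return gen_loss + alpha * intent_loss + beta * entity_penalty
```

In a real system the generation and intent losses would be computed over batches by the framework's cross-entropy primitives; the point of the sketch is only the additive multi-task structure of the objective.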




Results from the Paper

Task                  Dataset  Model      Metric Name        Metric Value  Global Rank
Goal-Oriented Dialog  Kvret    IJEEL-KVL  BLEU               0.1831        # 1
                                          Embedding Average  95.5          # 1
                                          Vector Extrema     97.4          # 1
                                          Greedy Matching    62.5          # 1

