Natural language generation (NLG) is a critical component in a spoken
dialogue system. This paper presents a Recurrent Neural Network-based
Encoder-Decoder architecture in which an LSTM-based decoder is introduced to
select and aggregate the semantic elements produced by an attention mechanism
over the input elements, and to produce the required utterances.
The proposed generator
can be trained jointly on both sentence planning and surface realization to
produce natural language sentences. The proposed model was extensively
evaluated on four different NLG datasets. The experimental results showed that
the proposed generators not only consistently outperform the previous methods
across all the NLG domains but also show an ability to generalize to a new,
unseen domain and learn from multi-domain datasets.
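For concreteness, the following is a minimal PyTorch sketch of the kind of attention-based LSTM decoder the abstract describes: at each step it scores the encoded input elements against the decoder state, aggregates them into a context vector, and feeds that context together with the previous word into the LSTM to predict the next token. All names and dimensions here (AttnDecoder, hid_dim, the additive-style scorer) are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of an attention-based LSTM decoder (illustrative,
    # not the paper's code); names and dimensions are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttnDecoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # The LSTM cell consumes the previous word embedding
            # concatenated with the attention-weighted input summary.
            self.cell = nn.LSTMCell(emb_dim + hid_dim, hid_dim)
            self.attn = nn.Linear(2 * hid_dim, 1)  # additive-style scorer
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, prev_word, state, enc_outs):
            # prev_word: (batch,) token ids; enc_outs: (batch, src, hid_dim)
            h, c = state
            # Score each encoded input element against the decoder state.
            scores = self.attn(torch.cat(
                [enc_outs, h.unsqueeze(1).expand_as(enc_outs)], dim=-1)
            ).squeeze(-1)                           # (batch, src)
            weights = F.softmax(scores, dim=-1)
            # Aggregate the input elements into one context vector.
            context = (weights.unsqueeze(-1) * enc_outs).sum(dim=1)
            h, c = self.cell(
                torch.cat([self.embed(prev_word), context], dim=-1), (h, c))
            return F.log_softmax(self.out(h), dim=-1), (h, c)

Recomputing the attention weights at every decoding step is what lets such a decoder select and aggregate different semantic elements as the utterance is generated, rather than relying on a single fixed encoding of the input.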