Recent deep learning models have shown improved results on natural language
generation (NLG) when sufficient annotated data is available. However,
limited training data may harm such models' performance.
Thus, how to build a
generator that can extract as much knowledge as possible from low-resource
data is a crucial issue in NLG. This paper presents a variational neural-based
generation model to tackle the NLG problem of having a limited labeled dataset,
in which we integrate variational inference into an encoder-decoder generator
and introduce a novel auxiliary autoencoding task with an effective training
procedure. Experiments showed that the proposed models not only outperform
previous methods when sufficient training data is available but also perform
acceptably well when the training data is scarce.
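To make the high-level idea concrete, the sketch below shows one common way to integrate variational inference into an encoder-decoder generator: an encoder summarizes the input, a Gaussian latent variable is sampled via the reparameterization trick, and the decoder is conditioned on that sample, trained with a reconstruction-plus-KL (ELBO) loss. This is a generic illustrative sketch, not the authors' exact architecture or auxiliary autoencoding procedure; all module names and hyperparameters are assumptions.

    # Minimal variational encoder-decoder sketch (illustrative, PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalEncoderDecoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128, z_dim=16):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.to_mu = nn.Linear(hid_dim, z_dim)      # posterior mean
            self.to_logvar = nn.Linear(hid_dim, z_dim)  # posterior log-variance
            self.z_to_h = nn.Linear(z_dim, hid_dim)     # init decoder state from z
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, src, tgt):
            # Encode the source sequence into a single summary vector.
            _, h = self.encoder(self.embed(src))
            mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
            # Reparameterization trick: z = mu + sigma * eps.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            # Decode conditioned on z (teacher forcing on shifted targets).
            h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
            dec_out, _ = self.decoder(self.embed(tgt[:, :-1]), h0)
            logits = self.out(dec_out)
            # ELBO: reconstruction loss + KL(q(z|x) || N(0, I)).
            rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                  tgt[:, 1:].reshape(-1))
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kl

In practice, the KL term of such models is often annealed from zero during training to avoid posterior collapse; the paper's own training procedure should be consulted for its specific objective.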