Text generation is the task of producing text that is indistinguishable from human-written text.
In particular, we propose a tree-based autoencoder to encode discrete text data into a continuous vector space, in which we optimize the adversarial perturbation.
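The core idea above can be illustrated with a minimal sketch: encode an input into a continuous latent vector, then optimize a small perturbation of that vector by gradient descent so that a victim classifier's loss increases. The linear encoder, victim model, dimensions, and loss weights here are toy assumptions standing in for the paper's tree-based autoencoder and target model.

```python
import torch

# Toy stand-ins: a linear "encoder" replaces the tree-based autoencoder,
# and a linear "victim" replaces the model under attack.
latent_dim, num_classes = 16, 2
encoder = torch.nn.Linear(32, latent_dim)
victim = torch.nn.Linear(latent_dim, num_classes)

x = torch.randn(1, 32)            # a (pre-embedded) input sentence
z = encoder(x).detach()           # continuous latent representation
delta = torch.zeros_like(z, requires_grad=True)
opt = torch.optim.SGD([delta], lr=0.1)
target = torch.tensor([0])        # true label the attack tries to flip

for _ in range(20):
    opt.zero_grad()
    # Maximize the victim's classification loss (hence the minus sign),
    # while keeping the perturbation small via an L2 penalty.
    loss = -torch.nn.functional.cross_entropy(victim(z + delta), target) \
           + 0.01 * delta.norm()
    loss.backward()
    opt.step()

adv_z = z + delta.detach()        # would be decoded back into text
```

In the paper's setting the perturbed latent would be decoded back into a discrete sentence; here the decoding step is omitted for brevity.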
We claim that an open set of templates is crucial for enriching phrase constructions and producing varied generations. Learning such templates is prohibitive, since it often requires a large corpus of paired <table, description> examples, which is seldom available.
Text generation is a critical and difficult natural language processing task.
The dominant approaches to sentence representation in natural language rely on learning embeddings from massive corpora.
First, a local alignment model based on multi-instance learning is applied to build the semantic correspondences within a data-text pair.
Neural language models have recently shown impressive gains in unconditional text generation, but controllable generation and manipulation of text remain challenging.
Score-function-based text generation approaches such as REINFORCE generally suffer from high computational complexity and training instability.
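For context, the score-function (REINFORCE) estimator referred to above weights the log-probability of sampled tokens by their reward, since sampling discrete tokens is not differentiable. A minimal sketch, with a toy vocabulary and an illustrative reward function (both assumptions, not from any specific paper):

```python
import torch

def reinforce_loss(logits, reward_fn):
    """Score-function estimator: grad E[R] = E[(R - b) * grad log p(token)].

    logits: (batch, vocab) unnormalized next-token scores.
    reward_fn: maps sampled token ids -> (batch,) rewards.
    """
    dist = torch.distributions.Categorical(logits=logits)
    tokens = dist.sample()            # discrete, non-differentiable step
    rewards = reward_fn(tokens)       # e.g. BLEU or a classifier score
    baseline = rewards.mean()         # simple baseline for variance reduction
    return -((rewards - baseline) * dist.log_prob(tokens)).mean()

logits = torch.randn(4, 10, requires_grad=True)
loss = reinforce_loss(logits, lambda t: (t == 3).float())  # toy reward
loss.backward()
```

The high variance of this estimator (even with a baseline) is one source of the training instability the sentence mentions.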
The style encoder extracts a style code from the reference example, and the text decoder generates texts based on the style code and the context.
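The two-module split described above can be sketched as follows: a recurrent style encoder compresses a reference example into a fixed-size style code, and a decoder consumes the context concatenated with that code at every step. All dimensions, module choices, and the GRU architecture are illustrative assumptions.

```python
import torch

class StyleEncoder(torch.nn.Module):
    """Compresses a reference sequence into a fixed-size style code."""
    def __init__(self, emb_dim=32, style_dim=8):
        super().__init__()
        self.rnn = torch.nn.GRU(emb_dim, style_dim, batch_first=True)

    def forward(self, reference):            # (batch, seq, emb_dim)
        _, h = self.rnn(reference)
        return h[-1]                         # style code: (batch, style_dim)

class TextDecoder(torch.nn.Module):
    """Generates next-token logits from context plus the style code."""
    def __init__(self, emb_dim=32, style_dim=8, vocab=100):
        super().__init__()
        self.rnn = torch.nn.GRU(emb_dim + style_dim, 64, batch_first=True)
        self.out = torch.nn.Linear(64, vocab)

    def forward(self, context, style):       # context: (batch, seq, emb_dim)
        # Broadcast the style code across every decoding step.
        style = style.unsqueeze(1).expand(-1, context.size(1), -1)
        h, _ = self.rnn(torch.cat([context, style], dim=-1))
        return self.out(h)                   # per-position vocab logits

enc, dec = StyleEncoder(), TextDecoder()
style = enc(torch.randn(2, 5, 32))           # style code from a reference
logits = dec(torch.randn(2, 7, 32), style)   # style-conditioned generation
```

Concatenating the style code at each step is one common conditioning choice; injecting it only into the initial hidden state is another.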