With counterfactual bandit learning, models can be trained from the positive and negative feedback received for historical predictions, without any labeled data.
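A minimal sketch of this idea, assuming logged interactions of the form (context, predicted action, propensity, reward) and a simple softmax policy; the inverse-propensity-scored objective below is a standard off-policy formulation, not necessarily the exact one used here, and all names are illustrative.

```python
# Sketch of counterfactual (off-policy) bandit learning from logged feedback.
# Each log entry holds the context features, the action the old system took,
# the propensity with which it took it, and the observed reward (positive or
# negative user feedback). No labeled data is required.
import torch
import torch.nn as nn

class SoftmaxPolicy(nn.Module):
    def __init__(self, n_features: int, n_actions: int):
        super().__init__()
        self.scorer = nn.Linear(n_features, n_actions)

    def forward(self, contexts: torch.Tensor) -> torch.Tensor:
        # Probability of every action for each context.
        return torch.softmax(self.scorer(contexts), dim=-1)

def ips_loss(policy, contexts, logged_actions, propensities, rewards):
    """Inverse-propensity-scored (IPS) estimate of negative expected reward."""
    probs = policy(contexts)
    new_prob = probs.gather(1, logged_actions.unsqueeze(1)).squeeze(1)
    weights = new_prob / propensities.clamp(min=1e-6)
    return -(weights * rewards).mean()

# Usage sketch: one gradient step on a batch of logged interactions.
policy = SoftmaxPolicy(n_features=16, n_actions=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

contexts = torch.randn(32, 16)                  # placeholder features
logged_actions = torch.randint(0, 4, (32,))     # actions the old model chose
propensities = torch.full((32,), 0.25)          # prob. of those actions under the old model
rewards = torch.randint(0, 2, (32,)).float()    # 1 = positive feedback, 0 = negative

optimizer.zero_grad()
loss = ips_loss(policy, contexts, logged_actions, propensities, rewards)
loss.backward()
optimizer.step()
```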
As a solution, we propose a multilingual paraphrase generation model that can be used to generate novel utterances for a target feature and target language.
(2) Conversely, when an encoder is used to warm-start seq2seq training, we show that unfreezing the encoder partway through training matches the task performance of a seq2seq model trained from scratch.
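A minimal PyTorch-style sketch of this unfreezing schedule, assuming a generic encoder-decoder model; the model classes, the placeholder data, and the `unfreeze_step` threshold are illustrative assumptions, not the authors' exact training setup.

```python
# Sketch of warm-starting a seq2seq model from a pre-trained encoder and
# unfreezing that encoder partway through training.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # warm-started from a pre-trained checkpoint
        self.decoder = decoder  # trained from scratch

    def forward(self, src):
        return self.decoder(self.encoder(src))

def set_encoder_trainable(model: Seq2Seq, trainable: bool) -> None:
    for param in model.encoder.parameters():
        param.requires_grad = trainable

# Hypothetical training loop: encoder frozen at first, unfrozen later.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # stand-in encoder
decoder = nn.Linear(64, 10)                              # stand-in decoder
model = Seq2Seq(encoder, decoder)

set_encoder_trainable(model, False)                      # start with a frozen encoder
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

unfreeze_step = 1000                                     # illustrative threshold
for step in range(2000):
    if step == unfreeze_step:
        set_encoder_trainable(model, True)               # unfreeze partway through training
        optimizer.add_param_group({"params": model.encoder.parameters()})
    src = torch.randn(8, 32)                             # placeholder batch
    tgt = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(src), tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```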
Recent progress on advanced neural models has pushed the performance of task-oriented dialog systems to near-perfect accuracy on existing benchmark datasets for intent classification and slot labeling.
In large-scale commercial dialog systems, users express the same request in a wide variety of alternative ways with a long tail of less frequent alternatives.
While recent progress on abstractive summarization has led to remarkably fluent summaries, factual errors in generated summaries still severely limit their use in practice.
Concept-map-based multi-document summarization has recently been proposed as a variant of the traditional summarization task with graph-structured summaries.
Concept-map-based multi-document summarization is a variant of traditional summarization that produces structured summaries in the form of concept maps.
Many techniques have been suggested to automatically extract different types of graphs showing, for example, entities or concepts and the relationships between them.
Concept maps can be used to concisely represent important information and bring structure into large document collections.
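To make the notion of a concept map concrete, the sketch below represents one as a set of (concept, relation, concept) propositions, assuming the common definition of concept maps as labeled graphs; the example content is purely illustrative.

```python
# Sketch of a concept map as a labeled graph: nodes are concepts and each
# edge carries a relation label, so the map is a set of propositions
# (concept, relation, concept). The example content is illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    source: str    # concept
    relation: str  # labeled edge
    target: str    # concept

concept_map = {
    Proposition("concept maps", "represent", "important information"),
    Proposition("concept maps", "bring structure into", "document collections"),
    Proposition("summarization systems", "produce", "concept maps"),
}

# Concepts are the nodes of the graph; relations are its labeled edges.
concepts = {p.source for p in concept_map} | {p.target for p in concept_map}
print(f"{len(concepts)} concepts, {len(concept_map)} relations")
```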