Furthermore, we compare models trained on our data with models trained on human-written data: ELI5 and ASQA for long-form question answering (LFQA), and CNN-DailyMail for summarization.
Intent detection with semantically similar fine-grained intents is a challenging task.
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications.
The field of emergent communication aims to understand the characteristics of communication as it emerges from artificial agents solving tasks that require information exchange.
Prominent questions about the role of sensory vs. linguistic input in the way we acquire and use language have been extensively studied in the psycholinguistic literature.
Models for text generation have become central to many research tasks, especially the generation of sentence corpora.
The NNSI reduces the need for manual labeling by automatically selecting highly ambiguous samples and labeling them with high accuracy.
We show that intent prediction can be improved by training a deep text-to-text neural model to generate successive user utterances from unlabeled dialogue data.
Data balancing is a known technique for improving the performance of classification tasks.
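The sentence above is only a one-line summary, so the general idea can be illustrated with a minimal sketch of naive random oversampling; the `balance_by_oversampling` helper and its strategy are illustrative assumptions, not the specific method referred to here.

```python
import random

def balance_by_oversampling(samples, seed=0):
    """Randomly duplicate minority-class samples until every class
    matches the size of the largest class (naive random oversampling)."""
    rng = random.Random(seed)
    by_label = {}
    for text, label in samples:
        by_label.setdefault(label, []).append((text, label))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        # Top up smaller classes by sampling with replacement.
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

# Toy imbalanced dataset: 5 positive vs. 2 negative examples.
data = [("good", "pos")] * 5 + [("bad", "neg")] * 2
balanced = balance_by_oversampling(data)
```

After balancing, both classes contribute the same number of training examples, which is often enough to stop a classifier from collapsing onto the majority class.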
Building on recent advances in natural language modeling and text generation, we propose a novel data augmentation method for text classification tasks.
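As a rough illustration of label-preserving text augmentation, the sketch below uses simple word dropout as a crude surface-level stand-in; the `word_dropout_augment` helper and its parameters are assumptions for illustration, not the generation-based method proposed here.

```python
import random

def word_dropout_augment(text, label, n_copies=3, p_drop=0.2, seed=0):
    """Create extra labeled examples by randomly dropping words.

    A crude surface-level stand-in for generation-based augmentation:
    each copy keeps the original label but perturbs the surface form.
    """
    rng = random.Random(seed)
    tokens = text.split()
    augmented = []
    for _ in range(n_copies):
        kept = [t for t in tokens if rng.random() > p_drop]
        if not kept:  # never emit an empty example
            kept = [rng.choice(tokens)]
        augmented.append((" ".join(kept), label))
    return augmented

extra = word_dropout_augment("the service was quick and friendly", "positive")
```

The augmented copies are appended to the original training set, giving the classifier several noisy variants of each example under the same label.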
We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.
At inference time, we replace each estimator with its existing application counterpart and let the base network solve the task by interacting with the existing application.
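The two sentences above describe training against differentiable estimators and swapping in the real black-box calls at inference. A minimal sketch of that swap, with hypothetical names (`HybridModule`, `estimator`, `black_box`) that are assumptions rather than the paper's actual interface, could look like:

```python
class HybridModule:
    """Wraps a trainable estimator of a black-box function.

    During training the differentiable estimator is used, so gradients
    can flow through it into the base network; at inference it is
    replaced by the real, non-differentiable black-box call.
    Names and structure are illustrative assumptions.
    """

    def __init__(self, estimator, black_box):
        self.estimator = estimator
        self.black_box = black_box
        self.training = True

    def __call__(self, x):
        return self.estimator(x) if self.training else self.black_box(x)

# Toy example: the black box rounds its input (non-differentiable),
# while the estimator is a smooth identity-like stand-in for it.
module = HybridModule(estimator=lambda x: x, black_box=round)
train_out = module(2.4)        # training mode: estimator output
module.training = False
infer_out = module(2.4)        # inference mode: real black-box output
```

The design point is that the base network never needs gradients from the application itself: only the estimator must be differentiable, and it is discarded once training ends.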