In this paper, we present Conquest, a framework for generating synthetic datasets for contextual question paraphrasing.
Building a conversational embodied agent to execute real-life tasks has been a long-standing yet challenging research goal, as it requires effective human-agent communication, multi-modal understanding, and long-range sequential decision making.
A long-term goal of AI research is to build intelligent agents that can communicate with humans in natural language, perceive the environment, and perform real-world tasks.
However, the performance of pre-trained models on task-oriented dialog tasks is still under-explored.
Conversational systems enable numerous valuable applications, and question-answering is an important component underlying many of these.
We propose a data annealing transfer learning procedure to bridge the performance gap on informal natural language understanding tasks.
The recent success of large pre-trained language models such as BERT and GPT-2 has suggested the effectiveness of incorporating language priors in downstream dialog generation tasks.