In this work, we perform the first large-scale analysis of discourse in media dialog and its impact on generative modeling of dialog turns, with a focus on interrogative patterns and use of external knowledge.
Prior studies have used pre-trained language models, or relied on small paired recipe datasets (e.g., a recipe paired with a similar one that satisfies a dietary constraint).
Understanding human language often necessitates understanding entities and their place in a taxonomy of knowledge -- their types.
Conversational recommender systems offer the promise of interactive, engaging ways for users to find items they enjoy.
The large population of home cooks with dietary restrictions is under-served by existing cooking resources and recipe generation models.
Dialog State Tracking (DST), an integral part of modern dialog systems, aims to track user preferences and constraints (slots) in task-oriented dialogs.
Speech recognition (ASR) and speaker diarization (SD) models have traditionally been trained separately to produce rich conversation transcripts with speaker labels.
Compared to existing large-scale proxies for conversational data, language models trained on our dataset exhibit better zero-shot out-of-domain performance on existing spoken dialog datasets, demonstrating its usefulness in modeling real-world conversations.
Existing approaches to recipe generation are unable to create recipes for users with culinary preferences but incomplete knowledge of ingredients in specific dishes.