SCoRe: Pre-Training for Context Representation in Conversational Semantic Parsing

Conversational Semantic Parsing (CSP) is the task of converting a sequence of natural language queries to formal language (e.g., SQL, SPARQL) that can be executed against a structured ontology (e.g., databases, knowledge bases). To accomplish this task, a CSP system needs to model the relation between the unstructured language utterance and the structured ontology while representing the multi-turn dynamics of the dialog. Pre-trained language models (LMs) are the state of the art for various natural language processing tasks. However, existing pre-trained LMs that use language modeling training objectives over free-form text have limited ability to represent natural language references to contextual structural data. In this work, we present SCoRe, a new pre-training approach for CSP tasks designed to induce representations that capture the alignment between the dialogue flow and the structural context. We demonstrate the broad applicability of SCoRe to CSP tasks by combining it with strong base systems on four different tasks (SParC, CoSQL, MWoZ, and SQA). We show that SCoRe improves the performance of all these base systems by a significant margin and achieves state-of-the-art results on three of them. Our implementation and model checkpoints will be available at an anonymous URL.
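
The abstract states that SCoRe is combined with strong base systems rather than used as a standalone parser. The sketch below is an assumption-laden illustration, not the authors' code: it shows how a SCoRe-style encoder checkpoint might replace the stock BERT/RoBERTa encoder inside a base parser. The checkpoint path "path/to/score-checkpoint" is a placeholder, and the turn/schema serialization is chosen for illustration only.

```python
# Minimal sketch (hypothetical): swapping a SCoRe-style pre-trained encoder into
# a conversational semantic parsing pipeline. The checkpoint path is a placeholder.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("path/to/score-checkpoint")
encoder = AutoModel.from_pretrained("path/to/score-checkpoint")

# CSP input: prior dialogue turns plus a serialization of the structured context
# (e.g., table and column names), joined with the tokenizer's separator token.
turns = ["Show me all flights from Denver.", "Only the ones arriving before noon."]
schema = ["flights", "flights.departure_city", "flights.arrival_time"]
text = f" {tokenizer.sep_token} ".join(turns + schema)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = encoder(**inputs)

# Contextualized token representations that a downstream parser (e.g., a RAT-SQL
# decoder or a TripPy state tracker) would consume in place of vanilla LM outputs.
hidden_states = outputs.last_hidden_state
print(hidden_states.shape)  # (1, sequence_length, hidden_dim)
```

In this setup only the encoder initialization changes; the base system's decoder or dialogue state tracker is kept as-is, which is consistent with the abstract's framing of SCoRe as a pre-training approach layered onto existing systems.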

PDF Abstract (NeurIPS Workshop 2020)

Results from the Paper


Task                                   Dataset        Model             Metric                       Value   Global Rank
Dialogue State Tracking                CoSQL          RAT-SQL + SCoRe   question match accuracy      51.6    #4
                                                                        interaction match accuracy   21.2    #4
Multi-domain Dialogue State Tracking   MultiWOZ 2.1   TripPy + SCoRe    Joint Acc                    60.48   #3
Text-to-SQL                            SParC          RAT-SQL + SCoRe   interaction match accuracy   38.1    #4
                                                                        question match accuracy      62.4    #4
