Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

Recently, there has been significant interest in learning contextual representations for various NLP tasks by leveraging large-scale text corpora to train large neural language models with self-supervised objectives, such as Masked Language Modeling (MLM). However, in a pilot study we observe three issues when existing general-purpose language models are applied to text-to-SQL semantic parsing: they fail to detect column mentions in the utterances, fail to infer column mentions from cell values, and fail to compose complex SQL queries.
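To make the first failure mode concrete, here is a toy illustration (not from the paper) of why naive surface-form matching cannot reliably detect column mentions: utterances typically paraphrase or inflect column names, so exact lexical overlap misses them, which is the gap contextual pre-training aims to close. The function name and schema below are hypothetical.

```python
import re

def link_columns(utterance, columns):
    """Naive schema linker: return columns whose underscore-separated
    name words all appear verbatim in the utterance. Real parsers need
    contextual models because mentions are paraphrased or implicit."""
    tokens = set(re.findall(r"\w+", utterance.lower()))
    matches = []
    for col in columns:
        col_words = set(col.lower().split("_"))
        if col_words <= tokens:  # every word of the column name must appear
            matches.append(col)
    return matches

# "singers" != "singer", "older" != "age": exact matching finds nothing,
# even though two columns are clearly referenced.
print(link_columns("How many singers are older than 30?",
                   ["singer_id", "name", "age", "country"]))  # → []
```

Only a literal mention such as "What is the age of each singer?" would match the `age` column here, which is exactly the brittleness the pilot study attributes to general-purpose language models.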


Results from the Paper


TASK              DATASET  MODEL         METRIC           VALUE  GLOBAL RANK
Semantic Parsing  Spider   RATSQL + GAP  Accuracy         69.7   # 1
Text-to-SQL       Spider   RATSQL + GAP  Accuracy (Dev)   71.8   # 1
Text-to-SQL       Spider   RATSQL + GAP  Accuracy (Test)  69.7   # 1
