RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL

Relational structures such as schema linking and schema encoding have been validated as a key component for accurately translating natural language into SQL queries. However, introducing these structural relations comes at a price: it often results in a specialized model architecture, which largely prohibits the use of large pretrained models in text-to-SQL. To address this problem, we propose RASAT: a Transformer seq2seq architecture augmented with relation-aware self-attention that can leverage a variety of relational structures while at the same time effectively inheriting the pretrained parameters from the T5 model. Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario. Experimental results on three widely used text-to-SQL datasets, covering both single-turn and multi-turn scenarios, show that RASAT achieves competitive results on all three benchmarks, reaching state-of-the-art performance in execution accuracy (80.5% EX on Spider, 53.1% IEX on SParC, and 37.5% IEX on CoSQL).
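
To make the mechanism concrete, the sketch below is a minimal, single-head PyTorch implementation of relation-aware self-attention in the style of Shaw et al. (2018), the mechanism RASAT injects into T5's self-attention. The `num_relations` vocabulary and the pairwise `rel_ids` matrix (relation types such as schema links or co-reference edges) are illustrative assumptions for this sketch, not the paper's exact implementation, which is multi-head and initialized from pretrained T5 weights.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareSelfAttention(nn.Module):
    """Single-head relation-aware self-attention (Shaw et al., 2018).

    Hypothetical, simplified sketch: RASAT itself uses the multi-head
    variant and inherits W_q / W_k / W_v from pretrained T5 weights.
    """

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.d = d_model
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        # Learned embeddings r^K_ij and r^V_ij, one per relation type
        # (e.g., schema-linking, foreign-key, co-reference relations).
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)

    def forward(self, x: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, d_model); rel_ids: (seq_len, seq_len) relation type ids
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        r_k = self.rel_k(rel_ids)  # (seq, seq, d)
        r_v = self.rel_v(rel_ids)  # (seq, seq, d)
        # Attention logits biased by the relation: e_ij = q_i . (k_j + r^K_ij) / sqrt(d)
        scores = (q.unsqueeze(1) * (k.unsqueeze(0) + r_k)).sum(-1) / math.sqrt(self.d)
        attn = F.softmax(scores, dim=-1)  # (seq, seq)
        # Values biased by the relation: z_i = sum_j attn_ij * (v_j + r^V_ij)
        return (attn.unsqueeze(-1) * (v.unsqueeze(0) + r_v)).sum(dim=1)

# Toy usage: 12 tokens, 64-dim states, 5 relation types (all shapes illustrative).
layer = RelationAwareSelfAttention(d_model=64, num_relations=5)
x = torch.randn(12, 64)
rel_ids = torch.randint(0, 5, (12, 12))
out = layer(x, rel_ids)  # (12, 64)
```

Note the design choice this illustrates: the relation embeddings only add biases to the attention logits and values, leaving the projection matrices' role unchanged, which is what allows the pretrained T5 parameters to be inherited intact.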


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Dialogue State Tracking | CoSQL | RASAT+PICARD | question match accuracy | 55.7 | #1 |
| Dialogue State Tracking | CoSQL | RASAT+PICARD | interaction match accuracy | 26.5 | #1 |
| Text-to-SQL | SParC | RASAT+PICARD | interaction match accuracy | 45.2 | #1 |
| Text-to-SQL | SParC | RASAT+PICARD | question match accuracy | 67.7 | #1 |
| Text-to-SQL | Spider | RASAT+PICARD | Exact Match Accuracy (Dev) | 75.3 | #2 |
| Text-to-SQL | Spider | RASAT+PICARD | Execution Accuracy (Dev) | 80.5 | #1 |
| Text-to-SQL | Spider | RASAT | Exact Match Accuracy (Dev) | 72.6 | #3 |
| Text-to-SQL | Spider | RASAT | Execution Accuracy (Dev) | 76.6 | #3 |
