PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of 10,000s of sub-word tokens. When fine-tuned to target constrained formal languages like SQL, these models often generate invalid code, rendering it unusable. We propose PICARD (code and trained models available at https://github.com/ElementAI/picard), a method for constraining auto-regressive decoders of language models through incremental parsing. PICARD helps to find valid output sequences by rejecting inadmissible tokens at each decoding step. On the challenging Spider and CoSQL text-to-SQL translation tasks, we show that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.
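The core idea is easy to sketch: at every decoding step, the candidate next tokens are checked by an incremental parser, and any token that could no longer lead to a valid SQL program is rejected before it is emitted. Below is a minimal, self-contained sketch of that idea under strong simplifications: the toy vocabulary, the random "model" scores, and the parenthesis-balance check standing in for a real incremental SQL parser are all illustrative assumptions, not the authors' implementation (which is written in Haskell and parses SQL with schema awareness).

```python
# Sketch of PICARD-style constrained greedy decoding (illustrative only).
import random

VOCAB = ["SELECT", " *", " FROM", " t", " (", " )", " WHERE", " x", " =", " 1", "<eos>"]
EOS = VOCAB.index("<eos>")

def score_next(prefix_ids):
    """Stand-in for a language model: returns a score per vocabulary id."""
    random.seed(len(prefix_ids))  # deterministic toy scores
    return [random.random() for _ in VOCAB]

def is_admissible(prefix_ids):
    """Toy incremental check: parentheses must never close more than they open.
    PICARD instead runs an incremental SQL parser over the detokenized prefix."""
    depth = 0
    for tok in (VOCAB[i] for i in prefix_ids):
        depth += tok.count("(") - tok.count(")")
        if depth < 0:
            return False
    return True

def constrained_greedy_decode(max_len=12):
    """At each step, take the highest-scoring token whose extended prefix the
    incremental check still accepts; stop if no token is admissible."""
    prefix = []
    for _ in range(max_len):
        scores = score_next(prefix)
        ranked = sorted(range(len(VOCAB)), key=lambda i: -scores[i])
        chosen = next((i for i in ranked if is_admissible(prefix + [i])), None)
        if chosen is None:
            break
        prefix.append(chosen)
        if chosen == EOS:
            break
    return "".join(VOCAB[i] for i in prefix if i != EOS)

print(constrained_greedy_decode())
```

In the paper, this filtering is applied to the top candidates of a beam search over a fine-tuned T5 decoder, so the language model's scores still drive the search while the parser guarantees that only prefixes of valid SQL survive.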

Published at EMNLP 2021.

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Dialogue State Tracking | CoSQL | T5-3B + PICARD | Question match accuracy | 54.6 | #2 |
| Dialogue State Tracking | CoSQL | T5-3B + PICARD | Interaction match accuracy | 23.7 | #3 |
| Text-to-SQL | Spider | T5-3B + PICARD | Exact Match Accuracy (Dev) | 75.5 | #2 |
| Text-to-SQL | Spider | T5-3B + PICARD | Exact Match Accuracy (Test) | 71.9 | #2 |
| Semantic Parsing | Spider | T5-3B + PICARD | Accuracy | 71.9 | #5 |
