48 papers with code • 2 benchmarks • 8 datasets
Our model outperforms the previous state-of-the-art model by a large margin and achieves new state-of-the-art results on both datasets.
The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query.
Ranked #3 on Semantic Parsing on Spider
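The alignment problem described above can be illustrated with a minimal, purely string-based sketch of schema linking: matching question tokens against column names. This is only an illustration of the idea; the paper's model learns this alignment jointly with the parser rather than using hand-written matching, and all names below are hypothetical.

```python
# Hypothetical sketch: name-based schema linking between question tokens
# and database columns. A token links to a column if it equals the column
# name or one of its underscore-separated words.

def link_schema(question_tokens, columns):
    """Return (token_index, column) pairs for matching mentions."""
    links = []
    for i, tok in enumerate(question_tokens):
        t = tok.lower()
        for col in columns:
            parts = col.lower().split("_")
            if t == col.lower() or t in parts:
                links.append((i, col))
    return links

links = link_schema(
    ["show", "the", "name", "and", "age", "of", "all", "singers"],
    ["singer_id", "name", "age", "country"],
)
print(links)  # [(2, 'name'), (4, 'age')]
```

Note that "singers" fails to link to `singer_id` here, which is exactly the kind of soft match a learned alignment handles and exact matching does not.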
We explore using T5 (Raffel et al., 2019) to directly translate natural language questions into SQL statements.
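Casting text-to-SQL as translation requires serializing the question and the database schema into a single input string for the seq2seq model. A hedged sketch of one common serialization follows; the exact format varies between implementations, and the function and field layout here are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch: serialize a question plus schema into one input
# string for a seq2seq model such as T5. Format assumed here:
#   question | db_id | table : col, col | table : col, ...

def serialize_input(question, db_id, schema):
    """schema: dict mapping table name -> list of column names."""
    tables = " | ".join(
        f"{table} : {', '.join(cols)}" for table, cols in schema.items()
    )
    return f"{question} | {db_id} | {tables}"

text = serialize_input(
    "How many singers do we have?",
    "concert_singer",
    {"singer": ["singer_id", "name", "age"]},
)
print(text)
# How many singers do we have? | concert_singer | singer : singer_id, name, age
```

The model is then fine-tuned to emit the SQL string as the target sequence, with no grammar-specific decoder.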
Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations.
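The reward signal described above can be sketched as a simple set-membership check: the decoder earns reward 1 when its output matches any serialization in the set of SQL statements equivalent to the gold query, and 0 otherwise. This is a minimal illustration with naive whitespace/case normalization, not the papers' actual reward implementation.

```python
# Hypothetical sketch: binary RL reward that accepts any equivalent
# serialization of the gold SQL query (normalization here is naive).

def normalize(sql):
    return " ".join(sql.lower().split())

def reward(generated, equivalent_serializations):
    gen = normalize(generated)
    return 1.0 if any(gen == normalize(s) for s in equivalent_serializations) else 0.0

equivalents = [
    "SELECT name, age FROM singer",
    "SELECT age, name FROM singer",  # column order permuted, same meaning
]
print(reward("select  name, age from singer", equivalents))  # 1.0
print(reward("select country from singer", equivalents))     # 0.0
```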
A significant amount of the world's knowledge is stored in relational databases.
Ranked #8 on Code Generation on WikiSQL
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.
Ranked #1 on Semantic Parsing on WikiTableQuestions
We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models.
We define a new complex, cross-domain semantic parsing and text-to-SQL task in which different complex SQL queries and databases appear in the train and test sets.
Ranked #4 on Semantic Parsing on Spider