(Image credit: SyntaxSQLNet)
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but they have been found to struggle with out-of-distribution compositional generalization.
Text-to-SQL is a crucial task in developing methods that enable computers to understand natural language.
Semantic parsing has long been a fundamental problem in natural language processing.
We present BRIDGE, a powerful sequential architecture for modeling dependencies between natural language questions and relational databases in cross-DB semantic parsing.
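One core idea in cross-DB parsers of this kind is serializing the question together with the database schema into a single sequence that a pretrained encoder can attend over. The sketch below illustrates that style of serialization; the tag tokens, helper function, and toy schema are illustrative assumptions, not BRIDGE's exact implementation.

```python
def serialize(question: str, schema: dict) -> str:
    # Concatenate the question with every table and column of the target
    # database so a pretrained encoder (e.g. BERT) sees both in one
    # sequence. The [T]/[C] tag names here are illustrative.
    parts = [question]
    for table, columns in schema.items():
        parts.append(f"[T] {table}")
        parts.extend(f"[C] {col}" for col in columns)
    return " ".join(parts)

schema = {"singer": ["name", "country", "age"]}
print(serialize("How many singers are younger than 30?", schema))
# -> How many singers are younger than 30? [T] singer [C] name [C] country [C] age
```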
The dynamic schema-state and SQL-state representations are then used to decode the SQL query corresponding to the current utterance.
Despite the recent success of neural task-oriented dialogue systems, deploying such a system in the real world involves accessing large-scale knowledge bases (KBs), which cannot simply be encoded by neural approaches such as memory networks.
Our model outperforms the previous state-of-the-art model by a large margin and achieves new state-of-the-art results on the two datasets.
In natural language interfaces to databases (NLIDB), the text-to-SQL technique allows users to query databases using natural language questions.
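As a concrete illustration of that mapping, here is a hypothetical question paired with the SQL a text-to-SQL system might produce, executed against a toy SQLite database (the schema and data are invented for the example):

```python
import sqlite3

# Toy in-memory database; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, country TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO singer VALUES (?, ?, ?)",
    [("Ana", "Brazil", 31), ("Liam", "Ireland", 24), ("Mei", "China", 29)],
)

# Natural language question: "How many singers are younger than 30?"
# A text-to-SQL system would map it to a query such as:
sql = "SELECT COUNT(*) FROM singer WHERE age < 30"
print(conn.execute(sql).fetchone()[0])  # -> 2
```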
We explore using T5 (Raffel et al., 2019) to directly translate natural language questions into SQL statements.
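A minimal inference sketch with the Hugging Face transformers library, assuming T5 has already been fine-tuned on question-to-SQL pairs; the checkpoint path and the input serialization format below are assumptions for illustration, not the paper's exact setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder: in practice T5 must first be fine-tuned on
# (question + schema, SQL) pairs, e.g. from a benchmark like Spider.
checkpoint = "path/to/t5-finetuned-on-text-to-sql"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# One common input format: the question followed by a serialized schema.
question = "How many singers are younger than 30?"
schema = "singer : name, country, age"
inputs = tokenizer(f"translate to SQL: {question} | {schema}",
                   return_tensors="pt")

# Greedy decoding; the fine-tuned model emits the SQL string directly.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```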