( Image credit: SyntaxSQLNet )
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks.
We study the task of semantic parse correction with natural language feedback.
Text-to-SQL is the problem of translating a user's natural language question into an SQL query, given the question and the database schema.
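As a concrete illustration of the input/output contract (a hypothetical toy example with a made-up schema, not drawn from any of the papers referenced here), a text-to-SQL system maps a question over a database to an executable query:

```python
import sqlite3

# Build a toy in-memory database (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO singer VALUES (?, ?)",
    [("Adele", "UK"), ("Bono", "Ireland"), ("Sting", "UK")],
)

# A text-to-SQL model would predict the SQL from the question and schema;
# here the mapping is hard-coded just to show the task's input and output.
question = "How many singers are from the UK?"
sql = "SELECT COUNT(*) FROM singer WHERE country = 'UK'"

(count,) = conn.execute(sql).fetchone()
print(count)  # 2
```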
We demonstrate the effectiveness of the re-ranker by applying it to two state-of-the-art text-to-SQL models, achieving a top-4 score on the Spider leaderboard at the time of writing.
In addition, we observe qualitative improvements in the model’s understanding of schema linking and alignment.
The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query.
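A minimal sketch of challenge (b), aligning question tokens with database columns by lexical overlap (a naive baseline for illustration; the parsers described here learn this alignment rather than string-match it):

```python
def link_schema(question, columns):
    """Match question tokens to database column names by exact word
    overlap with the columns' underscore-separated parts -- a naive
    lexical schema linker used only to illustrate the alignment problem."""
    tokens = question.lower().replace("?", "").split()
    links = {}
    for col in columns:
        col_words = col.lower().split("_")
        for tok in tokens:
            if tok in col_words:
                links.setdefault(col, []).append(tok)
    return links

# Hypothetical schema and question for illustration.
columns = ["singer_name", "country", "song_release_year"]
print(link_schema("What year was each song released by the singer?", columns))
```

Real systems go beyond exact matches, handling paraphrases, value mentions, and multi-word column names, which is why alignment is modeled rather than rule-based.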
Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.
We present CoSQL, a corpus for building cross-domain, general-purpose database (DB) querying dialogue systems.
One key component of text-to-SQL is predicting the comparison relations between columns and their values.
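For instance (a hypothetical rule-based heuristic, not the method of any paper above), comparative phrases in a question can be mapped to SQL comparison operators and values:

```python
import re

# Pattern table mapping comparative phrases to SQL operators.
# An illustrative heuristic; real models learn these relations.
PATTERNS = [
    (r"(older|more|greater|higher|larger) than (\d+)", ">"),
    (r"(younger|fewer|less|lower|smaller) than (\d+)", "<"),
    (r"(at least) (\d+)", ">="),
    (r"(at most) (\d+)", "<="),
    (r"(exactly|equal to) (\d+)", "="),
]

def predict_comparison(question):
    """Return (operator, value) for the first comparative phrase found,
    or None when the question contains no recognized comparison."""
    for pattern, op in PATTERNS:
        m = re.search(pattern, question.lower())
        if m:
            return op, int(m.group(2))
    return None

print(predict_comparison("List students older than 18"))  # ('>', 18)
```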