no code implementations • 24 Mar 2024 • Mohammadreza Pourreza, Davood Rafiei, Yuxi Feng, Raymond Li, Zhenan Fan, Weiwei Zhang
Furthermore, compared to these competitive models, our proposed encoder enhances the downstream performance of NL2SQL models in 1-shot in-context learning scenarios by 1-2% for GPT-3.5-turbo, 4-8% for CodeLlama-7B, and 2-3% for CodeLlama-13B.
no code implementations • 2 Feb 2024 • Mohammadreza Pourreza, Davood Rafiei
Leading models for the text-to-SQL task heavily rely on proprietary Large Language Models (LLMs), posing concerns over data privacy.
no code implementations • 27 Oct 2023 • Mohammadreza Pourreza, Davood Rafiei
In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and re-evaluate some of the top-performing models on these benchmarks, both by manually evaluating the SQL queries and by rewriting them as equivalent expressions.
1 code implementation • NeurIPS 2023 • Mohammadreza Pourreza, Davood Rafiei
In particular, we show that breaking down the generation problem into sub-problems and feeding the solutions of those sub-problems into LLMs can be an effective approach for significantly improving their performance.
Ranked #3 on Text-to-SQL on Spider
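The decomposition idea above can be sketched as a pipeline where each sub-problem's answer is fed into the next prompt. This is a minimal illustrative sketch, not the paper's actual prompts or models: the sub-task names (schema linking, difficulty classification, SQL generation) and the stub `llm` function are assumptions standing in for real LLM calls.

```python
# Hypothetical sketch of decomposed prompting for text-to-SQL: break the
# generation problem into sub-problems and feed each solution into the
# prompt for the next step. The `llm` stub below replaces a real model call.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned answers per sub-task."""
    if prompt.startswith("Schema linking"):
        return "tables: singer; columns: singer.age"
    if prompt.startswith("Classify"):
        return "EASY"  # e.g. no joins or nesting required
    return "SELECT AVG(age) FROM singer"

def text_to_sql(question: str, schema: str) -> str:
    # Sub-problem 1: link the question to relevant schema elements.
    links = llm(f"Schema linking for: {question}\nSchema: {schema}")
    # Sub-problem 2: classify query difficulty to choose a strategy.
    difficulty = llm(f"Classify difficulty given links: {links}")
    # Sub-problem 3: generate SQL conditioned on earlier solutions.
    return llm(
        f"Write SQL.\nQuestion: {question}\n"
        f"Links: {links}\nDifficulty: {difficulty}"
    )

sql = text_to_sql("What is the average age of singers?", "singer(name, age)")
print(sql)  # SELECT AVG(age) FROM singer
```

The key design point is that intermediate outputs (linked schema, difficulty class) are plain text, so they can be threaded into later prompts without any model-specific machinery.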