Natural Language to Structured Query Generation via Meta-Learning

In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%-5.4% absolute accuracy gains over the non-meta-learning counterparts.
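The abstract describes building one pseudo-task per example: a relevance function retrieves a small support set for each target example, the model adapts on that support set with a few gradient steps, and the adapted model is evaluated on the target example to update the shared initialization (MAML-style). The sketch below illustrates this loop in PyTorch with a toy linear classifier standing in for the semantic parser; the `relevance` function, the support-set size `K`, the inner learning rate, and the random feature data are all illustrative assumptions, not the authors' actual model or relevance function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a small classifier and random feature vectors play the role
# of the NL-to-SQL parser and its training examples (both hypothetical here).
model = nn.Linear(16, 4)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1
K = 2  # support-set size retrieved by the relevance function

examples = [(torch.randn(16), torch.randint(0, 4, (1,)).item()) for _ in range(100)]

def relevance(query_x, pool, k):
    # Hypothetical relevance function: cosine similarity in feature space.
    # The paper uses a domain-dependent relevance function over the inputs.
    sims = torch.stack([F.cosine_similarity(query_x, x, dim=0) for x, _ in pool])
    idx = sims.topk(k).indices.tolist()
    return [pool[i] for i in idx]

for x_q, y_q in examples:
    # 1. Build a pseudo-task: the target example is the "query"; the K most
    #    relevant training examples form its support set.
    support = relevance(x_q, [e for e in examples if e[0] is not x_q], K)

    # 2. Inner loop: one gradient step on the support set yields fast weights.
    xs = torch.stack([x for x, _ in support])
    ys = torch.tensor([y for _, y in support])
    support_loss = F.cross_entropy(model(xs), ys)
    grads = torch.autograd.grad(support_loss, model.parameters(), create_graph=True)
    fast_weights = [w - inner_lr * g for w, g in zip(model.parameters(), grads)]

    # 3. Outer loop: evaluate the adapted weights on the query example and
    #    update the shared initialization (the meta-parameters).
    logits = F.linear(x_q.unsqueeze(0), fast_weights[0], fast_weights[1])
    query_loss = F.cross_entropy(logits, torch.tensor([y_q]))
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()
```

At test time the same recipe applies: retrieve a support set for the test question, take the inner-loop step, and decode with the adapted weights.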


Datasets

WikiSQL

Results from the Paper


Task             Dataset  Model                         Metric Name           Metric Value  Global Rank
Code Generation  WikiSQL  PT-MAML (Huang et al., 2018)  Execution Accuracy    68.0          # 7
Code Generation  WikiSQL  PT-MAML (Huang et al., 2018)  Exact Match Accuracy  62.8          # 4

Methods


No methods listed for this paper.