no code implementations • 24 May 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova
Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling.
1 code implementation • NAACL 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositional generalization.
1 code implementation • EMNLP 2021 • Panupong Pasupat, Yuan Zhang, Kelvin Guu
In practical applications of semantic parsing, we often want to rapidly change the behavior of the parser, such as enabling it to handle queries in a new domain, or changing its predictions on certain targeted queries.
no code implementations • Findings (EMNLP) 2021 • Jeremy R. Cole, Nanjiang Jiang, Panupong Pasupat, Luheng He, Peter Shaw
The dominant paradigm for semantic parsing in recent years is to formulate parsing as a sequence-to-sequence task, generating predictions with auto-regressive sequence decoders.
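As a concrete picture of that paradigm, here is a minimal sketch of greedy auto-regressive decoding; `next_token_scores` is a hypothetical stand-in for a trained encoder-decoder, and the canned target sequence is invented for illustration.

```python
# Minimal sketch of greedy auto-regressive decoding, as used by
# seq2seq semantic parsers. `next_token_scores` is a placeholder for
# a trained model, not any real system's API.

BOS, EOS = "<s>", "</s>"
CANNED = ["(city", "texas", ")"]   # invented target logical form

def next_token_scores(source_tokens, prefix):
    """Placeholder model: favor the next token of the canned output."""
    step = len(prefix) - 1
    target = CANNED[step] if step < len(CANNED) else EOS
    vocab = ["(city", "texas", ")", EOS]
    return {tok: (1.0 if tok == target else 0.0) for tok in vocab}

def greedy_decode(source_tokens, max_len=20):
    prefix = [BOS]
    while len(prefix) < max_len:
        scores = next_token_scores(source_tokens, prefix)
        best = max(scores, key=scores.get)   # greedy: argmax at each step
        if best == EOS:
            break
        prefix.append(best)                  # prediction feeds back in
    return prefix[1:]

print(greedy_decode("what cities are in texas".split()))
# -> ['(city', 'texas', ')']
```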
no code implementations • ACL 2021 • Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, Yuan Zhang
To address this problem, we introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model.
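To make the span-extraction step concrete, here is a small sketch of selecting the best slot-filler span from per-token start/end scores; the utterance and scores below are made up, and a real QASF model would compute the scores from the utterance plus a slot-describing question.

```python
# Hedged sketch of the span-selection step in a span-based QA model:
# pick the highest-scoring valid span given per-token start and end
# scores. The scores here are toy values, not model outputs.

def best_span(start_scores, end_scores, max_span_len=5):
    """Return (start, end) maximizing start_scores[s] + end_scores[e]
    subject to s <= e < s + max_span_len."""
    best, best_score = None, float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_span_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

utterance = ["book", "a", "table", "for", "two", "at", "noon"]
# Toy scores pretending the model answered "what time?" for a TIME slot.
start = [0.0, 0.0, 0.1, 0.0, 0.2, 0.1, 2.5]
end   = [0.0, 0.1, 0.0, 0.0, 0.3, 0.0, 2.8]
s, e = best_span(start, end)
print(utterance[s:e + 1])  # -> ['noon']
```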
1 code implementation • 15 Apr 2021 • Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.
no code implementations • NAACL 2021 • Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, Qi Li
Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.
1 code implementation • ACL 2021 • Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova
This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.
1 code implementation • 12 Jul 2020 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang
Model-based reinforcement learning (RL) is appealing because (i) it enables planning and thus more strategic exploration, and (ii) by decoupling dynamics from rewards, it enables fast transfer to new reward functions.
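A minimal sketch of point (ii): a single dynamics model re-planned against two different reward functions via random shooting. Everything below (the chain-world dynamics, the planner, the thresholds) is invented for illustration and is not the paper's algorithm.

```python
import random

# Sketch of why decoupling dynamics from rewards helps: one dynamics
# model can be re-planned against any reward function. The hand-coded
# 10-state chain stands in for a learned model.
random.seed(0)

def dynamics(state, action):
    """Stand-in for a learned model: a chain over states 0..9."""
    return max(0, min(9, state + action))

def plan(state, reward_fn, horizon=5, n_candidates=100):
    """Random-shooting planner: sample action sequences, simulate them
    with the dynamics model, return the first action of the best one."""
    best_seq, best_return = None, float("-inf")
    for _ in range(n_candidates):
        seq = [random.choice([-1, +1]) for _ in range(horizon)]
        s, ret = state, 0.0
        for a in seq:
            s = dynamics(s, a)
            ret += reward_fn(s)
        if ret > best_return:
            best_seq, best_return = seq, ret
    return best_seq[0]

# Transfer to a new task = swap the reward function; no re-learning.
print(plan(5, reward_fn=lambda s: 1.0 if s == 9 else 0.0))  # +1: move right
print(plan(5, reward_fn=lambda s: 1.0 if s == 0 else 0.0))  # -1: move left
```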
5 code implementations • 10 Feb 2020 • Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
Ranked #5 on Question Answering on WebQuestions
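A loose sketch of the retrieve-then-read idea behind REALM, reduced to inner-product retrieval over toy document embeddings; this omits the paper's masked-language-model pretraining objective and the joint training of the retriever, and all names and values below are invented.

```python
import numpy as np

# Loose sketch of retrieval-augmented prediction: score documents by
# inner product with a query embedding, then condition the reader on
# the retrieved text. Embeddings here are toy values, not a trained
# encoder's output.

docs = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]
doc_emb = np.array([[0.9, 0.1], [0.1, 0.9]])   # pretend dense embeddings

def embed_query(query):
    """Placeholder query encoder (a real system would use a trained model)."""
    return np.array([0.8, 0.2]) if "France" in query else np.array([0.2, 0.8])

def retrieve(query, k=1):
    scores = doc_emb @ embed_query(query)      # inner-product relevance
    top = np.argsort(-scores)[:k]
    return [(docs[i], float(scores[i])) for i in top]

print(retrieve("What is the capital of France?"))
# -> [('Paris is the capital of France.', 0.74)]
```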
no code implementations • IJCNLP 2019 • Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer
We propose a semantic parser for parsing compositional utterances into Task Oriented Parse (TOP), a tree representation that has intents and slots as labels of nesting tree nodes.
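One way to make the TOP representation concrete is as a small recursive tree type; the example parse is the standard "Driving directions to the Eagles game" TOP example, while the `Node` encoding itself is just one possible choice.

```python
from dataclasses import dataclass, field

# Illustration of the TOP representation: a tree whose nodes carry
# intent (IN:) and slot (SL:) labels and whose leaves are utterance
# tokens. The dataclass is one possible encoding, not the paper's code.

@dataclass
class Node:
    label: str                                    # e.g. "IN:GET_DIRECTIONS"
    children: list = field(default_factory=list)  # Nodes or token strings

    def linearize(self):
        parts = [c.linearize() if isinstance(c, Node) else c
                 for c in self.children]
        return f"[{self.label} {' '.join(parts)} ]"

# "Driving directions to the Eagles game"
parse = Node("IN:GET_DIRECTIONS", [
    "Driving", "directions", "to",
    Node("SL:DESTINATION", [
        Node("IN:GET_EVENT",
             ["the", Node("SL:NAME_EVENT", ["Eagles"]), "game"]),
    ]),
])
print(parse.linearize())
```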
no code implementations • NeurIPS 2019 • Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, Percy Liang
Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation.
Ranked #2 on Program Synthesis on SPoC TestP
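The search-and-validate loop can be sketched in a few lines: enumerate candidate translations, run each against the test cases, and return the first that passes. The candidates here are hard-coded stand-ins for outputs of a translation model.

```python
# Sketch of search plus test-case validation for pseudocode-to-code
# translation. A real system draws candidates from a translation
# model; here they are hard-coded for illustration.

def passes_tests(program_src, test_cases):
    """Compile the candidate, then check it on every (input, output) pair."""
    namespace = {}
    try:
        exec(program_src, namespace)          # may raise SyntaxError etc.
        fn = namespace["solve"]
        return all(fn(x) == y for x, y in test_cases)
    except Exception:
        return False                          # invalid candidates just fail

def search(candidates, test_cases):
    return next((c for c in candidates if passes_tests(c, test_cases)), None)

# Pseudocode: "set y to x doubled". Two wrong guesses, one correct.
candidates = [
    "def solve(x): return x ** 2",
    "def solve(x): return x + 2",
    "def solve(x): return 2 * x",
]
print(search(candidates, [(3, 6), (5, 10)]))  # -> the third candidate
```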
no code implementations • ICLR 2019 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang
In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP).
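A very loose sketch of the manager's bookkeeping, with an invented four-state chain and an invented visit-count threshold; the real method's criteria for adding abstract states are more involved.

```python
from collections import Counter

# Loose sketch: keep an abstract MDP over states visited often enough
# to trust, and grow it by directing exploration at frontier states
# just outside the known set. Environment and threshold are invented.

ADJACENT = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

known = {"A"}                       # abstract states inside the abstract MDP
visit_counts = Counter()
RELIABLE = 3                        # visits needed before a state is "known"

def frontier():
    """Abstract states reachable in one step from the known set."""
    return {t for s in known for t in ADJACENT[s] if t not in known}

while frontier():
    target = sorted(frontier())[0]  # targeted exploration, not random
    visit_counts[target] += 1       # stand-in for actually reaching it
    if visit_counts[target] >= RELIABLE:
        known.add(target)           # the abstract MDP grows monotonically

print(sorted(known))                # -> ['A', 'B', 'C', 'D']
```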
no code implementations • 15 Feb 2019 • Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer
Semantic parsing using hierarchical representations has recently been proposed for task-oriented dialog with promising results [Gupta et al., 2018].
2 code implementations • EMNLP 2018 • Panupong Pasupat, Tian-Shun Jiang, Evan Zheran Liu, Kelvin Guu, Percy Liang
The web provides a rich, open-domain environment with textual, structural, and spatial properties.
3 code implementations • ICLR 2018 • Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.
2 code implementations • EMNLP 2017 • Yuchen Zhang, Panupong Pasupat, Percy Liang
To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations.
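The consistency search can be illustrated with a toy logical-form space: enumerate candidates, execute each, and keep those matching the annotated denotation. Note the spurious forms that also match, which is exactly what makes this setting hard. The tiny arithmetic "language" below is invented, standing in for a real table-query language.

```python
from itertools import product

# Sketch of the consistency check at the heart of learning from
# denotations: execute each candidate logical form and keep those
# whose denotation matches the annotation.

def execute(logical_form, world):
    op, a, b = logical_form
    va, vb = world[a], world[b]
    return va + vb if op == "add" else va * vb

world = {"x": 2, "y": 3, "z": 5}
annotated_denotation = 6

# Combinatorial space: every (op, arg, arg) triple.
space = product(["add", "mul"], world, world)
consistent = [lf for lf in space
              if execute(lf, world) == annotated_denotation]
print(consistent)
# -> [('add', 'y', 'y'), ('mul', 'x', 'y'), ('mul', 'y', 'x')]
# Spurious forms hit the right answer for the wrong reasons.
```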
3 code implementations • ACL 2017 • Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang
Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself.
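One standard objective for this setting, and one of the two the paper relates, is maximum marginal likelihood over the consistent programs; here is a sketch with made-up model scores and a made-up consistent set.

```python
import math

# Hedged sketch of the maximum marginal likelihood (MML) objective:
# with no gold program, maximize the total probability the model
# assigns to all programs whose execution result matches the label.

def mml_loss(scores, consistent_ids):
    """-log sum_{z consistent} p(z | x), with p given by a softmax."""
    log_Z = math.log(sum(math.exp(s) for s in scores.values()))
    log_probs = {z: s - log_Z for z, s in scores.items()}
    return -math.log(sum(math.exp(log_probs[z]) for z in consistent_ids))

scores = {"z1": 2.0, "z2": 0.5, "z3": -1.0}   # toy scores for candidates
print(mml_loss(scores, consistent_ids={"z1", "z3"}))  # ~0.193
```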
2 code implementations • ACL 2016 • Panupong Pasupat, Percy Liang
A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space.
1 code implementation • ACL 2016 • Reginald Long, Panupong Pasupat, Percy Liang
With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances.
3 code implementations • IJCNLP 2015 • Panupong Pasupat, Percy Liang
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality.