Search Results for author: Panupong Pasupat

Found 31 papers, 16 papers with code

Compositional Semantic Parsing on Semi-Structured Tables

4 code implementations IJCNLP 2015 Panupong Pasupat, Percy Liang

Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality.

Question Answering Semantic Parsing

Simpler Context-Dependent Logical Forms via Model Projections

1 code implementation ACL 2016 Reginald Long, Panupong Pasupat, Percy Liang

With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances.

Semantic Parsing

Inferring Logical Forms From Denotations

2 code implementations ACL 2016 Panupong Pasupat, Percy Liang

A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space.
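The abstract describes filtering a large candidate space down to "consistent" logical forms by executing them against the labeled denotation. A minimal sketch of that consistency filter, with a toy table and hypothetical logical forms written as Python callables (the paper itself uses lambda-DCS over semi-structured tables):

```python
# Toy illustration of filtering candidate logical forms by denotation.
table = [
    {"city": "Paris", "country": "France", "population": 2_100_000},
    {"city": "Lyon", "country": "France", "population": 500_000},
    {"city": "Berlin", "country": "Germany", "population": 3_600_000},
]

# Hypothetical candidates for the utterance "largest city in France".
candidates = {
    "argmax(population, country=France)": lambda t: max(
        (r for r in t if r["country"] == "France"),
        key=lambda r: r["population"])["city"],
    "argmin(population, country=France)": lambda t: min(
        (r for r in t if r["country"] == "France"),
        key=lambda r: r["population"])["city"],
    "argmax(population, all)": lambda t: max(
        t, key=lambda r: r["population"])["city"],
}

def consistent(candidates, table, denotation):
    """Keep only logical forms whose execution yields the labeled denotation."""
    return [name for name, lf in candidates.items()
            if lf(table) == denotation]

print(consistent(candidates, table, "Paris"))
```

Only the first candidate executes to "Paris", so it alone survives the filter; the paper's contribution is doing this pruning efficiently over a combinatorially large space.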

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

3 code implementations ACL 2017 Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang

Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself.

reinforcement-learning Reinforcement Learning (RL) +1
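With only execution results as supervision, one standard objective this paper analyzes is maximum marginal likelihood (MML): maximize the total probability the model assigns to *any* program that executes to the labeled result. A hedged numeric sketch, with hypothetical programs and scores:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# (program, model score, execution result) triples for one utterance.
candidates = [
    ("add(2, 3)", 2.0, 5),
    ("mul(2, 3)", 1.0, 6),
    ("sub(8, 3)", 0.5, 5),
]
label = 5  # only the execution result is observed, not the program

probs = softmax([s for _, s, _ in candidates])
# MML marginalizes over all programs with the correct execution result.
marginal = sum(p for (_, _, result), p in zip(candidates, probs)
               if result == label)
log_likelihood = math.log(marginal)
print(round(marginal, 3))
```

Both `add(2, 3)` and `sub(8, 3)` execute to 5, so their probabilities are summed; an RL-style objective would instead reward sampled programs individually, which is the gap the paper bridges.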

Macro Grammars and Holistic Triggering for Efficient Semantic Parsing

2 code implementations EMNLP 2017 Yuchen Zhang, Panupong Pasupat, Percy Liang

To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations.

Semantic Parsing Sentence +1

Reinforcement Learning on Web Interfaces Using Workflow-Guided Exploration

4 code implementations ICLR 2018 Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang

Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.

reinforcement-learning Reinforcement Learning (RL)

Improving Semantic Parsing for Task Oriented Dialog

no code implementations 15 Feb 2019 Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer
no code implementations 15 Feb 2019 Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer

Semantic parsing using hierarchical representations has recently been proposed for task oriented dialog with promising results [Gupta et al. 2018].

Language Modelling Re-Ranking +1

Learning Abstract Models for Long-Horizon Exploration

no code implementations ICLR 2019 Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP).

Atari Games

SPoC: Search-based Pseudocode to Code

1 code implementation NeurIPS 2019 Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, Percy Liang

Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation.

Program Synthesis Translation
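The abstract describes the core loop: enumerate candidate translations of the pseudocode and keep the first program that passes the test cases. A minimal sketch under simplified assumptions (hypothetical per-line candidate lists and Python targets; SPoC translates to C++ line by line with a learned model):

```python
import itertools

# Candidate translations for each pseudocode line of
# "set r to n; double r" (hypothetical).
line_candidates = [
    ["r = n", "r = 0"],          # "set r to n"
    ["r = r * 2", "r = r + 2"],  # "double r"
]

tests = [({"n": 3}, 6), ({"n": 5}, 10)]  # (inputs, expected value of r)

def passes(program, tests):
    """Validate a candidate program against all test cases."""
    for inputs, expected in tests:
        env = dict(inputs)
        exec(program, {}, env)
        if env.get("r") != expected:
            return False
    return True

def search(line_candidates, tests):
    """Search the cross-product of per-line translations."""
    for lines in itertools.product(*line_candidates):
        program = "\n".join(lines)
        if passes(program, tests):
            return program
    return None

print(search(line_candidates, tests))
```

The search space grows exponentially in the number of lines, which is why the paper orders candidates by model confidence rather than enumerating naively as above.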

Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog

no code implementations IJCNLP 2019 Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer

We propose a semantic parser for parsing compositional utterances into Task Oriented Parse (TOP), a tree representation that has intents and slots as labels of nesting tree nodes.

Semantic Parsing

REALM: Retrieval-Augmented Language Model Pre-Training

6 code implementations 10 Feb 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +3

Learning Abstract Models for Strategic Exploration and Fast Reward Transfer

1 code implementation 12 Jul 2020 Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang

Model-based reinforcement learning (RL) is appealing because (i) it enables planning and thus more strategic exploration, and (ii) by decoupling dynamics from rewards, it enables fast transfer to new reward functions.

Model-based Reinforcement Learning Montezuma's Revenge +2

Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?

1 code implementation ACL 2021 Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova

This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.

Semantic Parsing

Few-shot Intent Classification and Slot Filling with Retrieved Examples

no code implementations NAACL 2021 Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, Qi Li

Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.

Classification Few-Shot Learning +8

Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations

2 code implementations 15 Apr 2021 Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang

Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.

Semantic Parsing Text-To-SQL

QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining

no code implementations ACL 2021 Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, Yuan Zhang

To address this problem, we introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model.

slot-filling Zero-shot Slot Filling

Graph-Based Decoding for Task Oriented Semantic Parsing

no code implementations Findings (EMNLP) 2021 Jeremy R. Cole, Nanjiang Jiang, Panupong Pasupat, Luheng He, Peter Shaw

The dominant paradigm for semantic parsing in recent years is to formulate parsing as a sequence-to-sequence task, generating predictions with auto-regressive sequence decoders.

Dependency Parsing Semantic Parsing

Controllable Semantic Parsing via Retrieval Augmentation

1 code implementation EMNLP 2021 Panupong Pasupat, Yuan Zhang, Kelvin Guu

In practical applications of semantic parsing, we often want to rapidly change the behavior of the parser, such as enabling it to handle queries in a new domain, or changing its predictions on certain targeted queries.

Retrieval Semantic Parsing

Meta-Learning Fast Weight Language Models

no code implementations 5 Dec 2022 Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, Mohammad Norouzi

Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance.

Language Modelling Meta-Learning
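Dynamic evaluation, the technique this paper builds on, updates model parameters online at test time using the gradient of the loss on each token just observed. A toy sketch with a bigram model over a hypothetical three-token vocabulary (the paper meta-learns this update rather than using the plain SGD step shown here):

```python
import math

vocab = ["a", "b", "c"]
idx = {t: i for i, t in enumerate(vocab)}
# logits[i][j]: score of token j following token i
logits = [[0.0] * len(vocab) for _ in vocab]

def probs(prev):
    """Softmax over next-token logits given the previous token."""
    row = logits[idx[prev]]
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    z = sum(exps)
    return [e / z for e in exps]

def dynamic_eval(tokens, lr=0.5):
    """Score a sequence while adapting parameters after each token."""
    total_nll = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = probs(prev)
        total_nll += -math.log(p[idx[cur]])
        # SGD step on this token's loss: grad = p - one_hot(cur)
        row = logits[idx[prev]]
        for j in range(len(vocab)):
            row[j] -= lr * (p[j] - (1.0 if j == idx[cur] else 0.0))
    return total_nll

first = dynamic_eval(list("ababab"))
second = dynamic_eval(list("ababab"))
print(second < first)  # adapted parameters fit the repeated pattern better
```

Because the sequence repeats the same transitions, each gradient step lowers the loss on later occurrences, which is the effect dynamic evaluation exploits.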

Dr.ICL: Demonstration-Retrieved In-context Learning

no code implementations 23 May 2023 Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao

In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs.

In-Context Learning Language Modelling +2
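The mechanism named in the title, retrieving demonstrations for the test input rather than using a fixed few-shot set, can be sketched minimally. The token-overlap retriever and example pool below are hypothetical simplifications; the paper uses learned retrievers:

```python
# Sketch of demonstration-retrieved in-context learning: select the
# pool examples most similar to the query and format them as a
# few-shot prompt for an LLM.

pool = [
    ("what is the capital of France", "Paris"),
    ("what is 2 plus 2", "4"),
    ("what is the capital of Japan", "Tokyo"),
]

def overlap(a, b):
    """Crude similarity: number of shared tokens."""
    return len(set(a.split()) & set(b.split()))

def build_prompt(query, pool, k=2):
    ranked = sorted(pool, key=lambda ex: overlap(query, ex[0]), reverse=True)
    lines = [f"Q: {q}\nA: {a}" for q, a in ranked[:k]]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

prompt = build_prompt("what is the capital of Italy", pool)
print(prompt)
```

For the capital-of-Italy query, the two capital-of demonstrations outrank the arithmetic one, so the prompt shows the model task-relevant examples.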

PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions

no code implementations 24 May 2023 Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu

These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose.

Denoising Language Modelling

From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces

1 code implementation NeurIPS 2023 Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova

Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available.

Instruction Following

Large Language Models as Analogical Reasoners

no code implementations 3 Oct 2023 Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Denny Zhou

Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically needs labeled exemplars of the reasoning process.

Code Generation GSM8K +1

In-context Learning with Retrieved Demonstrations for Language Models: A Survey

no code implementations 21 Jan 2024 Man Luo, Xin Xu, Yue Liu, Panupong Pasupat, Mehran Kazemi

Language models, especially pre-trained large language models, have showcased remarkable abilities as few-shot in-context learners (ICL), adept at adapting to new tasks with just a few demonstrations in the input context.

In-Context Learning Retrieval

Retrieval Augmented Language Model Pre-Training

no code implementations ICML 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +3
