Search Results for author: Roma Patel

Found 17 papers, 5 papers with code

States as Strings as Strategies: Steering Language Models with Game-Theoretic Solvers

1 code implementation • 24 Jan 2024 • Ian Gemp, Yoram Bachrach, Marc Lanctot, Roma Patel, Vibhavari Dasagi, Luke Marris, Georgios Piliouras, SiQi Liu, Karl Tuyls

A suitable model of the players, strategies, and payoffs associated with linguistic interactions (i.e., a binding to the conventional symbolic logic of game theory) would enable existing game-theoretic algorithms to provide strategic solutions in the space of language.

Imitation Learning

Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches

no code implementations • 15 Nov 2022 • Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh

People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.

Grounded language learning

RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents

no code implementations • 12 Aug 2022 • Rafael Rodriguez-Sanchez, Benjamin A. Spiegel, Jennifer Wang, Roma Patel, Stefanie Tellex, George Konidaris

We define precise syntax and grounding semantics for RLang, and provide a parser that grounds RLang programs to an algorithm-agnostic partial world model and policy that can be exploited by an RL agent.

Decision Making · reinforcement-learning +2

Generalizing to New Domains by Mapping Natural Language to Lifted LTL

no code implementations • 11 Oct 2021 • Eric Hsiung, Hiloni Mehta, Junchi Chu, Xinyu Liu, Roma Patel, Stefanie Tellex, George Konidaris

We compare our method, which maps natural language task specifications to intermediate contextual queries, against state-of-the-art CopyNet models capable of translating natural language to LTL. We evaluate whether correct LTL can be produced for manipulation and navigation task specifications, and show that our method outperforms the CopyNet model on unseen object references.
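
As a generic illustration of the target formalism (not an example drawn from the paper), a navigation instruction such as "go to the red room, then the blue room" can be written as the LTL formula

    F(red_room ∧ F(blue_room))

where F is the "eventually" operator; a lifted formula would abstract the concrete room propositions into typed placeholders that can be re-bound to object references in a new domain.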

Mapping Language Models to Grounded Conceptual Spaces

no code implementations • ICLR 2022 • Roma Patel, Ellie Pavlick

A fundamental criticism of text-only language models (LMs) is their lack of grounding: that is, the ability to tie a word for which they have learned a representation to its actual use in the world.

Robot Object Retrieval with Contextual Natural Language Queries

1 code implementation • 23 Jun 2020 • Thao Nguyen, Nakul Gopalan, Roma Patel, Matt Corsaro, Ellie Pavlick, Stefanie Tellex

The model takes in a language command containing a verb, for example "Hand me something to cut," along with RGB images of candidate objects, and selects the object that best satisfies the task specified by the verb.

Natural Language Queries · Object +1

Planning with State Abstractions for Non-Markovian Task Specifications

2 code implementations • 28 May 2019 • Yoonseon Oh, Roma Patel, Thao Nguyen, Baichuan Huang, Ellie Pavlick, Stefanie Tellex

Often, we specify tasks for a robot using temporal language that can also span different levels of abstraction.

Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling

no code implementations • ICLR 2019 • Samuel R. Bowman, Ellie Pavlick, Edouard Grave, Benjamin Van Durme, Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen

Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).

Language Modelling · Sentence

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension

no code implementations • SEMEVAL 2019 • Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick

Our results show that pretraining on language modeling performs best on average across our probing tasks, supporting its widespread use for pretraining state-of-the-art NLP models, and that CCG supertagging and NLI pretraining perform comparably.

CCG Supertagging · Language Modelling +3

Syntactic Patterns Improve Information Extraction for Medical Search

no code implementations • NAACL 2018 • Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova, Byron Wallace

Medical professionals search the published literature by specifying the type of patients, the medical intervention(s) and the outcome measure(s) of interest.
