no code implementations • EMNLP (ACL) 2021 • Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel R. Bowman, Yoav Artzi
Even though crowdsourcing is a fundamental tool in NLP, its use is largely guided by common practices and the personal experience of researchers.
no code implementations • 28 Jan 2023 • Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, Roy Fox
Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world, which makes learning complex tasks with sparse rewards difficult.
no code implementations • 19 Dec 2022 • Alane Suhr, Yoav Artzi
We study the problem of continually training an instruction-following agent through feedback provided by users during collaborative interactions.
no code implementations • 29 Nov 2022 • Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines.
1 code implementation • Findings (EMNLP) 2021 • Anna Effenberger, Eva Yan, Rhia Singh, Alane Suhr, Yoav Artzi
We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise.
no code implementations • 10 Aug 2021 • Noriyuki Kojima, Alane Suhr, Yoav Artzi
We study continual learning for natural language instruction generation, by observing human users' instruction execution.
no code implementations • ACL 2020 • Alane Suhr, Ming-Wei Chang, Peter Shaw, Kenton Lee
We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training.
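A minimal sketch of the cross-database semantic parsing (XSP) setup described above; the schema, utterance, and SQL query below are hypothetical illustrations, not examples drawn from the paper's benchmarks.

```python
# Hypothetical XSP example: an utterance paired with an executable SQL query
# over a database whose schema is unseen during training.
from dataclasses import dataclass


@dataclass
class XSPExample:
    """One utterance paired with an executable SQL query over a target database."""
    database_id: str   # in the XSP setting, this database is unseen during training
    schema: dict       # table name -> list of column names
    utterance: str     # natural language question
    sql: str           # gold executable query


example = XSPExample(
    database_id="concerts",  # hypothetical database
    schema={"singer": ["singer_id", "name", "country"],
            "concert": ["concert_id", "singer_id", "year"]},
    utterance="How many singers performed in 2019?",
    sql="SELECT COUNT(DISTINCT singer_id) FROM concert WHERE year = 2019",
)

# At evaluation time, an XSP system must generalize to example.database_id even
# though no training utterances reference this schema.
print(example.utterance, "->", example.sql)
```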
no code implementations • IJCNLP 2019 • Alane Suhr, Claudia Yan, Charlotte Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, Yoav Artzi
We study a collaborative scenario where a user not only instructs a system to complete tasks, but also acts alongside it.
1 code implementation • 23 Sep 2019 • Alane Suhr, Yoav Artzi
We show that the performance of existing models (Li et al., 2019; Tan and Bansal, 2019) is relatively robust to this potential bias.
4 code implementations • CVPR 2019 • Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi
We study the problem of jointly reasoning about language and vision through a navigation and spatial reasoning task.
Ranked #10 on Vision and Language Navigation on Touchdown Dataset
2 code implementations • ACL 2019 • Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, Yoav Artzi
We crowdsource the data using sets of visually rich images and a compare-and-contrast task to elicit linguistically diverse language.
no code implementations • ACL 2018 • Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, Luke Zettlemoyer
Semantic parsing, the study of translating natural language utterances into machine-executable programs, is a well-established research area and has applications in question answering, instruction following, voice assistants, and code generation.
1 code implementation • ACL 2018 • Alane Suhr, Yoav Artzi
We propose a learning approach for mapping context-dependent sequential instructions to actions.
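A minimal sketch of the context-dependent instruction-to-action setting described above, using a hypothetical block-moving environment; the action names and the simple executor are illustrative, not the paper's model.

```python
# Each instruction is interpreted in the state left by the previous instructions,
# which is what makes the mapping context-dependent.
from typing import List, Tuple

interaction: List[Tuple[str, List[str]]] = [
    ("pick up the red block", ["GRAB(red)"]),
    ("put it on the blue block", ["MOVE(red, blue)", "RELEASE(red)"]),
    ("now stack the green one on top", ["GRAB(green)", "MOVE(green, red)", "RELEASE(green)"]),
]

state: List[str] = []  # executed action history stands in for world state here
for instruction, actions in interaction:
    # A learned policy would condition on (instruction, state) to predict actions.
    state.extend(actions)
    print(f"{instruction!r} -> {actions}")
```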
1 code implementation • NAACL 2018 • Alane Suhr, Srinivasan Iyer, Yoav Artzi
We propose a context-dependent model to map utterances within an interaction to executable formal queries.
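To illustrate what "context-dependent" means here, a small sketch in the spirit of a flight-booking interaction; the utterances and SQL below are hypothetical, not taken from the paper's data.

```python
# A follow-up utterance is only interpretable given the preceding query: the
# parser must carry over constraints from the interaction history.
interaction = [
    ("show me flights from Boston to Denver",
     "SELECT * FROM flight WHERE origin = 'BOS' AND dest = 'DEN'"),
    ("only the ones on Monday",
     "SELECT * FROM flight WHERE origin = 'BOS' AND dest = 'DEN' AND day = 'MON'"),
]

history = []
for utterance, query in interaction:
    # A context-dependent parser conditions on the utterance and the history.
    history.append((utterance, query))
    print(f"{utterance!r} -> {query}")
```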
no code implementations • 2 Oct 2017 • Stephanie Zhou, Alane Suhr, Yoav Artzi
To understand language in complex environments, agents must reason about the full range of language inputs and their correspondence to the world.
no code implementations • ACL 2017 • Alane Suhr, Mike Lewis, James Yeh, Yoav Artzi
We present a new visual reasoning language dataset, containing 92,244 pairs of examples of natural statements grounded in synthetic images with 3,962 unique sentences.
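A minimal sketch of the kind of example this dataset pairs together: a natural language statement, a synthetic image, and a truth value. The field names and the sample below are hypothetical, for illustration only.

```python
# Hypothetical data structure for one visual reasoning example.
from dataclasses import dataclass


@dataclass
class VisualReasoningExample:
    sentence: str     # a natural language statement about the image
    image_path: str   # rendering of the synthetic image the statement describes
    label: bool       # whether the statement is true of the image


sample = VisualReasoningExample(
    sentence="There is exactly one black triangle not touching any edge.",  # hypothetical
    image_path="images/train/0001.png",                                     # hypothetical
    label=True,
)

# A model for this task reads (sentence, image) and predicts the label.
print(sample.sentence, "->", sample.label)
```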