no code implementations • 8 Mar 2024 • Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li
We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions.
1 code implementation • 6 Mar 2024 • Rose E. Wang, Pawan Wirawarn, Omar Khattab, Noah Goodman, Dorottya Demszky
While information retrieval (IR) systems may provide answers for such user queries, they do not directly assist content creators -- such as lecturers who want to improve their content -- in identifying the segments that _caused_ a user to ask those questions.
1 code implementation • 17 Oct 2023 • Belinda Z. Li, Alex Tamkin, Noah Goodman, Jacob Andreas
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
1 code implementation • 15 Jun 2023 • Rose E. Wang, Pawan Wirawarn, Noah Goodman, Dorottya Demszky
To overcome this challenge, we propose a set of best practices for using large language models (LLMs) to cheaply classify the comments at scale.
1 code implementation • 12 Jun 2023 • Megha Srivastava, Noah Goodman, Dorsa Sadigh
AI assistance continues to help advance applications in education, from language learning to intelligent tutoring systems, yet current methods for providing students feedback are still quite limited.
1 code implementation • NeurIPS 2023 • Jesse Mu, Xiang Lisa Li, Noah Goodman
Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient.
1 code implementation • 11 Feb 2023 • Jasmine Bayrooti, Noah Goodman, Alex Tamkin
Contrastive learning methods have been applied to a range of domains and modalities by training models to identify similar "views" of data points.
no code implementations • 20 Dec 2022 • Alex Tamkin, Kunal Handa, Avash Shrestha, Noah Goodman
We investigate how both humans and models behave in the face of such task ambiguity by proposing AmbiBench, a new benchmark of six ambiguously-specified classification tasks.
1 code implementation • 25 Nov 2022 • Megha Srivastava, Erdem Biyik, Suvir Mirchandani, Noah Goodman, Dorsa Sadigh
In this paper, we focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft.
1 code implementation • 16 Nov 2022 • Zhening Li, Gabriel Poesia, Omar Costilla-Reyes, Noah Goodman, Armando Solar-Lezama
Humans tame the complexity of mathematical reasoning by developing hierarchies of abstractions.
no code implementations • 19 May 2022 • Mike Wu, Noah Goodman
Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables.
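For a toy model, this posterior-inference task can be sketched with self-normalized importance sampling (a standard baseline, not the method of this paper; the model and all names here are illustrative):

```python
import numpy as np

def posterior_mean_is(obs, n=5000, seed=0):
    """Self-normalized importance sampling for a toy conjugate model
    (illustrative): latent mu ~ N(0, 1), observation x ~ N(mu, 1).
    Sampling from the prior and weighting by the likelihood gives a
    consistent estimate of the posterior mean E[mu | x = obs]."""
    rng = np.random.default_rng(seed)
    mu = rng.standard_normal(n)            # proposal = prior
    w = np.exp(-0.5 * (obs - mu) ** 2)     # unnormalized likelihood weights
    return float(np.sum(w * mu) / np.sum(w))
```

For this conjugate model the exact posterior mean is obs / 2, so the estimate for obs = 2.0 should land close to 1.0.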
1 code implementation • 3 May 2022 • Julia White, Noah Goodman, Robert Hawkins
Language use differs dramatically from context to context.
no code implementations • 26 Apr 2022 • Rose E. Wang, Mike Wu, Noah Goodman
The teacher must interact with and diagnose the student before teaching.
1 code implementation • 18 Apr 2022 • Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman
Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data.
1 code implementation • ICLR 2022 • Rose E Wang, Esin Durmus, Noah Goodman, Tatsunori Hashimoto
Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks.
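A common concrete instantiation of contrastive learning is an InfoNCE-style objective, where paired "views" of the same data point are pulled together and other batch elements act as negatives. A minimal NumPy sketch (function name and temperature value are illustrative, not from the paper):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings;
    z1[i] and z2[i] are two 'views' of the same underlying data point."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal
```

When the two view batches are aligned, the loss is low; misaligning the positives (e.g., reversing one batch) makes it much higher.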
1 code implementation • 17 Feb 2022 • Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette
Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse.
no code implementations • 10 Dec 2021 • Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin
Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets.
1 code implementation • 23 Nov 2021 • Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman
Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant strides in fields like natural language processing, computer vision, and speech processing.
Ranked #1 on Self-Supervised Learning on DABS
no code implementations • EMNLP 2021 • Julia White, Gabriel Poesia, Robert Hawkins, Dorsa Sadigh, Noah Goodman
An overarching goal of natural language processing is to enable machines to communicate seamlessly with humans.
1 code implementation • 8 Oct 2021 • Oliver Zhang, Mike Wu, Jasmine Bayrooti, Noah Goodman
In this paper, we propose a simple way to generate uncertainty scores for many contrastive methods by re-purposing temperature, a mysterious hyperparameter used for scaling.
no code implementations • 29 Sep 2021 • Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman
An important barrier to the safe deployment of machine learning systems is the risk of \emph{task ambiguity}, where multiple behaviors are consistent with the provided examples.
no code implementations • 26 Aug 2021 • Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman
Item Response Theory (IRT) is a ubiquitous model for understanding human behaviors and attitudes based on their responses to questions.
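The core of IRT can be sketched in a few lines; below is the standard 2-parameter logistic (2PL) item response curve (a textbook formulation, not this paper's specific variant):

```python
import numpy as np

def irt_prob(ability, difficulty, discrimination=1.0):
    """Probability of a correct response under the 2PL IRT model:
    p = sigmoid(discrimination * (ability - difficulty))."""
    return 1.0 / (1.0 + np.exp(-discrimination * (ability - difficulty)))
```

The curve is 0.5 when ability matches difficulty, and rises (falls) as ability exceeds (falls short of) it; discrimination controls the steepness.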
2 code implementations • 16 Aug 2021 • Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.
1 code implementation • 23 Jul 2021 • Mike Wu, Noah Goodman, Chris Piech, Chelsea Finn
High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale.
1 code implementation • NeurIPS 2021 • Gabriel Poesia, WenXin Dong, Noah Goodman
Our results suggest new directions for reinforcement learning in symbolic domains, as well as applications to mathematics education.
1 code implementation • ACL 2021 • Megha Srivastava, Noah Goodman
Intelligent and adaptive online education systems aim to make high-quality education available for a diverse range of students.
1 code implementation • NeurIPS 2021 • Jesse Mu, Noah Goodman
To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referential games.
no code implementations • NeurIPS 2021 • Mike Wu, Noah Goodman, Stefano Ermon
In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together.
1 code implementation • NeurIPS 2020 • Alex Tamkin, Dan Jurafsky, Noah Goodman
Language exhibits structure at different scales, ranging from subwords to words, sentences, paragraphs, and documents.
1 code implementation • ICLR 2021 • Alex Tamkin, Mike Wu, Noah Goodman
However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities.
1 code implementation • ICLR 2021 • Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
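As a rough sketch of conditional negative sampling, one can keep only candidates whose similarity to the anchor falls between two thresholds. The percentile-based "ring" below is illustrative and does not reproduce the paper's exact estimator:

```python
import numpy as np

def ring_negatives(anchor, candidates, lower_pct=50, upper_pct=90):
    """Hypothetical sketch: retain candidates whose cosine similarity to
    the anchor lies in a 'ring' between two percentile thresholds,
    excluding both trivially easy negatives and near-duplicates.
    The percentile choices are illustrative."""
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a
    lo, hi = np.percentile(sims, [lower_pct, upper_pct])
    mask = (sims >= lo) & (sims <= hi)
    return candidates[mask]
```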
no code implementations • 5 Oct 2020 • Mike Wu, Noah Goodman
Contrastive approaches to representation learning have recently shown great promise.
no code implementations • 27 May 2020 • Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, Noah Goodman
Reformulating previous learning objectives in terms of mutual information also simplifies and stabilizes them.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman
How does language model pretraining help transfer learning?
1 code implementation • 1 Feb 2020 • Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology.
no code implementations • 11 Dec 2019 • Mike Wu, Noah Goodman
As part of our derivation we find that many previous multimodal variational autoencoders used objectives that do not correctly bound the joint marginal likelihood across modalities.
2 code implementations • ACL 2020 • Jesse Mu, Percy Liang, Noah Goodman
By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models.
1 code implementation • ACL 2019 • Allen Nie, Erin Bennett, Noah Goodman
Learning effective representations of sentences is one of the core missions of natural language understanding.
no code implementations • ACL 2019 • Bill McDowell, Noah Goodman
Pragmatic reasoning allows humans to go beyond the literal meaning when interpreting language in context.
1 code implementation • 23 May 2019 • Ali Malik, Mike Wu, Vrinda Vasavada, Jinpeng Song, Madison Coots, John Mitchell, Noah Goodman, Chris Piech
In this paper, we present generative grading: a novel computational approach for providing feedback at scale that is capable of accurately grading student work and providing nuanced, interpretable feedback.
1 code implementation • NeurIPS 2019 • Adam Foster, Martin Jankowiak, Eli Bingham, Paul Horsfall, Yee Whye Teh, Tom Rainforth, Noah Goodman
Bayesian optimal experimental design (BOED) is a principled framework for making efficient use of limited experimental resources.
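The central quantity in BOED is the expected information gain (EIG) of a design. A minimal nested Monte Carlo estimator for a toy model (all modeling choices here are illustrative, not the paper's) looks like:

```python
import numpy as np

def eig(design, n_outer=4000, n_inner=4000, seed=0):
    """Nested Monte Carlo estimate of expected information gain for a
    toy experiment: latent theta ~ N(0, 1) and outcome
    y ~ Bernoulli(sigmoid(theta * design))."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    theta = rng.standard_normal(n_outer)
    p = sigmoid(theta * design)
    y = rng.random(n_outer) < p
    log_lik = np.log(np.where(y, p, 1.0 - p))
    # Inner Monte Carlo: marginal p(y | design) under fresh prior draws.
    m = sigmoid(rng.standard_normal(n_inner) * design).mean()
    log_marg = np.log(np.where(y, m, 1.0 - m))
    return float(np.mean(log_lik - log_marg))
```

In this toy model a design of 0 makes the outcome independent of theta (EIG = 0), while larger designs make the outcome more informative.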
1 code implementation • 11 Mar 2019 • Judith Fan, Robert Hawkins, Mike Wu, Noah Goodman
On each trial, both participants were shown the same four objects, but in different locations.
1 code implementation • NAACL 2019 • Reuben Cohn-Gordon, Noah Goodman
A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language.
no code implementations • 8 Feb 2019 • Fritz Obermeyer, Eli Bingham, Martin Jankowiak, Justin Chiu, Neeraj Pradhan, Alexander Rush, Noah Goodman
To exploit efficient tensor algebra in graphs with plates of variables, we generalize undirected factor graphs to plated factor graphs and variable elimination to a tensor variable elimination algorithm that operates directly on plated factor graphs.
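Generic (unplated) variable elimination is exactly a tensor contraction, which `numpy.einsum` can express directly. The toy chain factor graph below is illustrative; the paper's contribution is extending this picture to plated graphs:

```python
import numpy as np

rng = np.random.default_rng(0)
# A small chain factor graph A - B - C with random positive factors.
fAB = rng.random((2, 3))
fBC = rng.random((3, 4))
fC = rng.random(4)

# Variable elimination as tensor contraction: the einsum backend can
# choose an efficient order in which to sum out B and C.
Z = np.einsum("ab,bc,c->", fAB, fBC, fC)

# Brute force over all joint assignments, for comparison.
Z_brute = sum(
    fAB[a, b] * fBC[b, c] * fC[c]
    for a in range(2) for b in range(3) for c in range(4)
)
```

Both computations give the same partition function; the einsum form avoids materializing the full joint tensor.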
1 code implementation • 5 Feb 2019 • Mike Wu, Kristy Choi, Noah Goodman, Stefano Ermon
Despite the recent success in probabilistic modeling and their applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution may be closely related to the ones seen during training.
2 code implementations • NeurIPS 2018 • Shengjia Zhao, Hongyu Ren, Arianna Yuan, Jiaming Song, Noah Goodman, Stefano Ermon
In high dimensional settings, density estimation algorithms rely crucially on their inductive bias.
1 code implementation • 5 Oct 2018 • Mike Wu, Noah Goodman, Stefano Ermon
Stochastic optimization techniques are standard in variational inference algorithms.
1 code implementation • 5 Sep 2018 • Mike Wu, Milan Mosse, Noah Goodman, Chris Piech
Rubric sampling requires minimal teacher effort, can associate feedback with specific parts of a student's solution and can articulate a student's misconceptions in the language of the instructor.
no code implementations • NAACL 2018 • Reuben Cohn-Gordon, Noah Goodman, Christopher Potts
We combine a neural image captioner with a Rational Speech Acts (RSA) model to make a system that is pragmatically informative: its objective is to produce captions that are not merely true but also distinguish their inputs from similar images.
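The RSA recursion itself is compact: a literal listener normalizes truth values over worlds, and a pragmatic speaker soft-maximizes listener accuracy. The toy scene below (two images, two captions) is illustrative, not from the paper:

```python
import numpy as np

# Toy truth-conditional meanings: rows = captions, cols = images.
# "glasses" is true of both images, "hat" only of image 1.
meaning = np.array([
    [1.0, 1.0],   # "glasses"
    [0.0, 1.0],   # "hat"
])

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: P(image | caption), proportional to truth values.
L0 = normalize(meaning, axis=1)

# Pragmatic speaker: P(caption | image), soft-maximizing listener accuracy.
alpha = 1.0  # rationality parameter (illustrative value)
S1 = normalize(L0 ** alpha, axis=0)
```

For image 1 the speaker prefers "hat", the caption that distinguishes it from image 0 even though both captions are literally true.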
3 code implementations • NeurIPS 2018 • Mike Wu, Noah Goodman
Multiple modalities often co-occur when describing natural phenomena.
no code implementations • 17 Aug 2016 • Long Ouyang, Michael Henry Tessler, Daniel Ly, Noah Goodman
Probabilistic programming languages (PPLs) offer a clean separation between declaring problems and solving them, which means that the scientist can automate experiment design simply by declaring her model and experiment spaces in the PPL, without having to worry about the details of calculating information gain.
no code implementations • NeurIPS 2013 • Nathaniel J. Smith, Noah Goodman, Michael Frank
Language users are remarkably good at making inferences about speakers' intentions in context, and children learning their native language also display substantial skill in acquiring the meanings of unknown words.
no code implementations • NeurIPS 2013 • Andreas Stuhlmüller, Jacob Taylor, Noah Goodman
We describe a class of algorithms for amortized inference in Bayesian networks.
no code implementations • NeurIPS 2012 • Falk Lieder, Tom Griffiths, Noah Goodman
Therefore, minds and machines have to approximate Bayesian inference.
no code implementations • 13 Jun 2012 • Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, Joshua B. Tenenbaum
We introduce Church, a universal language for describing stochastic generative processes.
no code implementations • NeurIPS 2011 • David Wingate, Noah Goodman, Andreas Stuhlmueller, Jeffrey M. Siskind
Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages.
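The idea can be sketched in ordinary Python: write the stochastic process as a function, then perform inference by running it repeatedly and conditioning on the output (rejection sampling). The coin model below is illustrative, not from the paper:

```python
import random

def model():
    """A stochastic generative process written as ordinary code:
    pick one of two coin biases at random, then flip the coin twice."""
    bias = random.choice([0.3, 0.7])
    flips = [random.random() < bias for _ in range(2)]
    return bias, flips

def prob_high_bias(observed_flips, n=20000, seed=0):
    """Rejection sampling: run the program many times, keep runs whose
    output matches the observation, and report P(bias = 0.7 | flips)."""
    random.seed(seed)
    accepted = [bias for bias, flips in (model() for _ in range(n))
                if flips == observed_flips]
    return sum(b == 0.7 for b in accepted) / len(accepted)
```

Two observed heads favor the 0.7-bias coin: with a uniform prior, the exact posterior is 0.49 / (0.49 + 0.09) ≈ 0.845, which the sampler approximates.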
no code implementations • NeurIPS 2009 • Tomer Ullman, Chris Baker, Owen Macindoe, Owain Evans, Noah Goodman, Joshua B. Tenenbaum
Everyday social interactions are heavily influenced by our snap judgments about others' goals.