Search Results for author: Noah Goodman

Found 58 papers, 35 papers with code

Bayesian Preference Elicitation with Language Models

no code implementations 8 Mar 2024 Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li

We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions.

Experimental Design
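
A minimal sketch of the BOED step that drives this kind of elicitation: score each candidate question by its expected information gain about a latent preference variable, then ask the best one. All names, priors, and likelihoods below are illustrative, not the paper's implementation.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(prior, likelihoods):
    """prior: p(theta), shape (T,); likelihoods: p(answer | theta), shape (A, T)."""
    eig = 0.0
    for lik in likelihoods:                 # loop over possible answers
        p_answer = np.sum(lik * prior)      # marginal p(a)
        if p_answer == 0:
            continue
        posterior = lik * prior / p_answer  # Bayes' rule
        eig += p_answer * (entropy(prior) - entropy(posterior))
    return eig

prior = np.array([0.5, 0.3, 0.2])           # belief over 3 preference types
questions = {                                # rows: p(yes | theta), p(no | theta)
    "q1": np.array([[0.9, 0.2, 0.1], [0.1, 0.8, 0.9]]),
    "q2": np.array([[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]),  # uninformative
}
best = max(questions, key=lambda q: expected_information_gain(prior, questions[q]))
print(best)  # q1 -- the informative question wins
```

In the OPEN setup, an LM would then phrase the chosen abstract query as a natural language question.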

Backtracing: Retrieving the Cause of the Query

1 code implementation 6 Mar 2024 Rose E. Wang, Pawan Wirawarn, Omar Khattab, Noah Goodman, Dorottya Demszky

While information retrieval (IR) systems may provide answers for such user queries, they do not directly assist content creators -- such as lecturers who want to improve their content -- in identifying the segments that _caused_ a user to ask those questions.

Information Retrieval Language Modelling +2
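
As a toy illustration of the backtracing task (not the paper's method), one can rank lecture segments against a student query and return the best match as the likely cause; a real system would use a learned retriever rather than the lexical overlap used here.

```python
from collections import Counter

def overlap(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((ca & cb).values())   # shared-token count

def backtrace(query, segments):
    return max(range(len(segments)), key=lambda i: overlap(query, segments[i]))

segments = ["intro to entropy",
            "the cross-entropy loss uses a log",
            "course logistics"]
print(segments[backtrace("why does the loss use a log?", segments)])
```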

Eliciting Human Preferences with Language Models

1 code implementation 17 Oct 2023 Belinda Z. Li, Alex Tamkin, Noah Goodman, Jacob Andreas

Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.

SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts

1 code implementation 15 Jun 2023 Rose E. Wang, Pawan Wirawarn, Noah Goodman, Dorottya Demszky

To overcome this challenge, we propose a set of best practices for using large language models (LLMs) to cheaply classify the comments at scale.

Math

Generating Language Corrections for Teaching Physical Control Tasks

1 code implementation 12 Jun 2023 Megha Srivastava, Noah Goodman, Dorsa Sadigh

AI assistance continues to help advance applications in education, from language learning to intelligent tutoring systems, yet current methods for providing students feedback are still quite limited.

Learning to Compress Prompts with Gist Tokens

1 code implementation NeurIPS 2023 Jesse Mu, Xiang Lisa Li, Noah Goodman

Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient.
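
A sketch of the attention-masking idea as I read it (illustrative, not the authors' exact code): gist tokens may attend to the prompt, but the task input may attend only to the gist tokens, so the compressed prompt activations can be cached and the raw prompt discarded.

```python
import numpy as np

def gist_mask(n_prompt, n_gist, n_input):
    n = n_prompt + n_gist + n_input
    mask = np.tril(np.ones((n, n), dtype=bool))   # standard causal mask
    # block the task input from attending to the raw prompt tokens
    mask[n_prompt + n_gist:, :n_prompt] = False
    return mask

print(gist_mask(n_prompt=4, n_gist=2, n_input=3).astype(int))
```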

Multispectral Contrastive Learning with Viewmaker Networks

1 code implementation 11 Feb 2023 Jasmine Bayrooti, Noah Goodman, Alex Tamkin

Contrastive learning methods have been applied to a range of domains and modalities by training models to identify similar "views" of data points.

Contrastive Learning Self-Supervised Learning

Task Ambiguity in Humans and Language Models

no code implementations 20 Dec 2022 Alex Tamkin, Kunal Handa, Avash Shrestha, Noah Goodman

We investigate how both humans and models behave in the face of such task ambiguity by proposing AmbiBench, a new benchmark of six ambiguously specified classification tasks.

Assistive Teaching of Motor Control Tasks to Humans

1 code implementation 25 Nov 2022 Megha Srivastava, Erdem Biyik, Suvir Mirchandani, Noah Goodman, Dorsa Sadigh

In this paper, we focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft.

Reinforcement Learning (RL)

Foundation Posteriors for Approximate Probabilistic Inference

no code implementations 19 May 2022 Mike Wu, Noah Goodman

Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables.

Language Modelling Masked Language Modeling +1
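
The inference target here is the standard posterior; for a program with latents z and observations x:

```latex
p(z \mid x) \;=\; \frac{p(x \mid z)\,p(z)}{\int p(x \mid z')\,p(z')\,dz'}
```

The title suggests amortizing this computation with a reusable "foundation posterior" rather than rerunning inference from scratch for each query.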

Active Learning Helps Pretrained Models Learn the Intended Task

1 code implementation 18 Apr 2022 Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman

Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data.

Active Learning

Language modeling via stochastic processes

1 code implementation ICLR 2022 Rose E. Wang, Esin Durmus, Noah Goodman, Tatsunori Hashimoto

Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks.

Contrastive Learning Language Modelling +3
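
For reference, this paper's model (Time Control) learns latents that follow Brownian bridge dynamics between a start point z_0 and an end point z_T; the bridge's marginal at time t is

```latex
z_t \sim \mathcal{N}\!\left(\left(1 - \tfrac{t}{T}\right) z_0 \;+\; \tfrac{t}{T}\, z_T,\;\; \tfrac{t\,(T - t)}{T}\right)
```

so intermediate sentences are pulled toward a smooth interpolation between the document's start and goal.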

Tradeoffs Between Contrastive and Supervised Learning: An Empirical Study

no code implementations 10 Dec 2021 Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin

Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets.

Contrastive Learning Image Classification

DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning

1 code implementation 23 Nov 2021 Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman

Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant strides in fields like natural language processing, computer vision, and speech processing.

Self-Supervised Learning

Temperature as Uncertainty in Contrastive Learning

1 code implementation 8 Oct 2021 Oliver Zhang, Mike Wu, Jasmine Bayrooti, Noah Goodman

In this paper, we propose a simple way to generate uncertainty scores for many contrastive methods by re-purposing temperature, a mysterious hyperparameter used for scaling.

Contrastive Learning Out-of-Distribution Detection
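
A sketch of the idea under my reading (illustrative shapes and names, not the released code): let the encoder predict a per-input temperature, plug it into the InfoNCE logits, and read the learned temperature back out as an uncertainty score.

```python
import torch
import torch.nn.functional as F

def info_nce_with_temperature(z1, z2, log_tau):
    """z1, z2: (N, D) embeddings of two views; log_tau: (N,) per-example."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    tau = log_tau.exp().unsqueeze(1)        # (N, 1) per-example temperature
    logits = z1 @ z2.t() / tau              # each row scaled by its own tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

N, D = 8, 32
z1, z2 = torch.randn(N, D), torch.randn(N, D)
log_tau = torch.zeros(N, requires_grad=True)  # would come from a small head
loss = info_nce_with_temperature(z1, z2, log_tau)
loss.backward()                               # tau trains jointly with the encoder
print(float(loss), log_tau.grad.shape)
```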

Pretrained models are active learners

no code implementations 29 Sep 2021 Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, Noah Goodman

An important barrier to the safe deployment of machine learning systems is the risk of _task ambiguity_, where multiple behaviors are consistent with the provided examples.

Active Learning

Modeling Item Response Theory with Stochastic Variational Inference

no code implementations 26 Aug 2021 Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman

Item Response Theory (IRT) is a ubiquitous model for understanding human behaviors and attitudes based on their responses to questions.

Bayesian Inference Variational Inference
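
For context, the two-parameter logistic (2PL) IRT model that such methods fit: person i with ability theta_i answers item j, which has discrimination a_j and difficulty b_j,

```latex
p(y_{ij} = 1 \mid \theta_i, a_j, b_j) \;=\; \sigma\big(a_j(\theta_i - b_j)\big),
\qquad \sigma(x) = \frac{1}{1 + e^{-x}}
```

Stochastic variational inference then approximates the posterior over abilities and item parameters from observed response matrices.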

On the Opportunities and Risks of Foundation Models

2 code implementations 16 Aug 2021 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback

1 code implementation 23 Jul 2021 Mike Wu, Noah Goodman, Chris Piech, Chelsea Finn

High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale.

Few-Shot Learning
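
The name points to prototypical networks; a generic few-shot classification step of that kind (not the paper's full architecture, which encodes student work with a transformer) looks like this:

```python
import torch

def prototypes(support, labels, n_classes):
    """support: (N, D) embeddings; labels: (N,) ints in [0, n_classes)."""
    return torch.stack([support[labels == c].mean(0) for c in range(n_classes)])

def classify(query, protos):
    return torch.cdist(query, protos).argmin(dim=1)   # nearest prototype

support = torch.randn(12, 16)
labels = torch.arange(12) % 3          # 4 support examples per feedback class
query = torch.randn(5, 16)
print(classify(query, prototypes(support, labels, 3)))
```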

Contrastive Reinforcement Learning of Symbolic Reasoning Domains

1 code implementation NeurIPS 2021 Gabriel Poesia, WenXin Dong, Noah Goodman

Our results suggest new directions for reinforcement learning in symbolic domains, as well as applications to mathematics education.

Reinforcement Learning (RL)

Question Generation for Adaptive Education

1 code implementation ACL 2021 Megha Srivastava, Noah Goodman

Intelligent and adaptive online education systems aim to make high-quality education available for a diverse range of students.

Knowledge Tracing Question Generation +2

Emergent Communication of Generalizations

1 code implementation NeurIPS 2021 Jesse Mu, Noah Goodman

To build agents that can collaborate effectively with others, recent research has trained artificial agents to communicate with each other in Lewis-style referential games.
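
For readers unfamiliar with the setup, a Lewis-style referential game is just this loop (toy lookup-table agents here; the paper trains neural speakers and listeners):

```python
import random

objects = ["red circle", "blue circle", "red square"]
speaker = {"red circle": "A", "blue circle": "B", "red square": "C"}  # toy policy
listener = {msg: obj for obj, msg in speaker.items()}

def play_round():
    target = random.choice(objects)
    message = speaker[target]     # speaker sees the target, emits a message
    guess = listener[message]     # listener picks an object from the message
    return guess == target        # both agents are rewarded on success

print(sum(play_round() for _ in range(100)), "/ 100 rounds solved")
```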

Improving Compositionality of Neural Networks by Decoding Representations to Inputs

no code implementations NeurIPS 2021 Mike Wu, Noah Goodman, Stefano Ermon

In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together.

Fairness Out-of-Distribution Detection

Viewmaker Networks: Learning Views for Unsupervised Representation Learning

1 code implementation ICLR 2021 Alex Tamkin, Mike Wu, Noah Goodman

However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities.

Contrastive Learning Representation Learning

Conditional Negative Sampling for Contrastive Learning of Visual Representations

1 code implementation ICLR 2021 Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman

To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.

Contrastive Learning Instance Segmentation +4
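
A sketch of ring sampling as the abstract describes it: keep only negatives whose similarity to the anchor falls inside a band, so they are neither trivially easy nor so close that they are likely false negatives. The percentile bounds below are made-up illustrative values.

```python
import numpy as np

def ring_negatives(anchor, candidates, lo=70, hi=95):
    sims = candidates @ anchor                   # cosine sims (unit vectors)
    lo_t, hi_t = np.percentile(sims, [lo, hi])
    return np.where((sims >= lo_t) & (sims <= hi_t))[0]

rng = np.random.default_rng(0)
anchor = rng.normal(size=32)
anchor /= np.linalg.norm(anchor)
cands = rng.normal(size=(256, 32))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)
print(ring_negatives(anchor, cands)[:10])        # indices of in-ring negatives
```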

Variational Item Response Theory: Fast, Accurate, and Expressive

1 code implementation 1 Feb 2020 Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman

Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology.

Bayesian Inference

Multimodal Generative Models for Compositional Representation Learning

no code implementations 11 Dec 2019 Mike Wu, Noah Goodman

As part of our derivation we find that many previous multimodal variational autoencoders used objectives that do not correctly bound the joint marginal likelihood across modalities.

Representation Learning

Shaping Visual Representations with Language for Few-shot Classification

2 code implementations ACL 2020 Jesse Mu, Percy Liang, Noah Goodman

By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models.

Classification General Classification +2

Learning from Omission

no code implementations ACL 2019 Bill McDowell, Noah Goodman

Pragmatic reasoning allows humans to go beyond the literal meaning when interpreting language in context.

Generative Grading: Near Human-level Accuracy for Automated Feedback on Richly Structured Problems

1 code implementation 23 May 2019 Ali Malik, Mike Wu, Vrinda Vasavada, Jinpeng Song, Madison Coots, John Mitchell, Noah Goodman, Chris Piech

In this paper, we present generative grading: a novel computational approach for providing feedback at scale that is capable of accurately grading student work and providing nuanced, interpretable feedback.

Pragmatic inference and visual abstraction enable contextual flexibility during visual communication

1 code implementation 11 Mar 2019 Judith Fan, Robert Hawkins, Mike Wu, Noah Goodman

On each trial, both participants were shown the same four objects, but in different locations.

Lost in Machine Translation: A Method to Reduce Meaning Loss

1 code implementation NAACL 2019 Reuben Cohn-Gordon, Noah Goodman

A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language.

Machine Translation Sentence +1

Tensor Variable Elimination for Plated Factor Graphs

no code implementations 8 Feb 2019 Fritz Obermeyer, Eli Bingham, Martin Jankowiak, Justin Chiu, Neeraj Pradhan, Alexander Rush, Noah Goodman

To exploit efficient tensor algebra in graphs with plates of variables, we generalize undirected factor graphs to plated factor graphs and variable elimination to a tensor variable elimination algorithm that operates directly on plated factor graphs.

Music Modeling Probabilistic Programming +1
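
The tensor view in miniature: eliminating a variable from a factor graph is a tensor contraction, and a plate just adds a batch dimension to that contraction. A toy chain A - B - C, marginalizing out B:

```python
import numpy as np

fAB = np.random.rand(3, 4)   # factor phi(A, B)
fBC = np.random.rand(4, 2)   # factor phi(B, C)

# Eliminate B: sum_b phi(a, b) * phi(b, c) is a single einsum contraction
fAC = np.einsum("ab,bc->ac", fAB, fBC)

# A plate of size 5 over replicated B-factors batches the same contraction
fPAB = np.random.rand(5, 3, 4)
fPBC = np.random.rand(5, 4, 2)
fPAC = np.einsum("pab,pbc->pac", fPAB, fPBC)
print(fAC.shape, fPAC.shape)  # (3, 2) (5, 3, 2)
```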

Meta-Amortized Variational Inference and Learning

1 code implementation 5 Feb 2019 Mike Wu, Kristy Choi, Noah Goodman, Stefano Ermon

Despite recent successes in probabilistic modeling and its applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution may be closely related to the ones seen during training.

Clustering Density Estimation +2

Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference

1 code implementation 5 Sep 2018 Mike Wu, Milan Mosse, Noah Goodman, Chris Piech

Rubric sampling requires minimal teacher effort, can associate feedback with specific parts of a student's solution and can articulate a student's misconceptions in the language of the instructor.

Misconceptions Zero-Shot Learning

Pragmatically Informative Image Captioning with Character-Level Inference

no code implementations NAACL 2018 Reuben Cohn-Gordon, Noah Goodman, Christopher Potts

We combine a neural image captioner with a Rational Speech Acts (RSA) model to make a system that is pragmatically informative: its objective is to produce captions that are not merely true but also distinguish their inputs from similar images.

Image Captioning Rolling Shutter Correction
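
A minimal RSA step of the kind this system builds on, with toy numbers in place of captioner scores: a literal speaker S0 gives caption probabilities per image, a listener L1 renormalizes over images, and the pragmatic speaker S1 renormalizes L1 over captions.

```python
import numpy as np

S0 = np.array([[0.6, 0.4],    # P(caption | image): rows = 2 images
               [0.5, 0.5]])   #                      cols = 2 captions

L1 = S0 / S0.sum(axis=0, keepdims=True)      # listener: which image was meant?
alpha = 3.0                                   # rationality parameter
S1 = L1**alpha / (L1**alpha).sum(axis=1, keepdims=True)

print(S1.round(3))   # S1 favors captions that pick out the intended image
```

The paper applies this recursion at the character level during decoding; the toy matrices above only show the normalization pattern.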

Practical optimal experiment design with probabilistic programs

no code implementations 17 Aug 2016 Long Ouyang, Michael Henry Tessler, Daniel Ly, Noah Goodman

PPLs offer a clean separation between declaring problems and solving them, which means that the scientist can automate experiment design by simply declaring her model and experiment spaces in the PPL without having to worry about the details of calculating information gain.

Probabilistic Programming
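
The quantity being automated is the expected information gain of a design d about parameters theta:

```latex
\mathrm{EIG}(d) \;=\; \mathbb{E}_{y \sim p(y \mid d)}\big[\, H[p(\theta)] \;-\; H[p(\theta \mid y, d)] \,\big]
```

Declaring the model and experiment space in a PPL lets the system estimate this expectation by inference rather than by hand-derived formulas.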

Learning and using language via recursive pragmatic reasoning about other agents

no code implementations NeurIPS 2013 Nathaniel J. Smith, Noah Goodman, Michael Frank

Language users are remarkably good at making inferences about speakers' intentions in context, and children learning their native language also display substantial skill in acquiring the meanings of unknown words.

Learning Stochastic Inverses

no code implementations NeurIPS 2013 Andreas Stuhlmüller, Jacob Taylor, Noah Goodman

We describe a class of algorithms for amortized inference in Bayesian networks.
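
The core trick, in a deliberately tiny form (an illustration, not the paper's algorithm): sample (z, x) pairs from the forward model, then regress the latent on the observation to learn an approximate inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=5000)                   # latent:      z ~ N(0, 1)
x = z + rng.normal(scale=0.5, size=5000)    # observation: x ~ N(z, 0.5^2)

# Least-squares fit of E[z | x]; analytically the slope is 1/(1+0.25) = 0.8
w = (x @ z) / (x @ x)
print(round(w, 2))   # ~0.8: the learned stochastic inverse
```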

Church: a language for generative models

no code implementations 13 Jun 2012 Noah Goodman, Vikash Mansinghka, Daniel M. Roy, Keith Bonawitz, Joshua B. Tenenbaum

We introduce Church, a universal language for describing stochastic generative processes.

Clustering

Nonstandard Interpretations of Probabilistic Programs for Efficient Inference

no code implementations NeurIPS 2011 David Wingate, Noah Goodman, Andreas Stuhlmüller, Jeffrey M. Siskind

Probabilistic programming languages allow modelers to specify a stochastic process using syntax that resembles modern programming languages.

Probabilistic Programming
