no code implementations • 19 May 2022 • Mike Wu, Noah Goodman
Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables.
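The posterior-inference task can be made concrete with a minimal sketch: estimate a latent variable from an observation via self-normalized importance sampling. The toy model (standard-normal prior, unit-variance Gaussian likelihood) and the function name are illustrative assumptions, not the paper's method.

```python
import math
import random

random.seed(0)

# Toy probabilistic program: latent z ~ Normal(0, 1); observed x ~ Normal(z, 1).
def log_normal(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def posterior_mean(x_obs, num_samples=100000):
    """Estimate E[z | x_obs] by self-normalized importance sampling,
    using the prior as the proposal distribution."""
    total_w, total_wz = 0.0, 0.0
    for _ in range(num_samples):
        z = random.gauss(0.0, 1.0)                # sample from the prior
        w = math.exp(log_normal(x_obs, z, 1.0))   # likelihood weight
        total_w += w
        total_wz += w * z
    return total_wz / total_w

# Analytically the posterior is Normal(x/2, 1/2), so E[z | x=2] = 1.
est = posterior_mean(2.0)
```

The estimate should land close to the analytic posterior mean of 1, which is how such a sketch can be sanity-checked.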
no code implementations • 26 Apr 2022 • Rose E. Wang, Mike Wu, Noah Goodman
The teacher must interact with and diagnose the student before teaching.
1 code implementation • 18 Jan 2022 • Mike Wu, Will McTighe, Kaili Wang, Istvan A. Seres, Nick Bax, Manuel Puebla, Mariano Mendez, Federico Carrone, Tomás De Mattey, Herman O. Demaestri, Mariano Nicolini, Pedro Fontana
Mixers, such as Tornado Cash, were developed to preserve privacy through "mixing" transactions with those of others in an anonymity pool, making it harder to link deposits and withdrawals from the pool.
no code implementations • 10 Dec 2021 • Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin
Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets.
1 code implementation • 8 Oct 2021 • Oliver Zhang, Mike Wu, Jasmine Bayrooti, Noah Goodman
In this paper, we propose a simple way to generate uncertainty scores for many contrastive methods by re-purposing temperature, a mysterious hyperparameter used for scaling.
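The role of temperature can be illustrated with a small sketch: a temperature-scaled softmax over similarity scores, where lower temperature yields a more peaked (lower-entropy, more confident) distribution. Function names are assumptions for illustration; the paper's scoring scheme may differ.

```python
import math

def softmax_with_temperature(sims, tau):
    """Temperature-scaled softmax over similarity scores. Small tau makes
    the distribution peaked; large tau flattens it toward uniform."""
    exps = [math.exp(s / tau) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy, a natural uncertainty score for the distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

sims = [0.9, 0.2, 0.1]
sharp = entropy(softmax_with_temperature(sims, 0.05))  # low temperature
flat = entropy(softmax_with_temperature(sims, 5.0))    # high temperature
```

Lower temperature concentrates mass on the top similarity, so `sharp` comes out well below `flat`.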
no code implementations • 26 Aug 2021 • Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman
Item Response Theory (IRT) is a ubiquitous model for understanding human behaviors and attitudes based on their responses to questions.
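IRT's simplest variant, the one-parameter (Rasch) model, scores the probability of a correct response as a logistic function of ability minus item difficulty. A minimal sketch of that building block (the paper's treatment is far more involved):

```python
import math

def rasch_prob(ability, difficulty):
    """1PL (Rasch) IRT: P(correct) = sigmoid(ability - difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student whose ability exactly matches the item's difficulty answers
# correctly with probability 0.5; easier items push that probability up.
p_matched = rasch_prob(0.0, 0.0)
p_easy = rasch_prob(0.0, -2.0)
```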
1 code implementation • 23 Jul 2021 • Mike Wu, Noah Goodman, Chris Piech, Chelsea Finn
High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale.
no code implementations • NeurIPS 2021 • Mike Wu, Noah Goodman, Stefano Ermon
In traditional software programs, it is easy to trace program logic from variables back to input, apply assertion statements to block erroneous behavior, and compose programs together.
2 code implementations • 26 Oct 2020 • Mike Wu, Jonathan Nafziger, Anthony Scodary, Andrew Maas
We introduce HarperValleyBank, a free, public domain spoken dialog corpus.
1 code implementation • ICLR 2021 • Alex Tamkin, Mike Wu, Noah Goodman
However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities.
1 code implementation • ICLR 2021 • Mike Wu, Milan Mosse, Chengxu Zhuang, Daniel Yamins, Noah Goodman
To do this, we introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
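The "ring" idea can be sketched as selecting negatives whose similarity to the anchor falls inside an annulus: hard enough to be informative, but not so close that they are likely false negatives. This hard-filtering version is an illustrative simplification; the paper's estimators sample conditionally rather than filter.

```python
def ring_negatives(sims, low, high):
    """Return indices of candidates whose similarity to the anchor lies
    in the ring [low, high). Candidates above `high` risk being false
    negatives; candidates below `low` are too easy to be informative."""
    return [i for i, s in enumerate(sims) if low <= s < high]

# Cosine similarities of six candidates to the anchor.
sims = [0.95, 0.7, 0.55, 0.4, 0.1, -0.2]
chosen = ring_negatives(sims, 0.3, 0.8)  # keeps the moderately-similar ones
```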
no code implementations • 5 Oct 2020 • Mike Wu, Noah Goodman
Contrastive approaches to representation learning have recently shown great promise.
no code implementations • 27 May 2020 • Mike Wu, Chengxu Zhuang, Milan Mosse, Daniel Yamins, Noah Goodman
Reformulating previous learning objectives in terms of mutual information also simplifies and stabilizes them.
1 code implementation • 1 Feb 2020 • Mike Wu, Richard L. Davis, Benjamin W. Domingue, Chris Piech, Noah Goodman
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology.
no code implementations • 11 Dec 2019 • Mike Wu, Noah Goodman
As part of our derivation we find that many previous multimodal variational autoencoders used objectives that do not correctly bound the joint marginal likelihood across modalities.
no code implementations • 19 Aug 2019 • Zhiyuan He, Danchen Lin, Thomas Lau, Mike Wu
In this survey, we discuss several different types of gradient boosting algorithms and illustrate their mathematical frameworks in detail: (1) the introduction of gradient boosting, (2) objective function optimization, (3) loss function estimation, and (4) model construction.
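For squared-error loss, the gradient-boosting loop reduces to repeatedly fitting a weak learner to the current residuals (the negative gradient) and adding it with a shrinkage factor. A minimal sketch on 1-D data with depth-1 regression stumps; all names are illustrative:

```python
def fit_stump(xs, residuals):
    """Fit a depth-1 regression stump (best threshold split) to the
    residuals under squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=10, lr=0.5):
    """Each round fits a stump to the residuals and adds it with
    learning rate lr (shrinkage)."""
    pred = [0.0] * len(xs)
    ensemble = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        ensemble.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in ensemble)

# Fit a step function: the ensemble converges to it geometrically.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
model = gradient_boost(xs, ys)
```

After ten rounds with shrinkage 0.5, the residual shrinks by a factor of 2 per round, so predictions sit within about 0.001 of the targets.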
no code implementations • 14 Aug 2019 • Mike Wu, Sonali Parbhoo, Michael C. Hughes, Volker Roth, Finale Doshi-Velez
Moreover, for situations in which a single, global tree is a poor estimator, we introduce a regional tree regularizer that encourages the deep model to resemble a compact, axis-aligned decision tree in predefined, human-interpretable contexts.
no code implementations • 13 Aug 2019 • Mike Wu, Sonali Parbhoo, Michael Hughes, Ryan Kindle, Leo Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
The lack of interpretability remains a barrier to the adoption of deep neural networks.
no code implementations • 23 May 2019 • Ali Malik, Mike Wu, Vrinda Vasavada, Jinpeng Song, Madison Coots, John Mitchell, Noah Goodman, Chris Piech
In this paper, we present generative grading: a novel computational approach for providing feedback at scale that is capable of accurately grading student work and providing nuanced, interpretable feedback.
1 code implementation • 11 Mar 2019 • Judith Fan, Robert Hawkins, Mike Wu, Noah Goodman
On each trial, both participants were shown the same four objects, but in different locations.
1 code implementation • 5 Feb 2019 • Mike Wu, Kristy Choi, Noah Goodman, Stefano Ermon
Despite recent successes in probabilistic modeling and its applications, generative models trained using traditional inference techniques struggle to adapt to new distributions, even when the target distribution is closely related to those seen during training.
1 code implementation • 5 Oct 2018 • Mike Wu, Noah Goodman, Stefano Ermon
Stochastic optimization techniques are standard in variational inference algorithms.
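A standard example of such a technique is the reparameterization-trick estimator of the ELBO gradient. The sketch below uses an assumed conjugate setup (Gaussian prior, likelihood, and variational posterior with unit variances) so the per-sample gradient has a closed form; it is not the paper's specific estimator.

```python
import random

random.seed(0)

def elbo_grad_mu(mu, x_obs, num_samples=50000):
    """Reparameterization-trick Monte Carlo gradient of the ELBO w.r.t.
    the variational mean mu, for q(z) = N(mu, 1), prior p(z) = N(0, 1),
    and likelihood p(x|z) = N(x; z, 1). Writing z = mu + eps, the
    per-sample gradient is d/dz [log p(x|z) + log p(z)] = (x - z) - z."""
    total = 0.0
    for _ in range(num_samples):
        eps = random.gauss(0.0, 1.0)  # noise independent of mu
        z = mu + eps                  # reparameterized sample
        total += (x_obs - z) - z
    return total / num_samples

# The optimal variational mean for x=2 is the posterior mean 1: the
# gradient should vanish there and point upward from below.
g_at_opt = elbo_grad_mu(1.0, 2.0)
g_below = elbo_grad_mu(0.0, 2.0)
```

In expectation the gradient is 2 - 2*mu, so stochastic gradient ascent on mu converges to the true posterior mean.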
1 code implementation • 5 Sep 2018 • Mike Wu, Milan Mosse, Noah Goodman, Chris Piech
Rubric sampling requires minimal teacher effort, can associate feedback with specific parts of a student's solution, and can articulate a student's misconceptions in the language of the instructor.
3 code implementations • NeurIPS 2018 • Mike Wu, Noah Goodman
Multiple modalities often co-occur when describing natural phenomena.
2 code implementations • 16 Nov 2017 • Mike Wu, Michael C. Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez
The lack of interpretability remains a key barrier to the adoption of deep models in many applications.
no code implementations • 14 Jun 2016 • Mike Wu, Yura Perov, Frank Wood, Hongseok Yang
We demonstrate this by developing a native Excel implementation of both a particle Markov Chain Monte Carlo variant and black-box variational inference for spreadsheet probabilistic programming.
no code implementations • 24 Mar 2016 • Stephen Yu, Mike Wu
The proposed method uses live image footage and, based on calculations of pixel motion, decides whether an object is in the blind spot.
no code implementations • 8 Mar 2015 • Mike Wu
Given financial data from popular sites like Yahoo and the London Exchange, this paper attempts to model and predict stocks that can be considered "good investments".