Search Results for author: Jeremy Nixon

Found 9 papers, 4 papers with code

What are you optimizing for? Aligning Recommender Systems with Human Values

no code implementations • 22 Jul 2021 • Jonathan Stray, Ivan Vendrov, Jeremy Nixon, Steven Adler, Dylan Hadfield-Menell

We describe cases where real recommender systems were modified in the service of various human values such as diversity, fairness, well-being, time well spent, and factual accuracy.

Tasks: Diversity, Fairness (+1 more)

Why Are Bootstrapped Deep Ensembles Not Better?

no code implementations • NeurIPS Workshop ICBINB 2020 • Jeremy Nixon, Balaji Lakshminarayanan, Dustin Tran

Ensemble methods have consistently reached the state of the art across predictive, uncertainty, and out-of-distribution robustness benchmarks.
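The bootstrap step the paper questions can be sketched in a few lines: each "bagged" ensemble member trains on a resample of the data drawn with replacement, so any one member sees only about 63% of the unique training points. Sizes and names here are illustrative, not from the paper.

```python
import numpy as np

# Bootstrap resampling behind "bagged" deep ensembles: each member draws
# n points with replacement, so it sees ~63.2% of the original data.
rng = np.random.default_rng(0)
n, members = 10_000, 5
for m in range(members):
    idx = rng.integers(0, n, size=n)          # bootstrap resample of indices
    unique_frac = len(np.unique(idx)) / n     # fraction of data this member sees
    print(f"member {m}: {unique_frac:.3f} of the training set")
```

The ~63.2% figure is the classic 1 - 1/e limit for sampling with replacement; the reduced effective dataset per member is one candidate explanation for why bootstrapping does not help deep ensembles.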

Resolving Spurious Correlations in Causal Models of Environments via Interventions

no code implementations • 12 Feb 2020 • Sergei Volodin, Nevan Wichers, Jeremy Nixon

We consider the problem of inferring a causal model of a reinforcement learning environment and we propose a method to deal with spurious correlations.
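The core intuition can be illustrated with a toy example (variable names and noise scales are invented for this sketch, not taken from the paper): a hidden confounder makes two variables correlate observationally, and an intervention that sets one of them independently of the confounder removes the spurious correlation.

```python
import numpy as np

# Toy confounding setup: Z drives both X and Y, so X and Y correlate
# observationally even though X does not cause Y. Intervening on X
# (setting it independently of Z) makes the correlation vanish.
rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                  # hidden confounder
x_obs = z + 0.1 * rng.normal(size=n)    # X observationally tracks Z
y = z + 0.1 * rng.normal(size=n)        # Y is also driven by Z

x_do = rng.normal(size=n)               # intervention: do(X = fresh noise)
print(round(np.corrcoef(x_obs, y)[0, 1], 2))  # high: spurious correlation
print(round(np.corrcoef(x_do, y)[0, 1], 2))   # near zero under do(X)
```

An agent that can only observe would wrongly infer an X→Y edge; the interventional data rules it out, which is the kind of discrepancy the proposed method exploits.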

Task: Decision Making

Semi-Supervised Class Discovery

no code implementations • 10 Feb 2020 • Jeremy Nixon, Jeremiah Liu, David Berthelot

One promising approach to dealing with datapoints that are outside of the initial training distribution (OOD) is to create new classes that capture similarities in the datapoints previously rejected as uncategorizable.
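A minimal sketch of that idea, assuming a simple k-means clustering over the rejected points (the paper's actual pipeline differs, so treat this as an illustration only): cluster the OOD datapoints and let each cluster become a candidate new class.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means in NumPy: assign points to nearest center, recompute."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs of rejected (OOD) points -> two discovered "classes".
rng = np.random.default_rng(1)
rejected = np.concatenate([rng.normal(0, 0.3, (100, 2)),
                           rng.normal(5, 0.3, (100, 2))])
labels = kmeans(rejected, k=2)
print(len(set(labels.tolist())))  # 2 discovered classes
```

In the semi-supervised setting, the discovered clusters would then be folded back into the label set and refined with the labeled data.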

Analyzing the Role of Model Uncertainty for Electronic Health Records

1 code implementation • 10 Jun 2019 • Michael W. Dusenberry, Dustin Tran, Edward Choi, Jonas Kemp, Jeremy Nixon, Ghassen Jerfel, Katherine Heller, Andrew M. Dai

We further show that RNNs with only Bayesian embeddings can be a more efficient way to capture model uncertainty compared to ensembles, and we analyze how model uncertainty is impacted across individual input features and patient subgroups.

Learned optimizers that outperform on wall-clock and validation loss

no code implementations • ICLR 2019 • Luke Metz, Niru Maheswaranathan, Jeremy Nixon, Daniel Freeman, Jascha Sohl-Dickstein

We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement in validation loss.

Measuring Calibration in Deep Learning

3 code implementations • 2 Apr 2019 • Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, Dustin Tran

In this paper, we perform a comprehensive empirical study of choices in calibration measures including measuring all probabilities rather than just the maximum prediction, thresholding probability values, class conditionality, number of bins, bins that are adaptive to the datapoint density, and the norm used to compare accuracies to confidences.
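One of those choices, bins that adapt to the datapoint density by holding roughly equal numbers of points, can be sketched as follows. This is a toy illustration of equal-mass binning for top-label calibration error, not the paper's released code.

```python
import numpy as np

def adaptive_ece(confidences, correct, n_bins=10):
    """Calibration error with adaptive (equal-mass) bins.

    confidences: top-label predicted probabilities, shape (N,)
    correct:     1.0 if the top prediction was right, else 0.0, shape (N,)
    """
    order = np.argsort(confidences)
    conf, corr = confidences[order], correct[order]
    # Split the sorted points into bins holding roughly equal numbers of
    # points, so sparse high-confidence regions don't leave bins empty.
    bins = np.array_split(np.arange(len(conf)), n_bins)
    ece = 0.0
    for idx in bins:
        if len(idx) == 0:
            continue
        gap = abs(conf[idx].mean() - corr[idx].mean())  # |confidence - accuracy|
        ece += (len(idx) / len(conf)) * gap             # weight by bin mass
    return ece

# Perfectly calibrated toy data: accuracy matches confidence, so error is small.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)
corr = (rng.uniform(size=10_000) < conf).astype(float)
print(round(adaptive_ece(conf, corr), 3))
```

Fixed-width binning would instead leave many empty bins near low confidences and crowd most points into one or two high-confidence bins, which is one of the failure modes the paper measures.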

Understanding and correcting pathologies in the training of learned optimizers

1 code implementation • 24 Oct 2018 • Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein

Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks.
