Search Results for author: James McInerney

Found 12 papers, 4 papers with code

Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions

1 code implementation • 25 Jul 2020 • James McInerney, Brian Brost, Praveen Chandar, Rishabh Mehrotra, Ben Carterette

Users of music streaming, video streaming, news recommendation, and e-commerce services often engage with content in a sequential manner.

counterfactual • News Recommendation • +2

The Implicit Delta Method

1 code implementation • 11 Nov 2022 • Nathan Kallus, James McInerney

When the predictive model is simple and its evaluation differentiable, this task is solved by the delta method, where we propagate the asymptotically-normal uncertainty in the predictive model through the evaluation to compute standard errors and Wald confidence intervals.
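Concretely, the classical delta method computes the standard error of a differentiable evaluation g at the estimate theta_hat as sqrt(grad(g)' Sigma grad(g)). A minimal sketch in Python, assuming a generic estimator, covariance, and evaluation (all names here are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

def delta_method_ci(theta_hat, Sigma, g, eps=1e-6, z=1.96):
    """Wald confidence interval for g(theta_hat) via the delta method.

    Propagates the asymptotic covariance Sigma of theta_hat through the
    differentiable evaluation g, using a central finite-difference
    gradient at theta_hat for simplicity.
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    grad = np.zeros_like(theta_hat)
    for i in range(theta_hat.size):
        d = np.zeros_like(theta_hat)
        d[i] = eps
        grad[i] = (g(theta_hat + d) - g(theta_hat - d)) / (2 * eps)
    se = np.sqrt(grad @ Sigma @ grad)  # standard error of g(theta_hat)
    est = g(theta_hat)
    return est, (est - z * se, est + z * se)

# Hypothetical example: evaluation g(theta) = theta_0 * theta_1.
est, ci = delta_method_ci(
    theta_hat=np.array([2.0, 3.0]),
    Sigma=np.array([[0.01, 0.0], [0.0, 0.04]]),
    g=lambda t: t[0] * t[1],
)
```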

Uncertainty Quantification

Variational Tempering

no code implementations • 7 Nov 2014 • Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David Blei

Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data.
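As a rough sketch of the idea in standard tempering notation (our notation, not necessarily the paper's): global tempering flattens the likelihood with a single temperature, while local tempering assigns each data point its own latent temperature t_i:

```latex
% Global tempering, one temperature T for the whole likelihood:
%   p_T(z \mid x) \propto p(z)\, p(x \mid z)^{1/T}
% Local variational tempering, a latent temperature t_i per data point:
p_t(z \mid x) \;\propto\; p(z) \prod_{i=1}^{n} p(x_i \mid z)^{1/t_i}
```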

Variational Inference

Dynamic Poisson Factorization

no code implementations • 15 Sep 2015 • Laurent Charlin, Rajesh Ranganath, James McInerney, David M. Blei

Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers).
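A toy illustration of the latent-factor idea in the Poisson factorization setting (this generates from the model rather than fitting it, and all dimensions and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 users, 50 items, 5 latent factors.
n_users, n_items, k = 100, 50, 5

# Nonnegative latent factors: user preferences and item attributes.
user_factors = rng.gamma(shape=0.3, scale=1.0, size=(n_users, k))
item_factors = rng.gamma(shape=0.3, scale=1.0, size=(n_items, k))

# Poisson factorization models each interaction count as
#   y[u, i] ~ Poisson(user_factors[u] . item_factors[i])
rates = user_factors @ item_factors.T
counts = rng.poisson(rates)

# Recommend for user 0 the items with the highest predicted rate.
top_items = np.argsort(rates[0])[::-1][:10]
```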

Recommendation Systems • Variational Inference

Learning Periodic Human Behaviour Models from Sparse Data for Crowdsourcing Aid Delivery in Developing Countries

no code implementations • 26 Sep 2013 • James McInerney, Alex Rogers, Nicholas R. Jennings

In many developing countries, half the population lives in rural locations, where access to essentials such as school materials, mosquito nets, and medical supplies is restricted.

An Empirical Bayes Approach to Optimizing Machine Learning Algorithms

no code implementations • NeurIPS 2017 • James McInerney

EB-Hyp suggests a simpler approach to evaluating and deploying machine learning algorithms that does not require a separate validation data set and hyperparameter selection procedure.
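One plausible reading of the empirical Bayes idea here (our notation, not the paper's): rather than selecting a single hyperparameter eta* on held-out data, average predictions over hyperparameters weighted by their posterior mass under the training data:

```latex
% Instead of choosing one hyperparameter \eta^* by validation,
% marginalize over hyperparameters \eta and parameters \theta,
% weighting by their (empirical Bayes) posterior under the data X:
p(x^* \mid X) \;\approx\; \int p(x^* \mid \theta)\,
  p(\theta \mid X, \eta)\, p(\eta \mid X)\, d\theta\, d\eta
```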

Bayesian Optimization • BIG-bench Machine Learning

Residual Overfit Method of Exploration

no code implementations • 6 Oct 2021 • James McInerney, Nathan Kallus

The approach, which we term the residual overfit method of exploration (ROME), drives exploration towards actions where the overfit model exhibits the most overfitting compared to the tuned model.
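One hedged way to read this as code (all names hypothetical, and the paper's exact acquisition rule may differ): score each action by the disagreement between a regularized "tuned" model and a deliberately overfit one, and explore where that gap is largest.

```python
import numpy as np

def rome_gap(context, actions, tuned_model, overfit_model):
    """Sketch of ROME-style exploration scoring (names hypothetical).

    The tuned model is regularized; the overfit model is trained to
    (near) zero training error. A large gap between the two at an
    action signals heavy overfitting there, which ROME treats as a
    proxy for epistemic uncertainty worth exploring.
    """
    tuned = np.array([tuned_model(context, a) for a in actions])
    overfit = np.array([overfit_model(context, a) for a in actions])
    return tuned, np.abs(overfit - tuned)

def choose_action(context, actions, tuned_model, overfit_model, bonus=1.0):
    tuned, gap = rome_gap(context, actions, tuned_model, overfit_model)
    # Optimistic rule: tuned estimate plus a bonus on the overfit gap.
    return actions[int(np.argmax(tuned + bonus * gap))]
```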

Uncertainty Quantification

Hessian-Free Laplace in Bayesian Deep Learning

no code implementations • 15 Mar 2024 • James McInerney, Nathan Kallus

The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate.
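For reference, the standard form of the approximation (the "Hessian-free" in the title presumably refers to avoiding explicit computation of the Hessian H below):

```latex
% Laplace approximation: a Gaussian centered at the MAP estimate
% \hat{\theta}, with covariance the inverse Hessian of the negative
% log posterior at \hat{\theta}:
p(\theta \mid \mathcal{D}) \;\approx\;
  \mathcal{N}\!\left(\theta;\; \hat{\theta},\; H^{-1}\right),
\qquad
H = -\nabla^2_{\theta} \log p(\theta \mid \mathcal{D})
  \big|_{\theta = \hat{\theta}}
```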
