Search Results for author: Brady Neal

Found 8 papers, 2 papers with code

Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation

1 code implementation • 3 Nov 2022 • Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, Vasilis Syrgkanis

We study the problem of model selection in causal inference, specifically for the case of conditional average treatment effect (CATE) estimation under binary treatments.
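
For readers unfamiliar with the term, the CATE this snippet studies has a standard potential-outcomes definition; the notation below (Y(1), Y(0) for potential outcomes, X for covariates) is the usual convention rather than anything taken from the paper:

```latex
% Conditional average treatment effect under a binary treatment:
% Y(1), Y(0) are potential outcomes; X are observed covariates.
\[
  \tau(x) \;=\; \mathbb{E}\bigl[\, Y(1) - Y(0) \mid X = x \,\bigr]
\]
```

Model selection here means choosing among candidate estimators of this quantity, which is hard precisely because Y(1) and Y(0) are never both observed for the same unit.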

AutoML • Causal Inference +2

RealCause: Realistic Causal Inference Benchmarking

no code implementations • 30 Nov 2020 • Brady Neal, Chin-wei Huang, Sunand Raghupathi

However, the best causal estimators on synthetic data are unlikely to be the best causal estimators on real data.

Benchmarking • Causal Inference

In Search of Robust Measures of Generalization

1 code implementation • NeurIPS 2020 • Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, Daniel M. Roy

A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk.
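
As a reminder of how these three quantities relate, the textbook decomposition below telescopes exactly; the notation (a learned predictor, the best predictor in the class, and population vs. empirical risk) is standard and not necessarily the paper's:

```latex
\[
  R(\hat h) - R(h^\star)
  \;=\; \underbrace{R(\hat h) - \hat R(\hat h)}_{\text{generalization error}}
  \;+\; \underbrace{\hat R(\hat h) - \hat R(h^\star)}_{\text{optimization error}}
  \;+\; \underbrace{\hat R(h^\star) - R(h^\star)}_{\text{concentration term}}
\]
```

Bounding the terms on the right therefore bounds the excess risk on the left, which is what motivates studying such bounds as measures of generalization.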

Generalization Bounds

On the Bias-Variance Tradeoff: Textbooks Need an Update

no code implementations • 17 Dec 2019 • Brady Neal

Through extensive experiments and analysis, we show a lack of a bias-variance tradeoff in neural networks when increasing network width.
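
Concretely, a predictor's variance can be estimated by retraining on many independent training draws and measuring the spread of predictions at fixed test inputs. The numpy sketch below illustrates that generic recipe with polynomial ridge regression as a stand-in model; it is not the paper's networks or experimental protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(3 * x)

def fit_poly_ridge(x_train, y_train, x_test, degree, lam=1e-3):
    """Ridge regression on polynomial features; returns predictions at x_test."""
    def feats(x):
        return np.vander(x, degree + 1, increasing=True)
    A = feats(x_train)
    w = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ y_train)
    return feats(x_test) @ w

x_test = np.linspace(-1.0, 1.0, 50)
preds = []
for _ in range(200):  # 200 independent training sets of 30 noisy points each
    x_tr = rng.uniform(-1.0, 1.0, 30)
    y_tr = true_fn(x_tr) + rng.normal(scale=0.3, size=30)
    preds.append(fit_poly_ridge(x_tr, y_tr, x_test, degree=8))
preds = np.stack(preds)

# Bias^2: squared gap between the average prediction and the truth.
# Variance: spread of predictions across training sets, averaged over x.
bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)
variance = np.mean(preds.var(axis=0))
print(f"bias^2 = {bias_sq:.4f}  variance = {variance:.4f}")
```

Repeating this while growing model capacity is the basic experimental design behind bias-variance studies; the paper's finding is that for network width, the variance term need not grow.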

In Support of Over-Parametrization in Deep Reinforcement Learning: an Empirical Study

no code implementations • ICML Workshop Deep Phenomena 2019 • Brady Neal, Ioannis Mitliagkas

There is significant recent evidence in supervised learning that, in the over-parametrized setting, wider networks achieve lower test error.

OpenAI Gym • reinforcement-learning +1

A Modern Take on the Bias-Variance Tradeoff in Neural Networks

no code implementations • 19 Oct 2018 • Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas

The bias-variance tradeoff tells us that as model complexity increases, bias falls and variance increases, leading to a U-shaped test error curve.
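
That claim rests on the textbook squared-error decomposition, written out below with y = f(x) + noise of variance sigma^2 and a predictor trained on dataset D (standard notation, not the paper's):

```latex
\[
  \mathbb{E}_{D,\varepsilon}\bigl[(y - \hat f_D(x))^2\bigr]
  \;=\; \underbrace{\bigl(f(x) - \mathbb{E}_D[\hat f_D(x)]\bigr)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}_D\bigl[(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)])^2\bigr]}_{\text{variance}}
  \;+\; \sigma^2
\]
```

The U-shaped curve follows only if variance keeps growing with complexity, which is exactly the assumption this line of work challenges for wide networks.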

Learning Generative Models with Locally Disentangled Latent Factors

no code implementations • ICLR 2018 • Brady Neal, Alex Lamb, Sherjil Ozair, Devon Hjelm, Aaron Courville, Yoshua Bengio, Ioannis Mitliagkas

One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks.

How well does your sampler really work?

no code implementations • 16 Dec 2017 • Ryan Turner, Brady Neal

We present a new data-driven benchmark system to evaluate the performance of new MCMC samplers.
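
To make "evaluating a sampler" concrete, one widely used diagnostic is the effective sample size (ESS) implied by a chain's autocorrelation; the numpy sketch below shows that generic diagnostic, not the data-driven benchmark system the paper proposes:

```python
import numpy as np

def effective_sample_size(chain):
    """ESS of a 1-D chain: length divided by the integrated autocorrelation time."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    # Autocovariance at lags 0..n-1 (direct correlation; O(n^2) but fine here).
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    acf = acov / acov[0]
    # Sum autocorrelations, truncating at the first non-positive value.
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau

# A sticky AR(1) chain: 10,000 raw draws, far fewer effective ones.
rng = np.random.default_rng(0)
chain = np.empty(10_000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = 0.95 * chain[t - 1] + rng.normal()
print(f"raw draws: {chain.size}, ESS: {effective_sample_size(chain):.0f}")
```

On this example the 10,000 raw draws collapse to a few hundred effective samples, the kind of gap between nominal and actual performance that a sampler benchmark has to surface.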

Meta-Learning
