
Scaling Bayesian inference of mixed multinomial logit models to very large datasets

Variational inference methods have been shown to yield significant improvements in the computational efficiency of approximate Bayesian inference for mixed multinomial logit models compared to standard Markov chain Monte Carlo (MCMC) methods, without compromising accuracy. However, despite these demonstrated efficiency gains, existing methods still suffer from important limitations that prevent them from scaling to very large datasets while retaining the flexibility to allow for rich prior distributions and to capture complex posterior distributions. In this paper, we propose an amortized variational inference approach that leverages stochastic backpropagation, automatic differentiation and GPU-accelerated computation to effectively scale Bayesian inference in mixed multinomial logit models to very large datasets. Moreover, we show how normalizing flows can be used to increase the flexibility of the variational posterior approximations. Through an extensive simulation study, we empirically show that the proposed approach achieves computational speedups of multiple orders of magnitude over traditional maximum simulated likelihood estimation (MSLE) and MCMC approaches on large datasets without compromising estimation accuracy.
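The abstract names two building blocks: stochastic backpropagation (which relies on the reparameterization trick to obtain low-variance gradients of the variational objective) and normalizing flows (invertible transforms that enrich a simple variational posterior). The following is a minimal NumPy sketch of both ideas in their generic form, not the paper's actual implementation; the planar-flow form follows Rezende and Mohamed (2015), and all function names and shapes here are illustrative.

```python
import numpy as np

def reparameterize(mu, log_sigma, eps):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients w.r.t. (mu, log_sigma) can flow through the sample.
    return mu + np.exp(log_sigma) * eps

def planar_flow(z, u, w, b):
    # Planar normalizing flow: f(z) = z + u * tanh(w.z + b).
    # Returns the transformed samples and the log|det Jacobian| correction
    # needed to keep the flowed density properly normalized.
    a = np.tanh(z @ w + b)               # (n,) activation per sample
    f_z = z + np.outer(a, u)             # (n, d) transformed samples
    psi = (1.0 - a**2)[:, None] * w      # h'(w.z + b) * w, shape (n, d)
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f_z, log_det
```

Stacking several such flows on top of a reparameterized Gaussian yields a variational family that can capture skewed or multimodal posteriors while remaining amenable to stochastic gradient training.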
