no code implementations • 10 Nov 2022 • Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties.
3 code implementations • 7 Jun 2021 • Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Faris Sbahi, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.
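As a hedged illustration of the kind of metrics such baselines report, here is a minimal NumPy sketch of negative log-likelihood and expected calibration error on toy predictions; the function names and data are ours for illustration, not the library's API.

```python
import numpy as np

def nll(probs, labels):
    # Negative log-likelihood of the true labels under the predicted softmax.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def ece(probs, labels, n_bins=15):
    # Expected calibration error: per-bin gap between confidence and accuracy.
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            total += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return total

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=1000)   # stand-in model predictions
labels = rng.integers(0, 10, size=1000)
print(f"NLL: {nll(probs, labels):.3f}  ECE: {ece(probs, labels):.3f}")
```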
2 code implementations • ICLR 2021 • Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran
Recent approaches to efficiently ensemble neural networks have shown that strong robustness and uncertainty performance can be achieved with a negligible increase in parameter count over the original network.
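This paper's approach, MIMO, fits multiple independent subnetworks inside a single network by concatenating M inputs and predicting M outputs. Below is a minimal PyTorch sketch of that configuration, with all sizes chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class MIMONet(nn.Module):
    """M subnetworks share one backbone: M inputs go in, M predictions come out."""
    def __init__(self, in_dim=784, hidden=256, classes=10, m=3):
        super().__init__()
        self.m, self.classes = m, classes
        self.backbone = nn.Sequential(
            nn.Linear(in_dim * m, hidden), nn.ReLU(),
            nn.Linear(hidden, classes * m),
        )

    def forward(self, xs):                          # xs: (batch, m, in_dim)
        out = self.backbone(xs.flatten(1))          # concatenate the m inputs
        return out.view(-1, self.m, self.classes)   # one logit head per member

net = MIMONet()
# Training: each of the m members sees an independently sampled example.
logits = net(torch.randn(8, 3, 784))               # (8, 3, 10)
# Test time: repeat one input m times and average the members' predictions.
x = torch.randn(8, 784).unsqueeze(1).repeat(1, 3, 1)
probs = net(x).softmax(-1).mean(dim=1)             # ensemble-style prediction
```

Averaging the member predictions at test time gives ensemble-like robustness at roughly the cost of a single forward pass.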
1 code implementation • NeurIPS 2020 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Variational Autoencoders (VAEs) have seen widespread use in learned image compression.
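A minimal PyTorch sketch of the VAE backbone such codecs build on: the KL term of the ELBO, divided by ln 2, approximates the bit cost of transmitting the latents under relative entropy coding. The architecture and weights here are toy stand-ins, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, dim=784, z=32):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * z)   # outputs mean and log-variance
        self.dec = nn.Linear(z, dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(z)
        # KL(q(z|x) || N(0, I)) in nats; divided by ln 2 this roughly bounds
        # the number of bits needed to communicate a latent sample.
        kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(-1)
        return recon, kl

vae = TinyVAE()
x = torch.rand(4, 784)
recon, kl = vae(x)
distortion = F.mse_loss(recon, x)
rate_bits = kl.mean() / torch.log(torch.tensor(2.0))
loss = distortion + 1e-3 * kl.mean()   # rate-distortion trade-off
```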
no code implementations • 25 Sep 2019 • Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato
Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.
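A minimal sketch of mean-field Gaussian VI for one Bayesian layer, trained by maximizing the ELBO with the reparameterization trick; the layer sizes, unit-Gaussian prior, and Gaussian likelihood are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Mean-field Gaussian variational posterior over the weight matrix."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -3.0))  # softplus(rho) = std

    def forward(self, x):
        w = self.mu + F.softplus(self.rho) * torch.randn_like(self.mu)  # reparameterization
        return x @ w.t()

    def kl(self):
        # KL(q(w) || N(0, I)) for a fully factorized Gaussian posterior.
        std = F.softplus(self.rho)
        return 0.5 * (self.mu**2 + std**2 - 1.0 - 2.0 * std.log()).sum()

layer = BayesLinear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
elbo = -((layer(x) - y) ** 2).mean() - layer.kl() / 64  # likelihood term minus scaled KL
(-elbo).backward()                                      # ascend the ELBO
```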
no code implementations • 25 Sep 2019 • Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato
Standard compression algorithms work by mapping an image to a discrete code using an encoder; the original image can then be reconstructed from this code by a decoder.
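A toy sketch of that standard pipeline, with a random linear transform standing in for the learned encoder and decoder; real codecs learn these transforms and entropy-code the discrete symbols. The rounding step is the quantization that this line describes.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 64)) / 8      # stand-in linear "encoder"
W_dec = np.linalg.pinv(W_enc)              # stand-in "decoder"

image = rng.random(64)                     # a toy 64-pixel image
latent = W_enc @ image
code = np.round(latent * 4).astype(int)    # quantize to a discrete code
recon = W_dec @ (code / 4.0)               # decode the quantized latents

print("discrete symbols:", code[:5])
print("reconstruction error:", np.mean((image - recon) ** 2))
```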
2 code implementations • ICLR 2019 • Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.
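For a sense of scale, here is a hedged sketch of the simplest footprint reduction, uniform k-bit weight quantization. This is a plain baseline for contrast, not the paper's coding scheme (Minimal Random Code Learning, which instead encodes a random weight sample at a cost close to the variational KL).

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=100_000).astype(np.float32)

def quantize(w, bits=4):
    # Uniform quantization: map each float32 weight to a small integer code.
    lo, hi = w.min(), w.max()
    levels = 2**bits - 1
    codes = np.round((w - lo) / (hi - lo) * levels).astype(np.uint8)
    deq = lo + codes / levels * (hi - lo)   # dequantized weights
    return codes, deq

codes, deq = quantize(weights)
print(f"original: {weights.nbytes} bytes, "
      f"~{len(codes) * 4 // 8} bytes if packed at 4 bits/weight")
print(f"mean squared error: {np.mean((weights - deq)**2):.2e}")
```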
3 code implementations • NeurIPS 2018 • Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution.
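A minimal sketch of what that Gaussian approximation does on a toy bimodal posterior, fitted by Monte Carlo minimization of the reverse KL; the target density and hyperparameters are our illustrative choices.

```python
import torch

# Unnormalized bimodal "posterior": a mixture of two Gaussians.
def log_p(x):
    return torch.logsumexp(torch.stack([
        -0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2]), dim=0)

mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for _ in range(500):
    eps = torch.randn(256)
    x = mu + log_sigma.exp() * eps        # reparameterized samples from q
    log_q = -0.5 * eps**2 - log_sigma     # log N(x; mu, sigma) up to a constant
    loss = (log_q - log_p(x)).mean()      # Monte Carlo estimate of KL(q || p)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The reverse KL is mode-seeking: q typically collapses onto one of the modes.
print(mu.item(), log_sigma.exp().item())
```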
no code implementations • 9 Jan 2018 • Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes (GPs) that have proven effective on multiple supervised regression tasks.
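A minimal NumPy sketch of that hierarchy: draw a function from one GP, then feed its values in as the inputs of a second GP. The kernel and lengthscales are illustrative assumptions.

```python
import numpy as np

def rbf(x, lengthscale=1.0):
    # Squared-exponential kernel matrix for 1-D inputs.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
jitter = 1e-8 * np.eye(len(x))

# Layer 1: draw f1 ~ GP(0, k(x, x')).
f1 = rng.multivariate_normal(np.zeros_like(x), rbf(x) + jitter)
# Layer 2: use f1 as the inputs of another GP -- the "deep" composition.
# The resulting f2 can show non-stationary behavior a single GP cannot.
f2 = rng.multivariate_normal(np.zeros_like(x), rbf(f1) + jitter)
```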