no code implementations • 26 May 2023 • Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training.
1 code implementation • 3 Mar 2023 • Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander D'Amour, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou
Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise.
no code implementations • 30 Jan 2023 • Mark Collier, Rodolphe Jenatton, Basil Mustafa, Neil Houlsby, Jesse Berent, Effrosyni Kokiopoulou
Heteroscedastic classifiers, which learn a multivariate Gaussian distribution over prediction logits, have been shown to perform well on image classification problems with hundreds to thousands of classes.
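The core mechanic — predicting a Gaussian over the logits and averaging Monte Carlo samples through the softmax — can be sketched as follows. This is an illustrative sketch only: the diagonal covariance and the function names are simplifying assumptions, not the parameterization used in the paper (which models richer covariance structure).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def heteroscedastic_predict(mu, sigma, n_samples=1000):
    """Monte Carlo prediction for a heteroscedastic classifier.

    mu:    (num_classes,) predicted logit means
    sigma: (num_classes,) predicted logit standard deviations
           (diagonal covariance assumed here for simplicity)
    """
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    logits = mu + sigma * eps                      # sample noisy logits
    return softmax(logits, axis=-1).mean(axis=0)   # average class probabilities

# High logit noise on class 0 pulls its averaged probability
# toward the other classes, unlike a plain softmax of the means.
probs = heteroscedastic_predict(np.array([2.0, 1.0, 0.0]),
                                np.array([3.0, 0.1, 0.1]))
```

Averaging softmax outputs over logit samples (rather than taking the softmax of the mean logits) is what lets input-dependent noise change the predicted class probabilities.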
1 code implementation • 15 Jul 2022 • Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures.
no code implementations • 18 Feb 2022 • Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou, Jesse Berent
Supervised learning datasets often have privileged information, in the form of features that are available at training time but not at test time, e.g. the ID of the annotator who provided the label.
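A toy illustration of the train-time-only nature of privileged information (PI): the model sees `[x, pi]` during training, but at test time the PI slot must be filled with a placeholder. The linear model and the zero-imputation scheme here are illustrative assumptions, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_x, d_pi = 200, 5, 3

x = rng.standard_normal((n, d_x))
pi = rng.standard_normal((n, d_pi))        # e.g. per-example annotator features
w_true = rng.standard_normal(d_x)
y = x @ w_true + 0.1 * rng.standard_normal(n)

# Training: fit on the concatenated [x, pi] features
X_train = np.concatenate([x, pi], axis=1)
w, *_ = np.linalg.lstsq(X_train, y, rcond=None)

# Test time: PI is unavailable, so its slot is zero-imputed
X_test = np.concatenate([x, np.zeros_like(pi)], axis=1)
pred = X_test @ w
```

The interesting modeling question, which the papers above address, is how to exploit PI during training without degrading predictions once it is absent.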
no code implementations • 6 Oct 2021 • Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou
Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.
2 code implementations • 7 Jun 2021 • Zachary Nado, Neil Band, Mark Collier, Josip Djolonga, Michael W. Dusenberry, Sebastian Farquhar, Qixuan Feng, Angelos Filos, Marton Havasi, Rodolphe Jenatton, Ghassen Jerfel, Jeremiah Liu, Zelda Mariet, Jeremy Nixon, Shreyas Padhy, Jie Ren, Tim G. J. Rudner, Faris Sbahi, Yeming Wen, Florian Wenzel, Kevin Murphy, D. Sculley, Balaji Lakshminarayanan, Jasper Snoek, Yarin Gal, Dustin Tran
In this paper we introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.
no code implementations • CVPR 2021 • Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent
We place a latent variable with a multivariate Normal distribution on the final hidden layer of a neural network classifier.
Ranked #5 on Image Classification on WebVision-1000
no code implementations • 9 Sep 2020 • Mark Collier, Efi Kokiopoulou, Andrea Gesmundo, Jesse Berent
We propose the use of sparse routing networks for continual learning.
1 code implementation • 9 Jun 2020 • Mark Collier, Alfredo Nazabal, Christopher K. I. Williams
Real-world datasets often contain entries with missing elements, e.g. in a medical dataset, a patient is unlikely to have taken every possible diagnostic test.
no code implementations • 15 Mar 2020 • Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent
By tuning the softmax temperature, we improve accuracy, log-likelihood, and calibration both on image classification benchmarks with controlled label noise and on ImageNet-21k, which has naturally occurring label noise.
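The temperature knob itself is simple to state in code — a minimal sketch, with the function name and the example logits chosen for illustration:

```python
import numpy as np

def softmax_with_temperature(logits, tau=1.0):
    """Temperature-scaled softmax: tau > 1 softens the distribution,
    tau < 1 sharpens it. tau is a tunable hyperparameter."""
    z = logits / tau
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

p_sharp = softmax_with_temperature(np.array([2.0, 1.0, 0.0]), tau=0.5)
p_soft = softmax_with_temperature(np.array([2.0, 1.0, 0.0]), tau=5.0)
# Higher temperature moves the probabilities toward uniform.
```

In the heteroscedastic setting above, the temperature trades off the bias and variance of the Monte Carlo softmax approximation, which is why tuning it affects calibration as well as accuracy.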
no code implementations • 18 Sep 2019 • Mark Collier, Hector Urdiales
By applying a continuous relaxation to the discrete variables in these methods, we reduce the training-time complexity to be constant in the number of clusters used.
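One standard continuous relaxation of a discrete (categorical) variable is the Concrete / Gumbel-softmax distribution, sketched below. This is a generic illustration of the relaxation technique, not the specific model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=0.5):
    """Concrete / Gumbel-softmax relaxation of a categorical sample.

    Replaces a hard, non-differentiable cluster assignment with a
    differentiable point on the probability simplex; as tau -> 0
    the samples approach one-hot vectors.
    """
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

sample = gumbel_softmax_sample(np.array([1.0, 0.5, -0.5]), tau=0.1)
# At low temperature the relaxed sample is close to one-hot.
```

Because the relaxed sample is differentiable in the logits, gradients flow through the assignment without enumerating all clusters, which is what makes the constant-in-clusters training cost possible.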
1 code implementation • WS 2019 • Mark Collier, Joeran Beel
Memory-augmented neural networks (MANNs) have been shown to outperform other recurrent neural network architectures on a series of artificial sequence learning tasks, yet they have had limited application to real-world tasks.
1 code implementation • 27 Sep 2018 • Mark Collier, Joeran Beel
Our experimental results provide an empirical basis for the choice of syllabus on a new problem that could benefit from curriculum learning.
no code implementations • 25 Jul 2018 • Mark Collier, Hector Urdiales Llorens
Contextual multi-armed bandit problems arise frequently in important industrial applications.
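For readers unfamiliar with the setting, a contextual bandit policy can be sketched with a simple epsilon-greedy baseline over per-arm linear reward models. This baseline is for illustration only; the paper itself studies Bayesian neural network approaches such as Thompson sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_action(context, weights, epsilon=0.1):
    """Epsilon-greedy policy for a contextual bandit.

    context: (d,) feature vector for the current round.
    weights: (n_arms, d) linear reward model, one row per arm.
    With probability epsilon explore a random arm; otherwise pick
    the arm with the highest predicted reward for this context.
    """
    if rng.uniform() < epsilon:
        return int(rng.integers(weights.shape[0]))  # explore
    return int(np.argmax(weights @ context))        # exploit

w = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])        # 3 arms, 2-dimensional contexts
a = epsilon_greedy_action(np.array([1.0, 0.2]), w)
```

Exploration strategies that account for model uncertainty (rather than uniform random exploration) are what the Bayesian approaches above improve on.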
7 code implementations • 23 Jul 2018 • Mark Collier, Joeran Beel
Our implementation learns to solve three sequential learning tasks from the original NTM paper.