Search Results for author: Mark Collier

Found 19 papers, 9 papers with code

Implementing Neural Turing Machines

6 code implementations · 23 Jul 2018 · Mark Collier, Joeran Beel

Our implementation learns to solve three sequential learning tasks from the original NTM paper.

Deep Contextual Multi-armed Bandits

no code implementations · 25 Jul 2018 · Mark Collier, Hector Urdiales Llorens

Contextual multi-armed bandit problems arise frequently in important industrial applications.

Marketing · Multi-Armed Bandits +1

An Empirical Comparison of Syllabuses for Curriculum Learning

1 code implementation · 27 Sep 2018 · Mark Collier, Joeran Beel

Our experimental results provide an empirical basis for the choice of syllabus on a new problem that could benefit from curriculum learning.

Memory-Augmented Neural Networks for Machine Translation

1 code implementation · WS 2019 · Mark Collier, Joeran Beel

Memory-augmented neural networks (MANNs) have been shown to outperform other recurrent neural network architectures on a series of artificial sequence learning tasks, yet they have had limited application to real-world tasks.

Machine Translation · Translation

Scalable Deep Unsupervised Clustering with Concrete GMVAEs

no code implementations · 18 Sep 2019 · Mark Collier, Hector Urdiales

By applying a continuous relaxation to the discrete variables in these methods, we reduce the training time complexity to be constant in the number of clusters used.

Clustering
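
The continuous relaxation referred to in this snippet is typically the Concrete / Gumbel-Softmax distribution, which replaces a hard one-hot cluster draw with a differentiable approximation. A minimal sketch under that assumption (function and argument names are mine, not from the paper):

```python
import math
import random

def gumbel_softmax_sample(logits, temperature=0.5, rng=random):
    """Concrete / Gumbel-Softmax relaxation of a one-hot categorical
    sample. The draw is differentiable in the logits, which is what lets
    a discrete cluster assignment be trained by ordinary backprop."""
    # Perturb each logit with Gumbel(0, 1) noise: -log(-log(U)).
    noisy = [(l - math.log(-math.log(max(rng.random(), 1e-12)))) / temperature
             for l in logits]
    # Numerically stable softmax over the perturbed logits.
    m = max(noisy)
    exps = [math.exp(v - m) for v in noisy]
    z = sum(exps)
    return [e / z for e in exps]

# A relaxed assignment over 4 clusters; low temperature -> near one-hot.
probs = gumbel_softmax_sample([2.0, 0.5, 0.1, -1.0], temperature=0.5)
```

Because the sample is a smooth function of the logits, the expected training cost no longer requires enumerating every cluster per example.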

A Simple Probabilistic Method for Deep Classification under Input-Dependent Label Noise

no code implementations · 15 Mar 2020 · Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, Jesse Berent

By tuning the softmax temperature, we improve accuracy, log-likelihood and calibration both on image classification benchmarks with controlled label noise and on ImageNet-21k, which has naturally occurring label noise.

General Classification · Image Classification +2
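
The softmax temperature mentioned in the snippet is simply a scaling of the logits before normalising. A minimal sketch of the mechanism (my own illustration, not the paper's code):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaled softmax: T > 1 flattens the distribution
    (often better calibrated under label noise), T < 1 sharpens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

cool = softmax_with_temperature([3.0, 1.0, 0.0], temperature=0.5)
warm = softmax_with_temperature([3.0, 1.0, 0.0], temperature=2.0)
# A higher temperature spreads probability mass more evenly.
assert max(warm) < max(cool)
```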

VAEs in the Presence of Missing Data

1 code implementation · 9 Jun 2020 · Mark Collier, Alfredo Nazabal, Christopher K. I. Williams

Real-world datasets often contain entries with missing elements, e.g. in a medical dataset a patient is unlikely to have taken all possible diagnostic tests.

Imputation · Missing Elements
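
A common ingredient in VAEs for partially observed data, consistent with the snippet above, is to score reconstruction only on entries that were actually measured. A simplified sketch of that idea (the masked squared error stands in for the full ELBO reconstruction term; names are mine):

```python
def masked_reconstruction_error(x, x_hat, observed):
    """Mean squared reconstruction error restricted to observed entries.

    observed[i] is True where feature i was measured; missing entries
    contribute nothing, so the model is never penalised on values it
    cannot check."""
    err = 0.0
    n = 0
    for xi, xh, obs in zip(x, x_hat, observed):
        if obs:
            err += (xi - xh) ** 2
            n += 1
    return err / max(n, 1)

# Patient record with two missing diagnostic tests (None entries):
x        = [1.2, None, 0.7, None]
x_hat    = [1.0, 0.3, 0.9, -0.5]
observed = [v is not None for v in x]
loss = masked_reconstruction_error(x, x_hat, observed)
```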

Deep Classifiers with Label Noise Modeling and Distance Awareness

no code implementations · 6 Oct 2021 · Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.

Out-of-Distribution Detection

Transfer and Marginalize: Explaining Away Label Noise with Privileged Information

no code implementations · 18 Feb 2022 · Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou, Jesse Berent

Supervised learning datasets often have privileged information, in the form of features which are available at training time but not at test time, e.g. the ID of the annotator that provided the label.
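
As a concrete illustration of the privileged-information setup described above (field names are illustrative, not from the paper):

```python
# At training time each example carries privileged fields, such as the
# annotator ID, that will simply not exist in deployment.
train_features = {"pixels": [0.1, 0.9, 0.3], "annotator_id": 42}
test_features  = {"pixels": [0.4, 0.2, 0.8]}

def strip_privileged(features, privileged=("annotator_id",)):
    """Drop privileged features so a test-time model sees only what it
    will actually receive in deployment."""
    return {k: v for k, v in features.items() if k not in privileged}

# After stripping, train- and test-time feature sets line up.
assert set(strip_privileged(train_features)) == set(test_features)
```

The interesting question, which this paper studies, is how to exploit such fields during training without depending on them at test time.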

Massively Scaling Heteroscedastic Classifiers

no code implementations · 30 Jan 2023 · Mark Collier, Rodolphe Jenatton, Basil Mustafa, Neil Houlsby, Jesse Berent, Effrosyni Kokiopoulou

Heteroscedastic classifiers, which learn a multivariate Gaussian distribution over prediction logits, have been shown to perform well on image classification problems with hundreds to thousands of classes.

Classification · Contrastive Learning +1
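
The Gaussian-over-logits idea in the snippet is usually realised by Monte-Carlo sampling: draw noisy logits, push each draw through a softmax, and average. A sketch under a diagonal-covariance assumption (a simplification of the paper's multivariate setting; names are mine):

```python
import math
import random

def heteroscedastic_predict(mean_logits, logit_std, n_samples=200,
                            temperature=1.0, rng=random):
    """Monte-Carlo prediction for a diagonal heteroscedastic classifier:
    sample logits from N(mean, std^2), apply a temperature-scaled
    softmax to each sample, and average the probabilities."""
    k = len(mean_logits)
    avg = [0.0] * k
    for _ in range(n_samples):
        noisy = [(m + s * rng.gauss(0.0, 1.0)) / temperature
                 for m, s in zip(mean_logits, logit_std)]
        mx = max(noisy)                    # stable softmax
        exps = [math.exp(v - mx) for v in noisy]
        z = sum(exps)
        for i, e in enumerate(exps):
            avg[i] += e / (z * n_samples)
    return avg

# Large logit noise on class 0 tends to pull its averaged probability
# toward the other classes, reflecting input-dependent label noise.
probs = heteroscedastic_predict([2.0, 1.0, 0.0], [3.0, 0.1, 0.1])
```

Scaling this to many classes is exactly where the training cost becomes the bottleneck this paper addresses.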

When does Privileged Information Explain Away Label Noise?

1 code implementation · 3 Mar 2023 · Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander D'Amour, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise.

Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels

no code implementations · 10 Oct 2023 · Ke Wang, Guillermo Ortiz-Jimenez, Rodolphe Jenatton, Mark Collier, Efi Kokiopoulou, Pascal Frossard

Label noise is a pervasive problem in deep learning that often compromises the generalization performance of trained models.

Pretrained Visual Uncertainties

1 code implementation · 26 Feb 2024 · Michael Kirchhof, Mark Collier, Seong Joon Oh, Enkelejda Kasneci

Similar to standard pretraining, this enables the zero-shot transfer of uncertainties learned on a large pretraining dataset to specialized downstream datasets.

Retrieval
