1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E. Turner, Antti Honkela
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models.
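Training differentially private models of this kind typically rests on the DP-SGD recipe: clip each per-example gradient, then add calibrated Gaussian noise before averaging. A minimal numpy sketch of that aggregation step (the function name and arguments are illustrative, not from the paper):

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """Aggregate per-example gradients with clipping and Gaussian noise.

    A sketch of the standard DP-SGD step: each example's gradient is
    clipped to clip_norm, the clipped gradients are summed, and Gaussian
    noise with standard deviation noise_multiplier * clip_norm is added
    before averaging over the batch.
    """
    n = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / n
```

The clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a formal privacy guarantee.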
no code implementations • 23 Nov 2022 • Elre T. Oldewage, John Bronskill, Richard E. Turner
This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset.
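An imperceptible support-set perturbation of this flavor can be sketched with the classic FGSM step (the helper name is hypothetical, and the loss gradients are assumed to come from the attacked model):

```python
import numpy as np

def fgsm_poison(support_images, loss_grads, eps):
    """Perturb a support set in the direction that increases the loss.

    A sketch of imperceptible few-shot poisoning via the standard FGSM
    step: move each pixel by eps in the sign of the loss gradient, then
    clip back to the valid [0, 1] image range.
    """
    perturbed = support_images + eps * np.sign(loss_grads)
    return np.clip(perturbed, 0.0, 1.0)
```

With a small eps the poisoned support images are visually indistinguishable from the originals, yet the model adapted on them can behave very differently on queries.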
1 code implementation • 20 Jun 2022 • Massimiliano Patacchiola, John Bronskill, Aliaksandra Shysheya, Katja Hofmann, Sebastian Nowozin, Richard E. Turner
In this paper we push this Pareto frontier in the few-shot image classification setting with a key contribution: a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance with a single forward pass of the user data (context).
Ranked #3 on Few-Shot Image Classification on Meta-Dataset
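The core mechanism of a context-conditioned squeeze-and-excitation gate can be sketched as follows; the weight names (w1, w2) and the plain bottleneck MLP are simplifying assumptions, not the paper's exact parameterization:

```python
import numpy as np

def case_gate(context_feats, w1, w2):
    """Task-conditioned channel gate in the squeeze-and-excitation style.

    Sketch: pooled context (support) features are squeezed over the task,
    passed through a small bottleneck MLP (hypothetical weights w1, w2),
    and squashed to a per-channel gate in (0, 1). One forward pass over
    the context data suffices -- no per-task gradient steps.
    """
    squeezed = context_feats.mean(axis=0)         # pool over the context set
    hidden = np.maximum(squeezed @ w1, 0.0)       # bottleneck + ReLU
    return 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate per channel

def apply_case(feats, gate):
    """Scale activations channel-wise by the task's gate."""
    return feats * gate
```

Because the gate only rescales channels of a frozen backbone, adaptation adds very few parameters per task, which is what pushes the accuracy/cost Pareto frontier.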
1 code implementation • 17 Jun 2022 • Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E. Turner
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication-efficient distributed training protocols.
2 code implementations • NeurIPS 2021 • John Bronskill, Daniela Massiceti, Massimiliano Patacchiola, Katja Hofmann, Sebastian Nowozin, Richard E. Turner
This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken.
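The trick for breaking this memory bottleneck is to backpropagate through only a random subset of the support set at each step, while the rest contribute forward-only. A minimal sketch of that subset split (the function name is illustrative):

```python
import numpy as np

def lite_split(n_support, h, rng):
    """Split a large support set for one low-memory optimization step.

    Sketch: all n_support examples are processed forward, but gradients
    are backpropagated through only a random subset of size h; the
    remaining indices contribute forward-only activations. Averaged over
    steps this yields an unbiased gradient estimate at a fraction of the
    activation-memory cost of backpropagating through the full task.
    """
    perm = rng.permutation(n_support)
    return perm[:h], perm[h:]  # (backprop subset, forward-only rest)
```

In a framework with autograd, the forward-only indices would be embedded under a no-gradient context so their activations are not retained for the backward pass.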
1 code implementation • ICCV 2021 • Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf
To close this gap, we present the ORBIT dataset and benchmark, grounded in the real-world application of teachable object recognizers for people who are blind/low-vision.
2 code implementations • ICML 2020 • John Bronskill, Jonathan Gordon, James Requeima, Sebastian Nowozin, Richard E. Turner
Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines.
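One task-conditioned alternative to conventional batch normalization can be sketched as follows; this is a simplified illustration of computing normalization moments from the task's own context set rather than the mini-batch, not the paper's full scheme:

```python
import numpy as np

def context_norm(query_feats, context_feats, eps=1e-5):
    """Normalize activations with moments from the current task's context.

    Sketch of task-conditioned normalization for meta-learning: rather
    than conventional batch statistics, which can leak information across
    episodes, the mean and variance are computed from the task's context
    (support) examples and applied to the query activations.
    """
    mu = context_feats.mean(axis=0)
    var = context_feats.var(axis=0)
    return (query_feats - mu) / np.sqrt(var + eps)
```

Normalizing each episode with its own context statistics keeps train-time and test-time behavior consistent, which is the crux of choosing a normalization layer for meta-learning pipelines.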
1 code implementation • NeurIPS 2019 • James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner
We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature.
Ranked #6 on Few-Shot Image Classification on Meta-Dataset
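The bare mechanism of a conditional-neural-process classifier can be sketched as below; the real model learns richer encoders and adaptation networks, so treat this as the minimal permutation-invariant core only:

```python
import numpy as np

def cnp_classifier_head(support_feats, support_labels, query_feats, n_classes):
    """Generate a task-specific classifier from the support set in one pass.

    Sketch of the conditional-neural-process view of few-shot
    classification: support embeddings are aggregated per class with a
    permutation-invariant mean, the pooled vectors act as classifier
    weights, and queries are scored by a softmax over the logits.
    """
    weights = np.stack([support_feats[support_labels == c].mean(axis=0)
                        for c in range(n_classes)])       # (n_classes, D)
    logits = query_feats @ weights.T
    logits -= logits.max(axis=1, keepdims=True)           # stable softmax
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)
```

Because the per-class mean is invariant to the order and number of shots, the same head handles tasks with arbitrary support-set sizes without retraining.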
1 code implementation • ICLR 2019 • Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E. Turner
We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass.
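Outputting a distribution over task-specific parameters in one pass can be sketched as follows; the pooling and the two linear heads (w_mu, w_logvar) are hypothetical stand-ins for the paper's amortization network:

```python
import numpy as np

def amortized_weight_posterior(support_feats, support_labels, n_classes,
                               w_mu, w_logvar):
    """Output a Gaussian over task-specific classifier weights.

    Sketch of amortized inference in the VERSA style: per-class support
    embeddings are pooled and mapped by two hypothetical linear heads
    (w_mu, w_logvar) to the mean and log-variance of a Gaussian over that
    class's weight vector. The mean pooling handles arbitrary numbers of
    shots, and no per-task optimization is needed.
    """
    mus, logvars = [], []
    for c in range(n_classes):
        pooled = support_feats[support_labels == c].mean(axis=0)
        mus.append(pooled @ w_mu)
        logvars.append(pooled @ w_logvar)
    return np.stack(mus), np.stack(logvars)
```

At test time one would sample weights from this Gaussian (or use the mean) to classify queries, so uncertainty over the task-specific parameters is carried through to the predictions.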