1 code implementation • 2 Feb 2023 • Marlon Tobaben, Aliaksandra Shysheya, John Bronskill, Andrew Paverd, Shruti Tople, Santiago Zanella-Beguelin, Richard E Turner, Antti Honkela
There has been significant recent progress in training differentially private (DP) models which achieve accuracy that approaches the best non-private models.
1 code implementation • 17 Jun 2022 • Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E Turner
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication efficient distributed training protocols.
no code implementations • ICLR 2022 • Stratis Markou, James Requeima, Wessel Bruinsma, Anna Vaughan, Richard E Turner
Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.
no code implementations • ICML Workshop AML 2021 • Elre Talea Oldewage, John F Bronskill, Richard E Turner
This paper examines the robustness of deployed few-shot meta-learning systems when they are fed an imperceptibly perturbed few-shot dataset, showing that the resulting predictions on test inputs can become worse than chance.
no code implementations • 1 Jan 2021 • Elre Talea Oldewage, John F Bronskill, Richard E Turner
Few-shot learning systems, especially those based on meta-learning, have recently made significant advances, and are now being considered for real world problems in healthcare, personalization, and science.
no code implementations • AABI Symposium 2021 • Marcin B. Tomczak, Richard E Turner
Bayesian learning of neural networks is attractive as it can protect against over-fitting and provides automatic methods for inferring important hyperparameters by maximizing the marginal probability of the data.
no code implementations • AABI Symposium 2021 • Rui Xia, Wessel Bruinsma, William Tebbutt, Richard E Turner
Many real-world prediction problems involve modelling the dependencies between multiple different outputs across the input space.
no code implementations • 15 Jun 2020 • Hugh Christensen, Simon Godsill, Richard E Turner
To reflect this, learning is also carried out in the presence of side information.
no code implementations • ICML Workshop LifelongML 2020 • Noel Loo, Siddharth Swaroop, Richard E Turner
The standard architecture for continual learning is a multi-headed neural network, which has shared body parameters and task-specific heads.
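The shared-body, task-specific-heads architecture described above can be sketched in a few lines. This is a minimal illustrative example (the class and parameter names are assumptions, not from the paper): a single shared hidden layer whose weights are reused across tasks, with a separate output head added for each new task.

```python
import numpy as np

class MultiHeadNet:
    """Sketch of a multi-headed continual-learning network:
    one shared body, one output head per task."""

    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rng = rng
        # Shared body parameters, reused across all tasks.
        self.W_body = rng.normal(size=(in_dim, hidden_dim)) * 0.1
        self.heads = {}  # task_id -> task-specific head weights

    def add_head(self, task_id, n_classes):
        # Each new task gets its own head; the body stays shared.
        hidden_dim = self.W_body.shape[1]
        self.heads[task_id] = self.rng.normal(size=(hidden_dim, n_classes)) * 0.1

    def forward(self, x, task_id):
        h = np.maximum(x @ self.W_body, 0.0)  # shared body (ReLU)
        return h @ self.heads[task_id]        # task-specific head

net = MultiHeadNet(in_dim=4, hidden_dim=8)
net.add_head("task_0", n_classes=3)
net.add_head("task_1", n_classes=2)
x = np.ones((5, 4))
logits_0 = net.forward(x, "task_0")  # shape (5, 3)
logits_1 = net.forward(x, "task_1")  # shape (5, 2)
```

In continual learning, the tension in this design is that the shared body is updated on every task, which is exactly where catastrophic forgetting can occur, while the frozen per-task heads preserve each task's output mapping.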