no code implementations • 4 May 2022 • Michael Thomas Smith, Magnus Ross, Joel Ssematimba, Pablo A. Alvarado, Mauricio Alvarez, Engineer Bainomugisha, Richard Wilkinson
Networks of low-cost sensors are becoming ubiquitous, but they often suffer from poor accuracy and drift.
no code implementations • 19 Sep 2019 • Michael Thomas Smith, Mauricio A. Alvarez, Neil D. Lawrence
We experiment with inducing points to construct a sparse approximation, and show that this approach can provide robust differential privacy in outlier regions and at higher dimensions.
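As a rough illustration of the inducing-point idea (a sparse approximation that summarises the data through a small set of pseudo-inputs), here is a minimal sketch using GPflow's SGPR model; the data, kernel choice, and inducing locations are illustrative assumptions, and this omits the paper's differential-privacy machinery entirely.

```python
# Minimal sketch of sparse GP regression with inducing points (GPflow SGPR).
# Illustrative only -- not the paper's differentially private variant.
import numpy as np
import gpflow

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))                      # training inputs
Y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)    # noisy targets
Z = np.linspace(0, 10, 10).reshape(-1, 1)             # 10 inducing locations

model = gpflow.models.SGPR(
    (X, Y),
    kernel=gpflow.kernels.SquaredExponential(),
    inducing_variable=Z,
)
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

mean, var = model.predict_f(np.array([[5.0]]))        # posterior mean and variance
```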
no code implementations • 19 Sep 2019 • Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez
To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds over the entire input domain, bounding the potential for any future adversarial method to cause such misclassification.
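The bound itself is analytic, but the problem it addresses can be illustrated with a brute-force check: can any perturbation inside an epsilon-ball flip a GP classifier's prediction? The sketch below (toy data, sklearn's GaussianProcessClassifier, a made-up epsilon) performs that check by grid search; the paper's contribution is a bound that makes such a search unnecessary over the whole input domain.

```python
# Brute-force illustration of the question the adversarial bound answers:
# within an L-inf ball of radius eps, can the GP classifier's label flip?
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)               # toy linear boundary
clf = GaussianProcessClassifier(kernel=RBF(1.0)).fit(X, y)

x0, eps = np.array([0.1, 0.05]), 0.2                  # point near the boundary
offsets = np.stack(
    np.meshgrid(*[np.linspace(-eps, eps, 21)] * 2), -1
).reshape(-1, 2)
preds = clf.predict(x0 + offsets)
print("prediction can be flipped within the ball:", len(set(preds)) > 1)
```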
2 code implementations • NeurIPS 2019 • Fariba Yousefi, Michael Thomas Smith, Mauricio A. Álvarez
Our model represents each task as a linear combination of realizations of latent processes, each integrated at a task-specific scale.
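A schematic of this construction, in our own notation (the paper's exact formulation may differ): each task output mixes Q shared latent processes, and each observation averages that output over a task-specific support.

```latex
\[
  f_d(x) = \sum_{q=1}^{Q} w_{d,q}\, u_q(x), \qquad
  y_{d,i} = \frac{1}{|v_{d,i}|} \int_{v_{d,i}} f_d(x)\, dx + \epsilon_{d,i}
\]
% The supports v_{d,i}, and hence the integration scale, differ per task d.
```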
no code implementations • 6 Dec 2018 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
Machine learning models are vulnerable to adversarial examples: minor perturbations to input samples intended to deliberately cause misclassification.
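To make the definition concrete, here is a minimal numpy sketch of one standard attack, the fast gradient sign method (FGSM), on a toy logistic model; the weights and inputs are invented, and this only illustrates the concept, not the method studied in the paper.

```python
# FGSM on a toy logistic-regression model: nudge the input in the direction
# that most increases the loss, and watch the predicted class flip.
import numpy as np

w, b = np.array([2.0, -1.0]), 0.1            # pretend these are trained weights
x, y = np.array([0.4, 0.6]), 1               # input correctly classified as 1

sigmoid = lambda z: 1 / (1 + np.exp(-z))
p = sigmoid(w @ x + b)
grad = (p - y) * w                           # gradient of cross-entropy wrt x
x_adv = x + 0.3 * np.sign(grad)              # small L-inf perturbation

print("clean class 1:", sigmoid(w @ x + b) > 0.5)        # True
print("adversarial class 1:", sigmoid(w @ x_adv + b) > 0.5)  # False
```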
no code implementations • 6 Sep 2018 • Michael Thomas Smith, Mauricio A. Alvarez, Neil D. Lawrence
Many datasets are in the form of tables of binned data.
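A toy example of what such data look like (names and values invented): only aggregates over intervals are observed, while the underlying function stays latent.

```python
# Binned data: one observation per interval (e.g. age bands), with the
# underlying density f only accessible through its integral over each bin.
import numpy as np

edges = np.array([0, 18, 30, 45, 65, 100])    # bin edges (e.g. age bands)
counts = np.array([120, 340, 510, 430, 90])   # one aggregate per bin
# Each count is approximately the integral of the latent f over its bin:
#   y_i ~ integral of f(x) dx over [edges[i], edges[i+1]]
density = counts / np.diff(edges)             # naive per-bin average rate
print(dict(zip(zip(edges[:-1], edges[1:]), density.round(2))))
```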
no code implementations • 17 Nov 2017 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference.
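One experiment this framing enables, sketched below with sklearn's GaussianProcessClassifier on toy data (not the paper's setup): compare the model's predictive confidence on clean inputs against perturbed ones, on the intuition that Bayesian uncertainty might flag adversarial points.

```python
# Does a GP classifier become less confident on perturbed inputs?
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)   # toy circular boundary
clf = GaussianProcessClassifier(kernel=RBF(1.0)).fit(X, y)

x_clean = X[:5]
x_pert = x_clean + 0.5 * np.sign(rng.normal(size=x_clean.shape))
print("clean confidence:    ", clf.predict_proba(x_clean).max(axis=1))
print("perturbed confidence:", clf.predict_proba(x_pert).max(axis=1))
```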
no code implementations • 2 Jun 2016 • Michael Thomas Smith, Max Zwiessele, Neil D. Lawrence
A major challenge for machine learning is increasing the availability of data while respecting the privacy of individuals.