Search Results for author: Michael Thomas Smith

Found 8 papers, 1 paper with code

Adversarial Vulnerability Bounds for Gaussian Process Classification

no code implementations • 19 Sep 2019 • Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez

To protect against this, we devise an adversarial bound (AB) for a Gaussian process classifier that holds for the entire input domain, bounding the potential for any future adversarial method to cause such a misclassification.

Classification, General Classification
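
As a hedged illustration of the setting (not the paper's bound), the sketch below fits a Gaussian process classifier with scikit-learn and randomly searches an epsilon-ball around a test point for label flips, the kind of misclassification the AB is designed to certify against. The epsilon and kernel settings are assumptions made for the toy example.

```python
# Toy empirical probe (not the paper's adversarial bound): the AB certifies
# the whole input domain, whereas this random search can only find, never
# rule out, adversarial perturbations.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = GaussianProcessClassifier(kernel=RBF(length_scale=0.5)).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]
label0 = clf.predict(x0[None, :])[0]
eps = 0.3  # L-infinity perturbation budget (assumed for illustration)

# Random search inside the eps-ball around x0 for a label flip.
deltas = rng.uniform(-eps, eps, size=(1000, 2))
flips = clf.predict(x0 + deltas) != label0
print(f"label flips found within eps={eps}: {flips.sum()} / {len(deltas)}")
```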

Differentially Private Regression and Classification with Sparse Gaussian Processes

no code implementations • 19 Sep 2019 • Michael Thomas Smith, Mauricio A. Alvarez, Neil D. Lawrence

We experiment with the use of inducing points to provide a sparse approximation and show that these can provide robust differential privacy in outlier areas and at higher dimensions.

Classification, Gaussian Processes, +2
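
For readers unfamiliar with inducing points, here is a minimal NumPy sketch of the subset-of-regressors sparse approximation that the sentence above refers to. It illustrates the sparsity mechanism only; it is not the paper's differentially private method and adds no privacy noise. The kernel, inducing-point locations, and noise level are assumptions.

```python
# Sparse GP regression via the subset-of-regressors (SoR) approximation:
# predictions depend on the data only through M inducing points Z, M << N.
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

Z = np.linspace(-3, 3, 10)[:, None]   # 10 inducing inputs (assumed grid)
noise = 0.1 ** 2                      # assumed observation noise variance

Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kuf = rbf(Z, X)
# Posterior over the inducing outputs: Sigma = (Kuu + noise^-1 Kuf Kuf^T)^-1
Sigma = np.linalg.inv(Kuu + Kuf @ Kuf.T / noise)

Xs = np.linspace(-3, 3, 5)[:, None]   # test inputs
Ksu = rbf(Xs, Z)
mean = Ksu @ Sigma @ Kuf @ y / noise              # SoR predictive mean
var = np.einsum("ij,jk,ik->i", Ksu, Sigma, Ksu)   # SoR predictive variance
print(mean, var)
```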

Multi-task Learning for Aggregated Data using Gaussian Processes

2 code implementations • NeurIPS 2019 • Fariba Yousefi, Michael Thomas Smith, Mauricio A. Álvarez

Our model represents each task as the linear combination of the realizations of latent processes that are integrated at a different scale per task.

Air Pollution Prediction, Epidemiology, +2
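
A toy simulation can make that generative assumption concrete: shared latent functions, mixed linearly per task, then aggregated over windows of a task-specific size. The sketch below is an illustration under those assumptions, not the authors' released implementation (which performs inference, not just simulation).

```python
# Toy forward simulation of the model idea: each task is a linear
# combination of shared latent processes, observed after aggregation
# (averaging) at a different scale per task.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)

# Two shared "latent processes" (fixed smooth functions here for brevity).
latents = np.stack([np.sin(t), np.cos(0.5 * t)])   # shape (Q=2, N)
W = np.array([[1.0, 0.3],     # task 1 mixing weights (assumed)
              [0.2, 1.5]])    # task 2 mixing weights (assumed)
f = W @ latents               # per-task underlying signals, shape (2, N)

def aggregate(signal, window):
    """Average a signal over non-overlapping windows (integration at a scale)."""
    n = len(signal) // window * window
    return signal[:n].reshape(-1, window).mean(axis=1)

y1 = aggregate(f[0], window=10)    # task 1 observed at a fine scale
y2 = aggregate(f[1], window=100)   # task 2 observed at a coarse scale
print(len(y1), len(y2))            # 100 vs 10 aggregated observations
```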

The Limitations of Model Uncertainty in Adversarial Settings

no code implementations • 6 Dec 2018 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes

Machine learning models are vulnerable to adversarial examples: minor perturbations of input samples deliberately crafted to cause misclassification.

BIG-bench Machine Learning, Gaussian Processes
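
As a concrete instance of such a perturbation, the sketch below runs the fast gradient sign method (FGSM, Goodfellow et al.) against a logistic-regression classifier, chosen here only because its input gradient is closed-form; the paper itself studies Gaussian process and Bayesian models under attacks of this kind.

```python
# FGSM sketch: perturb an input in the direction of the sign of the loss
# gradient with respect to that input.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x, label = X[0], y[0]
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
# For logistic regression, the cross-entropy input gradient is (p - y) * w.
grad = (p - label) * w
eps = 0.5  # perturbation budget (assumed for illustration)
x_adv = x + eps * np.sign(grad)

print("clean prediction:", clf.predict(x[None])[0],
      "adversarial prediction:", clf.predict(x_adv[None])[0])
```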

Differentially Private Gaussian Processes

no code implementations • 2 Jun 2016 • Michael Thomas Smith, Max Zwiessele, Neil D. Lawrence

A major challenge for machine learning is increasing the availability of data while respecting the privacy of individuals.

Gaussian Processes, Regression
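
For context on what a differentially private release looks like, here is a hedged sketch of generic output perturbation using the standard Gaussian mechanism (Dwork and Roth). It is not the mechanism derived in the paper, which works with the sensitivity of the GP posterior itself; the sensitivity, epsilon, and delta values are placeholder assumptions.

```python
# Generic (epsilon, delta)-DP output perturbation: add Gaussian noise
# calibrated to the sensitivity of the released value.
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-DP via calibrated Gaussian noise."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(0)
prediction = 3.2    # e.g. a GP posterior mean at a test point (assumed)
sensitivity = 0.5   # assumed bound on one individual's influence on the output
private = gaussian_mechanism(prediction, sensitivity,
                             epsilon=1.0, delta=1e-5, rng=rng)
print(private)
```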
