Search Results for author: Sandamal Weerasinghe

Found 3 papers, 3 papers with code

Local Intrinsic Dimensionality Signals Adversarial Perturbations

1 code implementation • 24 Sep 2021 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie, Benjamin I. P. Rubinstein

In this paper, we derive a lower-bound and an upper-bound for the LID value of a perturbed data point and demonstrate that the bounds, in particular the lower-bound, have a positive correlation with the magnitude of the perturbation.

BIG-bench Machine Learning
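The bounds above are stated in terms of a point's estimated LID. As background, LID is commonly estimated from nearest-neighbor distances; the sketch below shows the standard Levina-Bickel maximum-likelihood estimator, not the paper's bound derivation, and the `lid_mle` name and toy neighbor set are illustrative assumptions.

```python
import numpy as np

def lid_mle(x, neighbors):
    """Maximum-likelihood LID estimate of x from its k nearest neighbors:
    -k / sum_i log(r_i / r_k), where r_1 <= ... <= r_k are the
    distances from x to each neighbor (Levina-Bickel estimator)."""
    r = np.sort(np.linalg.norm(neighbors - x, axis=1))
    k = len(r)
    return -k / np.sum(np.log(r / r[-1]))

# toy example: neighbors of the origin on a line at distances 1, 2, 4, 8
x = np.array([0.0])
nbrs = np.array([[1.0], [2.0], [4.0], [8.0]])
print(lid_mle(x, nbrs))  # ~0.96: distances grow fast, so local dimension < 1
```

A perturbation that pushes a point off the data manifold tends to change its neighbor-distance profile, which is why the estimate (and the paper's bounds on it) can signal adversarial inputs.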

Defending Distributed Classifiers Against Data Poisoning Attacks

1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie

We introduce a weighted SVM against such attacks using K-LID as a distinguishing characteristic that de-emphasizes the effect of suspicious data samples on the SVM decision boundary.

Data Poisoning
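The de-emphasis idea can be illustrated with a toy weighted linear SVM trained by subgradient descent on a weighted hinge loss. The per-sample weights below are plain inputs standing in for the paper's K-LID-based weighting scheme (which is the paper's contribution and is not reproduced here); the function name and toy data are assumptions for illustration.

```python
import numpy as np

def weighted_linear_svm(X, y, w, lam=0.01, lr=0.1, epochs=500):
    """Subgradient descent on the weighted hinge loss
    (1/n) * sum_i w_i * max(0, 1 - y_i * (theta @ x_i + b)) + lam * ||theta||^2.
    Down-weighting a sample shrinks its pull on the decision boundary."""
    n, d = X.shape
    theta, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ theta + b)
        active = margins < 1  # samples violating the margin
        grad_t = 2 * lam * theta - (w[active, None] * y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -(w[active] * y[active]).sum() / n
        theta -= lr * grad_t
        b -= lr * grad_b
    return theta, b

# toy 1-D data with one label-flipped ("poisoned") point at x = -1.5;
# the suspicious sample gets a small weight, clean samples get weight 1
X = np.array([[-2.0], [-1.0], [1.0], [2.0], [-1.5]])
y = np.array([-1.0, -1.0, 1.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 1.0, 1.0, 0.01])
theta, b = weighted_linear_svm(X, y, w)
print(np.sign(X[:4] @ theta + b))  # clean points classified correctly
```

With all weights equal to 1 the flipped point would drag the boundary toward it; shrinking its weight restores the boundary the clean data implies.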

Defending Regression Learners Against Poisoning Attacks

1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz

Regression models, which are widely used from engineering applications to financial forecasting, are vulnerable to targeted malicious attacks such as training data poisoning, through which adversaries can manipulate their predictions.

Data Poisoning • Regression
