Search Results for author: Sarah M. Erfani

Found 20 papers, 7 papers with code

Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning

no code implementations • 17 Feb 2024 • Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie

Our analysis of these two failure cases of DNNs reveals that finding a unified solution for shortcut learning in DNNs is not out of reach, and TDA can play a significant role in forming such a framework.

Decision Making • Topological Data Analysis

It's Simplex! Disaggregating Measures to Improve Certified Robustness

no code implementations • 20 Sep 2023 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein

Certified robustness circumvents the fragility of defences against adversarial attacks, by endowing model predictions with guarantees of class invariance for attacks up to a calculated size.
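For context on how such a "calculated size" is typically obtained, the sketch below computes the standard L2 certified radius of randomized smoothing (Cohen et al., 2019). It is a generic illustration only, not the disaggregated certification measures this paper proposes.

```python
# Illustration only: the standard randomized-smoothing L2 certificate, one
# common way a robustness radius is calculated. It is not the disaggregated
# measure proposed in "It's Simplex!".
from scipy.stats import norm

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """Radius within which the smoothed classifier's top class cannot change.

    p_a: lower confidence bound on the top-class probability.
    p_b: upper confidence bound on the runner-up class probability.
    sigma: standard deviation of the Gaussian smoothing noise.
    """
    if p_a <= p_b:
        return 0.0  # abstain: no certificate can be issued
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

print(certified_radius(0.9, 0.05, 0.25))  # ~0.37
```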

Towards quantum enhanced adversarial robustness in machine learning

no code implementations • 22 Jun 2023 • Maxwell T. West, Shu-Lok Tsang, Jia S. Low, Charles D. Hill, Christopher Leckie, Lloyd C. L. Hollenberg, Sarah M. Erfani, Muhammad Usman

Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection; however, their vulnerability to adversarial examples - input samples manipulated to fool the algorithm - remains a serious challenge.

Adversarial Robustness • Computational Efficiency +1

Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples

no code implementations • 9 Feb 2023 • Andrew C. Cullen, Shijie Liu, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein

In guaranteeing the absence of adversarial examples in an instance's neighbourhood, certification mechanisms play an important role in demonstrating neural net robustness.

Hybrid Quantum-Classical Generative Adversarial Network for High Resolution Image Generation

no code implementations • 22 Dec 2022 • Shu Lok Tsang, Maxwell T. West, Sarah M. Erfani, Muhammad Usman

A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of the classical GANs widely used in image manipulation and generation tasks.

Dimensionality Reduction • Generative Adversarial Network +4

Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity

1 code implementation • 12 Oct 2022 • Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein

In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.

Open-Ended Question Answering

Performance analysis of coreset selection for quantum implementation of K-Means clustering algorithm

no code implementations • 16 Jun 2022 • Fanzhe Qu, Sarah M. Erfani, Muhammad Usman

However, the impact of coreset selection on the performance of quantum K-Means clustering has not been explored.

Clustering
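As background for the coreset-selection study above, the sketch below shows one simple classical construction, the lightweight coreset of Bachem et al. (2018): sample points with a mix of uniform and distance-based probabilities, then reweight them so that weighted K-Means on the sample approximates K-Means on the full data. It is illustrative only and not necessarily one of the selection methods evaluated in the paper.

```python
# A minimal sketch of one classical coreset construction for K-Means
# ("lightweight coresets", Bachem et al., 2018). Shown for context only;
# not necessarily one of the coreset methods evaluated in the paper.
import numpy as np

def lightweight_coreset(X: np.ndarray, m: int, rng=None):
    """Sample m weighted points from X so that weighted K-Means on the
    coreset approximates K-Means on the full data."""
    rng = np.random.default_rng(rng)
    mean = X.mean(axis=0)
    sq_dist = ((X - mean) ** 2).sum(axis=1)
    # Mix uniform sampling with sampling proportional to distance from the mean.
    q = 0.5 / len(X) + 0.5 * sq_dist / sq_dist.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])  # importance weights correct for the sampling bias
    return X[idx], weights

X = np.random.default_rng(0).normal(size=(10_000, 2))
coreset, w = lightweight_coreset(X, m=200, rng=1)
```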

Local Intrinsic Dimensionality Signals Adversarial Perturbations

1 code implementation • 24 Sep 2021 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie, Benjamin I. P. Rubinstein

In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that these bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.

BIG-bench Machine Learning
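The LID estimates this line of work relies on are usually computed with the maximum-likelihood (Hill-type) estimator over k-nearest-neighbour distances; a minimal sketch is below. The paper's lower and upper bounds on the LID of a perturbed point are not reproduced here.

```python
# Minimal sketch of the maximum-likelihood (Hill-type) LID estimator commonly
# used in this line of work. The paper's perturbation bounds are not shown.
import numpy as np

def lid_mle(x: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """Estimate the local intrinsic dimensionality of x from its k nearest
    neighbours in `reference` (rows are reference points)."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    dists = dists[dists > 0][:k]  # drop x itself if it is in the reference set
    r_max = dists[-1]
    return -1.0 / np.mean(np.log(dists / r_max))

rng = np.random.default_rng(0)
ref = rng.normal(size=(5000, 32))
print(lid_mle(ref[0], ref, k=20))  # LID estimate of one point within the sample
```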

A Deep Adversarial Model for Suffix and Remaining Time Prediction of Event Sequences

1 code implementation • 15 Feb 2021 • Farbod Taymouri, Marcello La Rosa, Sarah M. Erfani

The results show improvements of up to four times over the state of the art in suffix and remaining time prediction of event sequences, specifically in the realm of business process executions.

Management

Neural Architecture Search via Combinatorial Multi-Armed Bandit

no code implementations • 1 Jan 2021 • Hanxun Huang, Xingjun Ma, Sarah M. Erfani, James Bailey

NAS can be performed via policy gradient, evolutionary algorithms, differentiable architecture search or tree-search methods.

Evolutionary Algorithms • Neural Architecture Search
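For readers unfamiliar with the bandit primitive behind such tree-/bandit-based search, the sketch below runs plain UCB1 over a hypothetical set of candidate operations with a placeholder reward. The paper's actual combinatorial formulation, search space, and training procedure are not reproduced here.

```python
# Minimal UCB1 sketch of the bandit primitive a combinatorial multi-armed
# bandit NAS method builds on. The operation set and reward are placeholders,
# not the paper's actual search space or evaluation procedure.
import math
import random

OPS = ["conv3x3", "conv5x5", "max_pool", "skip_connect"]  # hypothetical operation set
counts = {op: 0 for op in OPS}
values = {op: 0.0 for op in OPS}

def fake_validation_accuracy(op: str) -> float:
    """Placeholder reward; in NAS this would come from training and
    evaluating an architecture that uses the chosen operation."""
    base = {"conv3x3": 0.7, "conv5x5": 0.65, "max_pool": 0.5, "skip_connect": 0.6}
    return base[op] + random.gauss(0, 0.05)

for t in range(1, 201):
    # Play each arm once, then pick the arm maximising the UCB1 index.
    untried = [op for op in OPS if counts[op] == 0]
    if untried:
        op = untried[0]
    else:
        op = max(OPS, key=lambda o: values[o] + math.sqrt(2 * math.log(t) / counts[o]))
    reward = fake_validation_accuracy(op)
    counts[op] += 1
    values[op] += (reward - values[op]) / counts[op]  # running mean of rewards

print(max(OPS, key=lambda o: values[o]))  # operation the bandit converged to
```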

Improving Scalability of Contrast Pattern Mining for Network Traffic Using Closed Patterns

no code implementations • 16 Nov 2020 • Elaheh AlipourChavary, Sarah M. Erfani, Christopher Leckie

In addition, as an application of CPs, we demonstrate that CPM is a highly effective method for detecting meaningful changes in network traffic.

Defending Regression Learners Against Poisoning Attacks

1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz

Regression models, which are widely used from engineering applications to financial forecasting, are vulnerable to targeted malicious attacks such as training data poisoning, through which adversaries can manipulate their predictions.

Data Poisoning • Regression

Defending Distributed Classifiers Against Data Poisoning Attacks

1 code implementation • 21 Aug 2020 • Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie

We introduce a weighted SVM against such attacks using K-LID as a distinguishing characteristic that de-emphasizes the effect of suspicious data samples on the SVM decision boundary.

Data Poisoning
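The sketch below illustrates the general idea of de-emphasising suspicious samples in an SVM via per-sample weights. The weighting uses a simple k-NN, Hill-type LID score as a stand-in; it is not the paper's K-LID computation or weighting scheme.

```python
# Minimal sketch of the general idea: fit an SVM with per-sample weights that
# de-emphasise suspicious points. The weighting below uses a plain k-NN,
# Hill-type LID score as a stand-in for the paper's K-LID measure.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def lid_scores(X: np.ndarray, k: int = 20) -> np.ndarray:
    """Hill-type LID estimate per sample from k-nearest-neighbour distances."""
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dists = dists[:, 1:]  # drop the zero distance to self
    return -1.0 / np.mean(np.log(dists / dists[:, -1:]), axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Samples with anomalously high LID-style scores get lower influence on the
# decision boundary; typical samples keep weights close to 1.
scores = lid_scores(X)
weights = np.exp(-(scores - scores.mean()) / (scores.std() + 1e-9)).clip(0.05, 1.0)

clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
print(clf.score(X, y))
```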

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

1 code implementation • ICLR 2018 • Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E. Houle, James Bailey

Deep Neural Networks (DNNs) have recently been shown to be vulnerable to adversarial examples, which are carefully crafted instances that can mislead DNNs into making errors during prediction.

Adversarial Defense

Online Cluster Validity Indices for Streaming Data

no code implementations • 8 Jan 2018 • Masud Moshtaghi, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, James Bailey

An important part of cluster analysis is validating the quality of computationally obtained clusters.

Clustering

Toward the Starting Line: A Systems Engineering Approach to Strong AI

no code implementations • 28 Jul 2017 • Tansu Alpcan, Sarah M. Erfani, Christopher Leckie

After many hype cycles and lessons from AI history, it is clear that a big conceptual leap is needed for crossing the starting line to kick-start mainstream AGI research.

Position
