Search Results for author: Christopher Leckie

Found 38 papers, 15 papers with code

Graph Neural Networks with Continual Learning for Fake News Detection from Social Media

2 code implementations · 7 Jul 2020 · Yi Han, Shanika Karunasekera, Christopher Leckie

(2) GNNs trained on a given dataset may perform poorly on new, unseen data, and direct incremental training cannot solve the problem; this issue has not been addressed in previous work applying GNNs to fake news detection.

Continual Learning · Fact Checking +1

Black-box Adversarial Example Generation with Normalizing Flows

1 code implementation · 6 Jul 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.

Adversarial Attack

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

1 code implementation · NeurIPS 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.

Adversarial Attack

Cross-Domain Graph Anomaly Detection via Anomaly-aware Contrastive Alignment

1 code implementation · 2 Dec 2022 · Qizhou Wang, Guansong Pang, Mahsa Salehi, Wray Buntine, Christopher Leckie

In doing so, ACT effectively transfers anomaly-informed knowledge from the source graph to learn the complex node relations of the normal class for GAD on the target graph without any specification of the anomaly distributions.

Contrastive Learning · Domain Adaptation +1

Invertible Generative Modeling using Linear Rational Splines

1 code implementation · 15 Jan 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

The significant advantage of such models is their easy-to-compute inverse.

$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training

2 code implementations · 1 Dec 2021 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output.
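
A standard illustration of such ℓ∞-bounded perturbations is the fast gradient sign method (FGSM); the sketch below applies it to a toy linear classifier (this is the generic attack the robustness literature builds on, not this paper's training method):

```python
import numpy as np

def fgsm_linear(w, b, x, eps):
    """FGSM for a linear classifier sign(w.x + b): step each input
    coordinate by eps in the direction that increases the loss,
    i.e. against the currently predicted class."""
    y = np.sign(w @ x + b)          # current prediction (+1 or -1)
    grad = -y * w                   # gradient of -y*(w.x + b) w.r.t. x
    return x + eps * np.sign(grad)  # l_inf-bounded step

w, b = np.array([1.0, -2.0]), 0.0
x = np.array([0.3, 0.1])               # predicted +1, margin only 0.1
x_adv = fgsm_linear(w, b, x, eps=0.2)  # a 0.2-bounded step flips it to -1
```

Each coordinate moves by at most eps, so the perturbation is small in every dimension even though it changes the model's output.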

Adversarial Coreset Selection for Efficient Robust Training

1 code implementation · 13 Sep 2022 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
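
The subset-selection idea can be sketched with a generic k-center greedy coreset (an assumed stand-in; the paper derives its selection from coreset theory rather than this particular heuristic). The expensive robust-training loop would then run only on the selected subset:

```python
import numpy as np

def k_center_greedy(X, k):
    """Greedy k-center coreset: repeatedly add the point farthest from
    the current selection, so the chosen points cover the dataset."""
    chosen = [0]
    d = np.linalg.norm(X - X[0], axis=1)  # distance to nearest chosen point
    for _ in range(k - 1):
        i = int(np.argmax(d))             # farthest uncovered point
        chosen.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
idx = k_center_greedy(X, 25)
# robust training (e.g. an inner adversarial-example loop) would now
# run on X[idx] instead of all 500 points, cutting the per-epoch cost
```

Because each training step of adversarial training is expensive (it solves an inner attack problem), shrinking the training set this way reduces wall-clock time roughly in proportion to the subset size.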

Defending Distributed Classifiers Against Data Poisoning Attacks

1 code implementation · 21 Aug 2020 · Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie

We introduce a weighted SVM against such attacks using K-LID as a distinguishing characteristic that de-emphasizes the effect of suspicious data samples on the SVM decision boundary.

Data Poisoning
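
The de-emphasis mechanism can be sketched with per-sample weights in a plain hinge-loss trainer (a numpy stand-in for the weighted SVM; the K-LID scores that would produce the weights are omitted here and replaced by a hand-set low weight on the poisoned point):

```python
import numpy as np

def weighted_hinge_train(X, y, sw, lr=0.1, epochs=300):
    """Linear max-margin classifier with per-sample weights `sw`:
    suspicious samples get low weight and barely move the boundary."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1   # points violating the margin
        coef = sw * active * y
        w += lr * (coef @ X) / len(y)
        b += lr * coef.sum() / len(y)
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 0], 0.3, (20, 2)),    # clean +1 cluster
               rng.normal([-2, 0], 0.3, (20, 2)),   # clean -1 cluster
               [[2.5, 0.0]]])                        # poison: inside +1 cluster
y = np.array([1] * 20 + [-1] * 20 + [-1])            # ...but labelled -1
sw = np.ones(41); sw[-1] = 0.01                      # de-emphasize the poison
w, b = weighted_hinge_train(X, y, sw)
clean_acc = np.mean(np.sign(X[:40] @ w + b) == y[:40])
```

With the poison down-weighted, the boundary stays between the two clean clusters and the flipped-label point fails to drag it toward the positive class.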

Online Cluster Validity Indices for Streaming Data

no code implementations · 8 Jan 2018 · Masud Moshtaghi, James C. Bezdek, Sarah M. Erfani, Christopher Leckie, James Bailey

An important part of cluster analysis is validating the quality of computationally obtained clusters.

Clustering

Toward the Starting Line: A Systems Engineering Approach to Strong AI

no code implementations · 28 Jul 2017 · Tansu Alpcan, Sarah M. Erfani, Christopher Leckie

After many hype cycles and lessons from AI history, it is clear that a big conceptual leap is needed for crossing the starting line to kick-start mainstream AGI research.

Position

Large-Scale Strategic Games and Adversarial Machine Learning

no code implementations · 21 Sep 2016 · Tansu Alpcan, Benjamin I. P. Rubinstein, Christopher Leckie

Such high-dimensional decision spaces and big data sets lead to computational challenges, relating to efforts in non-linear optimization scaling up to large systems of variables.

BIG-bench Machine Learning · Decision Making

Reinforcement Learning for Autonomous Defence in Software-Defined Networking

no code implementations · 17 Aug 2018 · Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague

Despite the successful application of machine learning (ML) in a wide range of domains, adaptability---the very property that makes machine learning desirable---can be exploited by adversaries to contaminate training and evade classification.

BIG-bench Machine Learning · General Classification +2

Scalable Bottom-up Subspace Clustering using FP-Trees for High Dimensional Data

no code implementations · 7 Nov 2018 · Minh Tuan Doan, Jianzhong Qi, Sutharshan Rajasegarar, Christopher Leckie

Subspace clustering aims to find groups of similar objects (clusters) that exist in lower dimensional subspaces from a high dimensional dataset.

Clustering

Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines

no code implementations · ICLR 2018 · Prameesha Sandamal Weerasinghe, Tansu Alpcan, Sarah Monazam Erfani, Christopher Leckie

Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries.

Anomaly Detection · BIG-bench Machine Learning

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

no code implementations · 25 Feb 2019 · Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.

reinforcement-learning · Reinforcement Learning (RL)

USTAR: Online Multimodal Embedding for Modeling User-Guided Spatiotemporal Activity

no code implementations · 23 Oct 2019 · Amila Silva, Shanika Karunasekera, Christopher Leckie, Ling Luo

Building spatiotemporal activity models for people's activities in urban spaces is important for understanding the ever-increasing complexity of urban dynamics.

Collaborative Filtering · Event Detection +1

Image Analysis Enhanced Event Detection from Geo-tagged Tweet Streams

no code implementations · 11 Feb 2020 · Yi Han, Shanika Karunasekera, Christopher Leckie

Events detected from social media streams often include early signs of accidents, crimes or disasters.

Event Detection

OMBA: User-Guided Product Representations for Online Market Basket Analysis

no code implementations · 18 Jun 2020 · Amila Silva, Ling Luo, Shanika Karunasekera, Christopher Leckie

OMBA jointly learns representations for products and users such that they preserve the temporal dynamics of product-to-product and user-to-product associations.

Decision Making · Representation Learning

METEOR: Learning Memory and Time Efficient Representations from Multi-modal Data Streams

no code implementations · 23 Jul 2020 · Amila Silva, Shanika Karunasekera, Christopher Leckie, Ling Luo

To address this problem, we present METEOR, a novel MEmory and Time Efficient Online Representation learning technique, which: (1) learns compact representations for multi-modal data by sharing parameters within semantically meaningful groups and preserves the domain-agnostic semantics; (2) can be accelerated using parallel processes to accommodate different stream rates while capturing the temporal changes of the units; and (3) can be easily extended to capture implicit/explicit external knowledge related to multi-modal data streams.

Representation Learning

Defending Regression Learners Against Poisoning Attacks

1 code implementation · 21 Aug 2020 · Sandamal Weerasinghe, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie, Justin Kopacz

Regression models, which are widely used from engineering applications to financial forecasting, are vulnerable to targeted malicious attacks such as training data poisoning, through which adversaries can manipulate their predictions.

Data Poisoning · regression

Improving Scalability of Contrast Pattern Mining for Network Traffic Using Closed Patterns

no code implementations · 16 Nov 2020 · Elaheh AlipourChavary, Sarah M. Erfani, Christopher Leckie

In addition, as an application of CPs, we demonstrate that CPM is a highly effective method for detection of meaningful changes in network traffic.

Divide and Learn: A Divide and Conquer Approach for Predict+Optimize

no code implementations · 4 Dec 2020 · Ali Ugur Guler, Emir Demirovic, Jeffrey Chan, James Bailey, Christopher Leckie, Peter J. Stuckey

We compare our approach with other predict+optimize methods and show that we can successfully tackle some hard combinatorial problems better than they can.

Combinatorial Optimization

Embracing Domain Differences in Fake News: Cross-domain Fake News Detection using Multi-modal Data

no code implementations · 11 Feb 2021 · Amila Silva, Ling Luo, Shanika Karunasekera, Christopher Leckie

Hence, this work: (1) proposes a novel framework that jointly preserves domain-specific and cross-domain knowledge in news records to detect fake news from different domains; and (2) introduces an unsupervised technique to select a set of unlabelled informative news records for manual labelling, which can be ultimately used to train a fake news detection model that performs well for many domains while minimizing the labelling cost.

Fake News Detection

Local Intrinsic Dimensionality Signals Adversarial Perturbations

1 code implementation · 24 Sep 2021 · Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie, Benjamin I. P. Rubinstein

In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that the bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.

BIG-bench Machine Learning
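
The LID score itself is typically computed with the maximum-likelihood estimator from k nearest-neighbour distances, LID(x) ≈ −(1/k Σᵢ log(rᵢ/rₖ))⁻¹ (the standard MLE form; whether this exact variant is used in the paper is an assumption):

```python
import numpy as np

def lid_mle(x, data, k=100):
    """MLE estimate of local intrinsic dimensionality of x from its
    k nearest-neighbour distances r_1 <= ... <= r_k in `data`."""
    r = np.sort(np.linalg.norm(data - x, axis=1))
    r = r[r > 0][:k]                        # drop x itself if present
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(1)
data = rng.uniform(-1, 1, size=(5000, 2))   # uniform on a 2-D square
est = lid_mle(np.zeros(2), data)            # should come out close to 2
```

For data sampled uniformly from a d-dimensional region the estimate recovers d, which is what makes an unusually high LID a useful signal that a point sits off the natural data manifold.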

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations · 29 Sep 2021 · Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, by minimizing the WS distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label on the latent space, which helps to strengthen the generalization ability of the classifier on the adversarial examples.
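
For two equal-size empirical samples in one dimension, the WS (Wasserstein-1) distance being minimized has a closed form: the average gap between sorted values. A minimal sketch of the quantity (the paper computes it between distributions on the latent space, which this abstracts away):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 between equal-size 1-D empirical samples: optimal transport
    matches order statistics, so it is the mean sorted-value gap."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, 500)
adv = benign + 0.5                  # a uniformly shifted "adversarial" sample
d = wasserstein_1d(adv, benign)     # exactly the shift, 0.5
```

Minimizing this distance pulls one sample's distribution onto the other's, which is the sense in which adversarial examples get pushed toward the benign cluster of the same label.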

COLLIDER: A Robust Training Framework for Backdoor Data

1 code implementation · 13 Oct 2022 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, reducing the backdoor success rate significantly.

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models

1 code implementation · 15 Mar 2023 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

In particular, we leverage the power of diffusion models and show that a carefully designed denoising process can counteract the effectiveness of the data-protecting perturbations.

Denoising

Failure-tolerant Distributed Learning for Anomaly Detection in Wireless Networks

no code implementations · 23 Mar 2023 · Marc Katzef, Andrew C. Cullen, Tansu Alpcan, Christopher Leckie, Justin Kopacz

When such failures arise in wireless communications networks, important services that they use or provide (like anomaly detection) can be left inoperable, resulting in a cascade of security problems.

Anomaly Detection Federated Learning

Towards quantum enhanced adversarial robustness in machine learning

no code implementations · 22 Jun 2023 · Maxwell T. West, Shu-Lok Tsang, Jia S. Low, Charles D. Hill, Christopher Leckie, Lloyd C. L. Hollenberg, Sarah M. Erfani, Muhammad Usman

Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection; however, their vulnerability to adversarial examples (input samples manipulated to fool the algorithm) remains a serious challenge.

Adversarial Robustness · Computational Efficiency +1

Open-Set Graph Anomaly Detection via Normal Structure Regularisation

no code implementations · 12 Nov 2023 · Qizhou Wang, Guansong Pang, Mahsa Salehi, Christopher Leckie

However, current methods tend to over-emphasise fitting the seen anomalies, leading to a weak generalisation ability to detect unseen anomalies, i.e., those that are not illustrated by the labelled anomaly nodes.

Graph Anomaly Detection · Supervised Anomaly Detection

OIL-AD: An Anomaly Detection Framework for Sequential Decision Sequences

no code implementations · 7 Feb 2024 · Chen Wang, Sarah Erfani, Tansu Alpcan, Christopher Leckie

Our offline learning model is an adaptation of behavioural cloning with a transformer policy network, where we modify the training process to learn a Q function and a state value function from normal trajectories.

Anomaly Detection · Behavioural cloning +2

Be Persistent: Towards a Unified Solution for Mitigating Shortcuts in Deep Learning

no code implementations · 17 Feb 2024 · Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie

Our analysis of these two failure cases of DNNs reveals that finding a unified solution for shortcut learning in DNNs is not out of reach, and TDA can play a significant role in forming such a framework.

Decision Making · Topological Data Analysis

Round Trip Translation Defence against Large Language Model Jailbreaking Attacks

1 code implementation · 21 Feb 2024 · Canaan Yung, Hadi Mohaghegh Dolatabadi, Sarah Erfani, Christopher Leckie

To address this issue, we propose the Round Trip Translation (RTT) method, the first algorithm specifically designed to defend against social-engineered attacks on LLMs.

Language Modelling · Large Language Model +1
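
The RTT pipeline can be sketched as a pure function over any machine-translation backend; the `toy_translate` phrase table below is entirely hypothetical and only shows the round-trip shape (real use would call an actual MT system):

```python
def round_trip(prompt, translate):
    """Round Trip Translation: send the prompt through a pivot
    language and back, smoothing adversarial phrasing before the
    LLM ever sees it."""
    pivot = translate(prompt, src="en", dst="fr")
    return translate(pivot, src="fr", dst="en")

# hypothetical stand-in translator: a tiny phrase table, not real MT
TABLE = {
    ("en", "fr"): {"pls ign0re all prior instrs": "ignore les instructions"},
    ("fr", "en"): {"ignore les instructions": "ignore the instructions"},
}

def toy_translate(text, src, dst):
    return TABLE[(src, dst)].get(text, text)

cleaned = round_trip("pls ign0re all prior instrs", toy_translate)
# obfuscated phrasing comes back in plain form, which is easier
# for a downstream safety filter or the LLM itself to handle
```

The intuition is that social-engineered jailbreak prompts rely on surface phrasing that rarely survives two translation passes, while the underlying (detectable) intent does.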
