Search Results for author: Sarah Erfani

Found 25 papers, 19 papers with code

SynthNet: Learning synthesizers end-to-end

no code implementations ICLR 2019 Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey

Learning synthesizers and generating music in the raw audio domain is a challenging task.

Round Trip Translation Defence against Large Language Model Jailbreaking Attacks

1 code implementation 21 Feb 2024 Canaan Yung, Hadi Mohaghegh Dolatabadi, Sarah Erfani, Christopher Leckie

To address this issue, we propose the Round Trip Translation (RTT) method, the first algorithm specifically designed to defend against social-engineered attacks on LLMs.

Language Modelling, Large Language Model +1
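
The snippet above only names the defence, but the round-trip idea is simple to sketch. Below is a minimal, hypothetical Python sketch; the `translate` helper and the pivot languages are placeholders for whatever machine-translation backend is available, and none of this is the authors' released code.

```python
# Minimal sketch of a round-trip translation (RTT) pre-processing step.
# `translate` is a placeholder for any machine-translation backend; it is not
# part of the paper's code, and the pivot languages are arbitrary choices.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder: return `text` translated from `source` into `target`."""
    raise NotImplementedError("plug in an MT model or translation service here")

def round_trip(prompt: str, pivots=("fr", "de")) -> str:
    """Paraphrase a prompt by translating it out to pivot languages and back.

    Intuition from the paper: social-engineered jailbreak prompts tend to lose
    their carefully crafted surface form after round-trip translation, while
    benign requests keep their meaning.
    """
    text = prompt
    for lang in pivots:
        text = translate(text, source="en", target=lang)
        text = translate(text, source=lang, target="en")
    return text

# Usage: pass round_trip(user_prompt) to the LLM instead of the raw prompt.
```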

OIL-AD: An Anomaly Detection Framework for Sequential Decision Sequences

no code implementations 7 Feb 2024 Chen Wang, Sarah Erfani, Tansu Alpcan, Christopher Leckie

Our offline learning model is an adaptation of behavioural cloning with a transformer policy network, where we modify the training process to learn a Q function and a state value function from normal trajectories.

Anomaly Detection, Behavioural cloning +2
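
A rough PyTorch sketch of the architecture the sentence above describes: a transformer policy trained by behavioural cloning, with additional heads that estimate a Q function and a state-value function from normal trajectories. The layer sizes, head design, and the scoring comment are assumptions, not the authors' implementation.

```python
# Illustrative transformer policy with behavioural-cloning, Q and value heads.
import torch
import torch.nn as nn

class PolicyWithValueHeads(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.policy_head = nn.Linear(d_model, n_actions)  # behavioural-cloning logits
        self.q_head = nn.Linear(d_model, n_actions)       # Q(s, a) estimates
        self.v_head = nn.Linear(d_model, 1)                # state value V(s)

    def forward(self, states: torch.Tensor):
        # states: (batch, seq_len, state_dim)
        h = self.encoder(self.embed(states))
        return self.policy_head(h), self.q_head(h), self.v_head(h)

# An anomaly score for a step could then combine the policy's surprise at the
# observed action with the gap between Q(s, a) and V(s); the exact scoring rule
# is not given in the snippet above.
```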

The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models

1 code implementation 15 Mar 2023 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

In particular, we leverage the power of diffusion models and show that a carefully designed denoising process can counteract the effectiveness of the data-protecting perturbations.

Denoising
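
The denoising idea in the sentence above can be sketched as diffusion-based purification: partially diffuse the protected image, then run the reverse process of a pretrained diffusion model. The `model(x, t)` noise-prediction interface, the schedule indexing, and the DDIM-style reverse step are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of diffusion-based purification of "protected" (unlearnable) images.
import torch

def purify(x0, model, alphas_cumprod, t_star=100):
    """x0: image batch scaled to [-1, 1]; alphas_cumprod: 1-D DDPM schedule tensor."""
    a_bar = alphas_cumprod[t_star]
    noise = torch.randn_like(x0)
    # Forward diffusion to step t_star washes out small protective perturbations.
    x = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # Reverse (denoising) process back to step 0.
    for t in range(t_star, 0, -1):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, t_batch)                             # predicted noise
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # estimate of the clean image
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
    return x
```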

Distilling Cognitive Backdoor Patterns within an Image

1 code implementation 26 Jan 2023 Hanxun Huang, Xingjun Ma, Sarah Erfani, James Bailey

We conduct extensive experiments to show that CD (Cognitive Distillation) can robustly detect a wide range of advanced backdoor attacks.

COLLIDER: A Robust Training Framework for Backdoor Data

1 code implementation 13 Oct 2022 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

We show the effectiveness of the proposed method for robust training of DNNs on various poisoned datasets, reducing the backdoor success rate significantly.

Adversarial Coreset Selection for Efficient Robust Training

1 code implementation 13 Sep 2022 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training.
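
A schematic picture of how such a training loop could look, assuming a PyTorch setup: adversarial training is run only on a small, periodically refreshed coreset. The top-k-loss selection rule below is a crude stand-in for the paper's principled, gradient-based coreset selection, and `attack(model, x, y)` stands for any adversarial example generator such as PGD.

```python
# Schematic coreset-based adversarial training loop (illustrative only).
import torch

def select_coreset(model, loss_fn, loader, k):
    """Score each batch by its clean loss and keep the indices of the k largest."""
    scores = []
    with torch.no_grad():
        for idx, (x, y) in enumerate(loader):
            scores.append((loss_fn(model(x), y).item(), idx))
    return {i for _, i in sorted(scores, reverse=True)[:k]}

def robust_train(model, loss_fn, loader, attack, optimizer, epochs=10, k=100, refresh=5):
    coreset = select_coreset(model, loss_fn, loader, k)
    for epoch in range(epochs):
        if epoch > 0 and epoch % refresh == 0:       # re-select as the model evolves
            coreset = select_coreset(model, loss_fn, loader, k)
        for idx, (x, y) in enumerate(loader):
            if idx not in coreset:                   # skip everything outside the coreset
                continue
            x_adv = attack(model, x, y)              # craft adversarial examples
            optimizer.zero_grad()
            loss_fn(model(x_adv), y).backward()
            optimizer.step()
```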

Detecting Arbitrary Order Beneficial Feature Interactions for Recommender Systems

1 code implementation 28 Jun 2022 Yixin Su, Yunxiang Zhao, Sarah Erfani, Junhao Gan, Rui Zhang

Detecting beneficial feature interactions is essential in recommender systems, and existing approaches achieve this by examining all the possible feature interactions.

Recommendation Systems

Robust Task-Oriented Dialogue Generation with Contrastive Pre-training and Adversarial Filtering

no code implementations 20 May 2022 Shiquan Yang, Xinting Huang, Jey Han Lau, Sarah Erfani

Data artifacts incentivize machine learning models to learn non-transferable generalizations by exploiting shortcuts in the data, and there is growing evidence that such artifacts contribute to the strong results deep learning models achieve on recent natural language processing benchmarks.

Contrastive Learning, Dialogue Generation

An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation

1 code implementation ACL 2022 Shiquan Yang, Rui Zhang, Sarah Erfani, Jey Han Lau

To obtain a transparent reasoning process, we introduce a neuro-symbolic framework that performs explicit reasoning and justifies model decisions with reasoning chains.

Dialogue Generation, Task-Oriented Dialogue Systems +1

$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training

2 code implementations 1 Dec 2021 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output.

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations 29 Sep 2021 Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, by minimizing the Wasserstein (WS) distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label in the latent space, which helps strengthen the generalization ability of the classifier on adversarial examples.
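
One way to realize such a latent-space Wasserstein term is an entropic-regularized (Sinkhorn) optimal-transport cost between adversarial and benign features of the same class, added to the training loss. The sketch below is illustrative only and is not the authors' exact objective.

```python
# Illustrative Sinkhorn (entropic OT) regularizer over latent features.
import math
import torch

def sinkhorn_cost(x, y, eps=0.1, n_iters=50):
    """Entropic-regularized OT cost between point clouds x (n, d) and y (m, d)."""
    C = torch.cdist(x, y, p=2) ** 2                  # pairwise squared Euclidean costs
    n, m = C.shape
    f = torch.zeros(n, device=x.device)              # dual potentials
    g = torch.zeros(m, device=x.device)
    log_mu, log_nu = -math.log(n), -math.log(m)      # uniform marginals
    for _ in range(n_iters):                         # log-domain Sinkhorn updates
        f = eps * (log_mu - torch.logsumexp((g[None, :] - C) / eps, dim=1))
        g = eps * (log_nu - torch.logsumexp((f[:, None] - C) / eps, dim=0))
    plan = torch.exp((f[:, None] + g[None, :] - C) / eps)
    return (plan * C).sum()

# During adversarial training one could add, per class c,
#   loss += lam * sinkhorn_cost(z_adv[y == c], z_clean[y == c])
# where z_* are penultimate-layer features of adversarial and benign examples.
```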

Neural Graph Matching based Collaborative Filtering

1 code implementation 10 May 2021 Yixin Su, Rui Zhang, Sarah Erfani, Junhao Gan

User and item attributes are essential side-information; their interactions (i.e., their co-occurrence in the sample data) can significantly enhance prediction accuracy in various recommender systems.

Attribute, Collaborative Filtering +3

Learning Non-Unique Segmentation with Reward-Penalty Dice Loss

1 code implementation 23 Sep 2020 Jiabo He, Sarah Erfani, Sudanthi Wijewickrema, Stephen O'Leary, Kotagiri Ramamohanarao

Semantic segmentation is one of the key problems in the field of computer vision, as it enables computers to understand the content of images.

Medical Image Segmentation, Segmentation +1

Detecting Beneficial Feature Interactions for Recommender Systems

4 code implementations 2 Aug 2020 Yixin Su, Rui Zhang, Sarah Erfani, Zhenghua Xu

To make the most of feature interactions, we propose a graph neural network approach to model them effectively, together with a novel technique to automatically detect those feature interactions that are beneficial in terms of recommendation accuracy.

Graph Classification, Recommendation Systems
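
A toy PyTorch sketch of the general idea in the entry above: treat each feature field of a sample as a graph node, learn a soft weight for every pairwise edge, and aggregate messages through those weights, so the learned edge weights indicate which interactions look beneficial. The bilinear edge scorer, layer sizes, and readout are assumptions; the paper's actual detection technique is not reproduced here.

```python
# Toy feature-interaction graph with learned edge gates (illustrative only).
import torch
import torch.nn as nn

class FeatureInteractionGNN(nn.Module):
    def __init__(self, n_fields: int, emb_dim: int = 16):
        super().__init__()
        self.edge_score = nn.Bilinear(emb_dim, emb_dim, 1)   # scores edge (i, j)
        self.message = nn.Linear(emb_dim, emb_dim)
        self.readout = nn.Linear(emb_dim * n_fields, 1)

    def forward(self, field_emb: torch.Tensor):
        # field_emb: (batch, n_fields, emb_dim) embeddings of a sample's features
        b, f, d = field_emb.shape
        hi = field_emb.unsqueeze(2).expand(b, f, f, d)
        hj = field_emb.unsqueeze(1).expand(b, f, f, d)
        w = torch.sigmoid(self.edge_score(hi.reshape(-1, d), hj.reshape(-1, d)))
        w = w.reshape(b, f, f, 1)                             # soft "beneficial interaction" gates
        h = field_emb + (w * self.message(hj)).sum(dim=2)     # gated message passing
        score = torch.sigmoid(self.readout(h.reshape(b, -1))) # recommendation score
        return score, w.squeeze(-1)
```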

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

1 code implementation NeurIPS 2020 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks.

Adversarial Attack

Black-box Adversarial Example Generation with Normalizing Flows

1 code implementation 6 Jul 2020 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier decision.

Adversarial Attack

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations ICML 2020 Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs.

Ranked #30 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
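
The normalization the paper builds on divides a loss by its sum over all candidate labels; the quoted sentence is the observation that this robustness alone is not enough, which motivates combining "active" and "passive" normalized losses. Below is a hedged sketch of normalized cross-entropy, one such normalized loss.

```python
# Sketch of normalized cross-entropy (NCE) for K classes.
import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (batch, K); targets: (batch,) integer class labels."""
    log_probs = F.log_softmax(logits, dim=1)
    ce_label = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # CE for the given label
    ce_all = -log_probs.sum(dim=1)                                    # CE summed over every label
    return (ce_label / ce_all).mean()
```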

Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction

1 code implementation 25 Mar 2020 Farbod Taymouri, Marcello La Rosa, Sarah Erfani, Zahra Dasht Bozorgi, Ilya Verenich

Predictive process monitoring aims to predict future characteristics of an ongoing process case, such as the case outcome or the remaining time.

Predictive Process Monitoring

Invertible Generative Modeling using Linear Rational Splines

1 code implementation 15 Jan 2020 Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

The significant advantage of such models is their easy-to-compute inverse.
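
The closed-form invertibility the sentence refers to comes from the bin-wise form of a linear rational spline: each bin is a Möbius-type map, and such maps invert analytically. Schematically (the constants are per-bin and the paper's monotonic parameterization is omitted):

```latex
% Per-bin form of a linear rational spline and its closed-form inverse.
\[
  y = \frac{a + b\,x}{c + d\,x}
  \quad\Longrightarrow\quad
  x = \frac{a - c\,y}{d\,y - b}
\]
```

So inverting the transformation only requires locating the bin that contains the output value and evaluating another linear rational function, with no iterative root finding.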

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

no code implementations 25 Feb 2019 Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.

reinforcement-learning, Reinforcement Learning (RL)

Reinforcement Learning for Autonomous Defence in Software-Defined Networking

no code implementations 17 Aug 2018 Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague

Despite the successful application of machine learning (ML) in a wide range of domains, adaptability, the very property that makes machine learning desirable, can be exploited by adversaries to contaminate training and evade classification.

BIG-bench Machine Learning, General Classification +2
