Search Results for author: Sarah Erfani

Found 16 papers, 11 papers with code

SynthNet: Learning synthesizers end-to-end

no code implementations · ICLR 2019 · Florin Schimbinschi, Christian Walder, Sarah Erfani, James Bailey

Learning synthesizers and generating music in the raw audio domain is a challenging task.

$\ell_\infty$-Robustness and Beyond: Unleashing Efficient Adversarial Training

no code implementations · 1 Dec 2021 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Our experimental results indicate that our approach speeds up adversarial training by a factor of 2-3, at the cost of a small reduction in clean and robust accuracy.

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations · 29 Sep 2021 · Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, minimizing the Wasserstein (WS) distance of interest pushes an adversarial example toward the cluster of benign examples that share its label in the latent space, which strengthens the classifier's generalization to adversarial examples.
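
A hedged sketch of the latent-space idea described above. Rather than the paper's full optimal-transport (Wasserstein) machinery, this simplified proxy pulls an adversarial example's latent representation toward the centroid of the benign latents sharing its label; all names here (class_centroids, cluster_pull_loss) are illustrative placeholders, not the authors' API.

import torch

def class_centroids(latents, labels, num_classes):
    # Mean latent vector per class: (num_classes, dim).
    dim = latents.size(1)
    centroids = torch.zeros(num_classes, dim, device=latents.device)
    counts = torch.zeros(num_classes, device=latents.device)
    centroids.index_add_(0, labels, latents)
    counts.index_add_(0, labels, torch.ones(labels.size(0), device=latents.device))
    return centroids / counts.clamp(min=1).unsqueeze(1)

def cluster_pull_loss(adv_latents, labels, centroids):
    # Squared distance from each adversarial latent to its class centroid.
    return ((adv_latents - centroids[labels]) ** 2).sum(dim=1).mean()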

Neural Graph Matching based Collaborative Filtering

1 code implementation · 10 May 2021 · Yixin Su, Rui Zhang, Sarah Erfani, Junhao Gan

User and item attributes are essential side information; their interactions (i.e., their co-occurrence in the sample data) can significantly enhance prediction accuracy in various recommender systems.

Collaborative Filtering · Graph Learning · +2

Learning Non-Unique Segmentation with Reward-Penalty Dice Loss

1 code implementation · 23 Sep 2020 · Jiabo He, Sarah Erfani, Sudanthi Wijewickrema, Stephen O'Leary, Kotagiri Ramamohanarao

Semantic segmentation is one of the key problems in computer vision, as it enables computers to understand images. (A background sketch of the title's Dice loss follows below.)

Medical Image Segmentation
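
For background on the loss named in the title: below is a minimal sketch of the standard soft Dice loss that a reward-penalty variant would build on. The paper's variant additionally weights pixels with reward and penalty maps to handle non-unique ground truth; that weighting is not reproduced here.

import torch

def soft_dice_loss(probs, target, eps=1e-6):
    # probs, target: (batch, H, W); predicted probabilities and binary masks.
    inter = (probs * target).sum(dim=(1, 2))
    denom = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()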

Detecting Beneficial Feature Interactions for Recommender Systems

1 code implementation · 2 Aug 2020 · Yixin Su, Rui Zhang, Sarah Erfani, Zhenghua Xu

To make the most of feature interactions, we propose a graph neural network approach to model them effectively, together with a novel technique that automatically detects the feature interactions beneficial to recommendation accuracy. (A simplified sketch follows below.)

Graph Classification · Recommendation Systems
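
A minimal, hypothetical sketch of the detection idea: score every pairwise feature interaction with learned embeddings and gate each pair by a learned "benefit" weight. This is a simplified stand-in, not the paper's actual GNN architecture or its detection technique.

import torch
import torch.nn as nn

class GatedPairwiseInteractions(nn.Module):
    def __init__(self, num_features, dim):
        super().__init__()
        self.emb = nn.Embedding(num_features, dim)
        # One learnable gate logit per feature pair.
        self.gate_logits = nn.Parameter(torch.zeros(num_features, num_features))

    def forward(self, feature_ids):
        # feature_ids: (batch, fields) integer ids of the active features.
        e = self.emb(feature_ids)                  # (batch, fields, dim)
        pair = torch.einsum("bid,bjd->bij", e, e)  # pairwise dot products
        gates = torch.sigmoid(
            self.gate_logits[feature_ids.unsqueeze(2), feature_ids.unsqueeze(1)]
        )
        # Sum of gated pairwise scores as a scalar interaction signal.
        return (gates * pair).sum(dim=(1, 2))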

AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

1 code implementation · NeurIPS 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial examples. (An illustrative attack sketch follows below.)

Adversarial Attack
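
To illustrate what such imperceptible variations look like, here is the classic white-box FGSM attack (Goodfellow et al., 2015), deliberately not this paper's black-box, flow-based method:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # One gradient-sign step that increases the classification loss,
    # keeping the perturbed input inside the valid image range [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()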

Black-box Adversarial Example Generation with Normalizing Flows

1 code implementation · 6 Jul 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

Deep neural network classifiers suffer from adversarial vulnerability: well-crafted, unnoticeable changes to the input data can affect the classifier's decision.

Adversarial Attack

Normalized Loss Functions for Deep Learning with Noisy Labels

4 code implementations · ICML 2020 · Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, James Bailey

However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs. (A sketch of the paper's normalization idea follows below.)

Ranked #20 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)

Learning with noisy labels
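
The paper's fix is to normalize a loss by its sum over all candidate labels, L_norm(f(x), y) = L(f(x), y) / sum_j L(f(x), j), which confers robustness to label noise. A minimal sketch of normalized cross entropy under that scheme (my reading of the paper, not the authors' released code):

import torch
import torch.nn.functional as F

def normalized_cross_entropy(logits, targets):
    log_probs = F.log_softmax(logits, dim=1)                         # (batch, classes)
    ce_true = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # CE at the true label
    ce_all = -log_probs.sum(dim=1)                                   # CE summed over all labels
    return (ce_true / ce_all).mean()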

Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction

1 code implementation · 25 Mar 2020 · Farbod Taymouri, Marcello La Rosa, Sarah Erfani, Zahra Dasht Bozorgi, Ilya Verenich

Predictive process monitoring aims to predict future characteristics of an ongoing process case, such as the case outcome or the remaining time.

Predictive Process Monitoring

Invertible Generative Modeling using Linear Rational Splines

1 code implementation · 15 Jan 2020 · Hadi M. Dolatabadi, Sarah Erfani, Christopher Leckie

The significant advantage of such invertible generative models is their easy-to-compute inverse.
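
A one-segment illustration of why that inverse is closed-form: each linear rational segment y = (a*x + b) / (c*x + d) inverts algebraically to x = (b - d*y) / (c*y - a). The full method stitches many monotone segments together over knots; this toy snippet only shows the per-segment algebra.

def linear_rational(x, a, b, c, d):
    return (a * x + b) / (c * x + d)

def linear_rational_inverse(y, a, b, c, d):
    # Solve y = (a*x + b) / (c*x + d) for x.
    return (b - d * y) / (c * y - a)

x = 0.3
y = linear_rational(x, a=2.0, b=1.0, c=0.5, d=3.0)
assert abs(linear_rational_inverse(y, 2.0, 1.0, 0.5, 3.0) - x) < 1e-12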

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

no code implementations · 25 Feb 2019 · Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.

Reinforcement Learning for Autonomous Defence in Software-Defined Networking

no code implementations · 17 Aug 2018 · Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague

Despite the successful application of machine learning (ML) in a wide range of domains, adaptability (the very property that makes machine learning desirable) can be exploited by adversaries to contaminate training and evade classification.

General Classification
