Search Results for author: Tamas Abraham

Found 8 papers, 2 papers with code

Removing Undesirable Concepts in Text-to-Image Generative Models with Learnable Prompts

no code implementations • 18 Mar 2024 • Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung

Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions.

Transfer Learning

Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks

no code implementations • 13 Oct 2020 • He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack Detection
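
To make the evasion-attack idea in the abstract above concrete, here is a minimal FGSM-style sketch (one signed-gradient step) in PyTorch. It only illustrates how a "carefully crafted image" is produced; it is not the paper's probabilistic post-hoc refinement method, and the model and tensor names are placeholders.

    import torch
    import torch.nn as nn

    def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                    eps: float = 0.03) -> torch.Tensor:
        # One signed-gradient ascent step on the classification loss, keeping
        # the perturbation inside an L-infinity ball of radius eps.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()         # step in the loss-increasing direction
        return x_adv.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range

A dense attack like this one perturbs every pixel; the paper's stated contribution is a post-hoc framework that refines such dense attacks so they touch far fewer pixels.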

Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness

1 code implementation • 21 Sep 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

An important technique of this approach is to control the transferability of adversarial examples among ensemble members.

Adversarial Robustness
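
The abstract above turns on the notion of transferability between ensemble members. As a hedged illustration (not the paper's promoting/demoting losses), transferability can be measured by crafting adversarial examples against one member and checking how often they also fool another:

    import torch

    @torch.no_grad()
    def transfer_rate(x_adv: torch.Tensor, y: torch.Tensor, target_model) -> float:
        # Fraction of adversarial examples, crafted against some source member,
        # that also fool the target member; lower values mean the members make
        # more diverse errors, which is what an ensemble defence aims for.
        preds = target_model(x_adv).argmax(dim=1)
        return (preds != y).float().mean().item()

Usage: craft x_adv against ensemble member A (e.g. with an FGSM-style attack as sketched earlier), then call transfer_rate(x_adv, y, model_b) to see how often it carries over to member B.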

Improving Adversarial Robustness by Enforcing Local and Global Compactness

1 code implementation • ECCV 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.

Adversarial Robustness, Clustering
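
Loudly hedged sketch: the paper's title and its Clustering tag suggest a regularizer that keeps latent features compact per class. The generic loss below (mean squared distance of each feature to its class centroid) is only an assumption-level illustration of that idea, not the paper's actual local/global formulation.

    import torch

    def compactness_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Pull each sample's feature vector toward the centroid of its class.
        classes = labels.unique()
        loss = features.new_zeros(())
        for c in classes:
            f_c = features[labels == c]
            loss = loss + (f_c - f_c.mean(dim=0)).pow(2).sum(dim=1).mean()
        return loss / classes.numel()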

Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions

no code implementations • 3 Oct 2019 • He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack, Translation
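
As the title says, this paper goes beyond pixel perturbations to spatial distortions. A hedged sketch of that idea follows, using a simple random search over small rotations and translations (via torchvision's affine op) rather than the paper's actual generation procedure:

    import torch
    import torchvision.transforms.functional as TF

    @torch.no_grad()
    def spatial_attack(model, x, y, max_angle=10.0, max_shift=3, trials=50):
        # Randomly search small rotations/translations of a single image x
        # (shape [1, C, H, W]) until the classifier's prediction flips.
        for _ in range(trials):
            angle = float(torch.empty(1).uniform_(-max_angle, max_angle))
            dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
            dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
            x_t = TF.affine(x, angle=angle, translate=[dx, dy], scale=1.0, shear=[0.0])
            if model(x_t).argmax(dim=1) != y:
                return x_t   # a spatially distorted adversarial example
        return None          # no success within the search budget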

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

no code implementations • 25 Feb 2019 • Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.

Reinforcement Learning (RL)
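
To ground the "adversarial manipulation" in the abstract above: a common attack in this setting perturbs the agent's observations before the policy acts, which is especially pertinent under partial observability. A hedged FGSM-style sketch, where policy is a hypothetical network mapping observations to action logits (this is not the paper's specific attack or defence):

    import torch
    import torch.nn as nn

    def perturb_observation(policy: nn.Module, obs: torch.Tensor,
                            eps: float = 0.01) -> torch.Tensor:
        # One signed-gradient step that pushes the policy away from the action
        # it currently prefers; the agent then acts on the corrupted observation.
        obs = obs.clone().detach().requires_grad_(True)
        logits = policy(obs)
        loss = nn.functional.cross_entropy(logits, logits.argmax(dim=-1))
        loss.backward()
        return (obs + eps * obs.grad.sign()).detach()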

Reinforcement Learning for Autonomous Defence in Software-Defined Networking

no code implementations • 17 Aug 2018 • Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague

Despite the successful application of machine learning (ML) in a wide range of domains, adaptability, the very property that makes machine learning desirable, can be exploited by adversaries to contaminate training and evade classification.

BIG-bench Machine Learning, General Classification, +2
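
The abstract above names two threats: contaminating training (poisoning) and evading classification. A hedged sketch of the simplest poisoning attack, label flipping, shown generically rather than as the paper's software-defined-networking experiment:

    import torch

    def flip_labels(labels: torch.Tensor, num_classes: int,
                    poison_frac: float = 0.1) -> torch.Tensor:
        # Reassign a random fraction of training labels to a different class,
        # contaminating the training set an ML model would learn from.
        poisoned = labels.clone()
        n = int(poison_frac * labels.numel())
        idx = torch.randperm(labels.numel())[:n]
        offsets = torch.randint(1, num_classes, (n,))   # non-zero shift => wrong class
        poisoned[idx] = (poisoned[idx] + offsets) % num_classes
        return poisoned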
