no code implementations • 18 Mar 2024 • Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions.
no code implementations • 21 Jun 2022 • Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
no code implementations • 13 Oct 2020 • He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images to mislead a classifier.
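The "carefully crafted images" this entry refers to are typically produced by gradient-based methods. The sketch below is a minimal, self-contained illustration of one classic such method, the fast gradient sign method (FGSM), applied to a toy linear classifier; it is a generic example of an evasion attack, not a reproduction of this paper's method, and all names in it are illustrative.

```python
import numpy as np

# Toy linear "classifier": class scores are z = W @ x.
# FGSM perturbs the input in the direction that increases the loss fastest.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
x = rng.normal(size=8)        # a clean input (e.g. a flattened image)
y = int(np.argmax(W @ x))     # the classifier's original prediction

def input_gradient(W, x, y):
    """Gradient of the softmax cross-entropy loss w.r.t. the input x."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

# One FGSM step: move each pixel by +/- eps along the loss gradient's sign,
# so the perturbation is bounded in the L-infinity norm by eps.
eps = 0.5
x_adv = x + eps * np.sign(input_gradient(W, x, y))
```

The key property is that `x_adv` stays within an `eps`-ball of the clean input (so it can look visually similar) while being chosen adversarially; defences and detectors, such as the one this entry describes, are evaluated against inputs of this kind.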
1 code implementation • 21 Sep 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
An important technique of this approach is to control the transferability of adversarial examples among ensemble members.
1 code implementation • ECCV 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.
no code implementations • 3 Oct 2019 • He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images to mislead a classifier.
no code implementations • 25 Feb 2019 • Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani
Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.
no code implementations • 17 Aug 2018 • Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague
Despite the successful application of machine learning (ML) in a wide range of domains, adaptability---the very property that makes machine learning desirable---can be exploited by adversaries to contaminate training and evade classification.