1 code implementation • 14 Feb 2020 • Laleh Seyyed-Kalantari, Guanxiong Liu, Matthew McDermott, Irene Y. Chen, Marzyeh Ghassemi
We demonstrate that TPR disparities exist in state-of-the-art classifiers across all datasets, all clinical tasks, and all subgroups.
Ranked #1 on Multi-Label Classification on ChestX-ray14
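The TPR disparity studied in this entry can be sketched as a simple per-subgroup metric: compute the true positive rate separately for each patient subgroup and take the gap. This is a minimal illustration with toy data, not the paper's code; the function names and the example arrays are hypothetical.

```python
import numpy as np

def tpr(y_true, y_pred):
    """True positive rate: fraction of actual positives predicted positive."""
    pos = y_true == 1
    return (y_pred[pos] == 1).mean()

def tpr_disparity(y_true, y_pred, groups, group_a, group_b):
    """TPR gap between two subgroups for one binary label."""
    mask_a, mask_b = groups == group_a, groups == group_b
    return tpr(y_true[mask_a], y_pred[mask_a]) - tpr(y_true[mask_b], y_pred[mask_b])

# Hypothetical toy labels, predictions, and a sex attribute per patient
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
groups = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
gap = tpr_disparity(y_true, y_pred, groups, "F", "M")  # negative: lower TPR for "F"
```

A nonzero gap means the classifier misses true findings more often for one subgroup than the other, which is the disparity the abstract reports.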
1 code implementation • 4 Apr 2019 • Guanxiong Liu, Tzu-Ming Harry Hsu, Matthew McDermott, Willie Boag, Wei-Hung Weng, Peter Szolovits, Marzyeh Ghassemi
The automatic generation of radiology reports given medical radiographs has significant potential to operationally and clinically improve patient care.
1 code implementation • 17 Apr 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Neural Network classifiers have been used successfully in a wide range of applications.
no code implementations • 6 Mar 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Machine learning models, especially neural network (NN) classifiers, are widely used in many applications including natural language processing, computer vision and cybersecurity.
no code implementations • 27 Jun 2019 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Due to their surprisingly good ability to represent complex distributions, neural network (NN) classifiers are widely used in many tasks, including natural language processing, computer vision, and cybersecurity.
no code implementations • 22 Feb 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah
Single-step adversarial training methods have been proposed as computationally viable solutions; however, they still fail to defend against iterative adversarial examples.
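As a rough illustration of the distinction this abstract draws (not the paper's defense), a single-step attack takes one signed gradient step of size eps, while an iterative attack takes many small steps projected back into the eps-ball. The logistic model, weights, and inputs below are hypothetical; for a purely linear model the two attacks largely coincide, and the gap the abstract describes arises with non-linear networks.

```python
import numpy as np

def grad_loss(w, x, y):
    """Gradient of the logistic loss w.r.t. the input x, for a linear model w."""
    p = 1 / (1 + np.exp(-(w @ x)))
    return (p - y) * w

def fgsm(w, x, y, eps):
    """Single-step attack: one signed-gradient step of size eps."""
    return x + eps * np.sign(grad_loss(w, x, y))

def iterative_attack(w, x, y, eps, steps=10):
    """Iterative attack: many small steps, clipped into the eps-ball around x."""
    alpha = eps / steps  # small per-step size
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the budget
    return x_adv

# Hypothetical toy logistic model and input
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y, eps = 0, 0.1
x_single = fgsm(w, x, y, eps)
x_iter = iterative_attack(w, x, y, eps)
```

Training only on `fgsm`-style examples is what the abstract calls computationally viable single-step adversarial training; the iterative attack, with the same eps budget, is the kind of example such training still fails against.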
no code implementations • 11 Jul 2020 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Abdulelah Algosaibi, Adel Aldalbahi, Mohammed Alaneem, Abdulaziz Alhumam, Mohammed Anan
Through an extensive set of experiments on different datasets, we show that (1) adversarial examples generated by ManiGen mislead standalone classifiers as successfully as the state-of-the-art white-box generator, Carlini, and (2) adversarial examples generated by ManiGen attack classifiers with state-of-the-art defenses more effectively.
no code implementations • 1 Jan 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, Hai Phan
In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack that is activated only when, simultaneously, 1) adversarial perturbation is injected into the input examples and 2) a Trojan backdoor is used to poison the training process.
no code implementations • 3 Sep 2021 • Guanxiong Liu, Issa Khalil, Abdallah Khreishah, NhatHai Phan
In this work, we show how to jointly exploit adversarial perturbation and model poisoning vulnerabilities to practically launch a new stealthy attack, dubbed AdvTrojan.
no code implementations • 7 Sep 2021 • Guanxiong Liu, Hang Shi, Abbas Kiani, Abdallah Khreishah, Jo Young Lee, Nirwan Ansari, Chengjun Liu, Mustafa Yousef
In this paper, we focus on two common traffic monitoring tasks, congestion detection and speed detection, and propose a two-tier edge-computing-based model that takes into account both the limited computing capability of cloudlets and the unstable network connection to the TMC.
no code implementations • 5 Sep 2022 • Guanxiong Liu, Abdallah Khreishah, Fatima Sharadgah, Issa Khalil
Through mathematical analysis, we show that if the attacker is perfect in injecting the backdoor, the Trojan-infected model will be trained to learn the appropriate prediction confidence bound, which is used to distinguish Trojan and benign inputs under arbitrary perturbations.
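The confidence-bound idea above can be sketched schematically: flag an input as Trojan when the model's prediction confidence exceeds a learned bound. This is a hypothetical illustration, not the paper's method; in the paper's setting the bound is learned during training, whereas the 0.99 below is a placeholder.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def flag_trojan(logits, bound=0.99):
    """Flag an input whose top-class confidence exceeds the bound.

    The bound value is a hypothetical placeholder; per the abstract, the
    Trojan-infected model is trained so that an appropriate bound separates
    Trojan inputs from benign ones under arbitrary perturbations.
    """
    return softmax(logits).max() > bound
```

Usage: an input whose logits produce near-saturated confidence, e.g. `flag_trojan(np.array([10.0, 0.0, 0.0]))`, is flagged, while an input with moderate confidence is not.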