Search Results for author: Eric Chan-Tin

Found 4 papers, 0 papers with code

Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks

no code implementations21 Jul 2023 Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed

Deep learning has been rapidly adopted across applications and is revolutionizing entire industries, but it is known to be vulnerable to adversarial attacks.
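As a minimal illustration of why such vulnerability matters, the sketch below applies a generic FGSM-style gradient-sign perturbation to a toy linear classifier. This is a standard white-box technique used only for illustration; it is not the query-efficient black-box attack proposed in the paper, and the weights, input, and `eps` value are invented for the example.

```python
import numpy as np

# Toy linear "model": illustrative weights and input, not from the paper.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, 0.4])   # classified positive: w @ x = 0.2 > 0

def predict(x):
    return 1 if w @ x > 0 else 0

# For a linear score w @ x, the gradient w.r.t. x is just w.
# FGSM steps against the sign of that gradient to lower the score.
eps = 0.2                        # assumed perturbation budget
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))
```

Even this tiny, bounded perturbation flips the model's decision, which is the core weakness that the attacks in this line of work exploit at scale.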

Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems

no code implementations13 Jul 2023 Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed

Our results show that the proposed approach is query-efficient, with a high attack success rate of between 95% and 100%, and achieves transferability with an average success rate of 69% on the ImageNet and CIFAR datasets.
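The title names a microbial genetic algorithm, a minimalist evolutionary optimizer in which two random individuals compete and the loser is partly overwritten by the winner's genes. The sketch below shows that basic loop on a toy bit-string fitness function; in a black-box attack the fitness would instead come from querying the target model, and all parameter values here are illustrative, not the paper's.

```python
import random

def microbial_ga(fitness, genome_len=20, pop_size=10,
                 crossover_rate=0.5, mutation_rate=0.05,
                 tournaments=2000, seed=0):
    """Minimal microbial GA: repeated two-individual tournaments."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(tournaments):
        i, j = rng.sample(range(pop_size), 2)
        # The winner is left untouched; only the loser is modified.
        win, lose = (i, j) if fitness(pop[i]) >= fitness(pop[j]) else (j, i)
        for g in range(genome_len):
            if rng.random() < crossover_rate:
                pop[lose][g] = pop[win][g]   # copy ("infect") winner's gene
            if rng.random() < mutation_rate:
                pop[lose][g] ^= 1            # bit-flip mutation
    return max(pop, key=fitness)

# Toy stand-in fitness: maximize the number of 1 bits.
best = microbial_ga(sum)
print(sum(best))
```

Because each tournament evaluates only two candidates, this style of optimizer keeps the number of fitness evaluations (model queries, in the attack setting) low, which matches the query-efficiency claim in the abstract.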

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning

no code implementations29 Nov 2022 Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed

We assess the effectiveness of the proposed attacks against two deep learning model architectures coupled with four interpretation models, each representing a different category of interpretation method.

DP-ADMM: ADMM-based Distributed Learning with Differential Privacy

no code implementations30 Aug 2018 Zonghao Huang, Rui Hu, Yuanxiong Guo, Eric Chan-Tin, Yanmin Gong

The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning.
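A common primitive for making iterative distributed learning differentially private is to clip each worker's local update and add calibrated Gaussian noise before sharing it. The sketch below shows that clip-and-noise step in isolation; it is an assumption-laden simplification, not the paper's DP-ADMM algorithm, and the names `clip_norm` and `sigma` are illustrative.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip an update's L2 norm, then add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng(0)
    # Clipping bounds each participant's influence (the sensitivity)...
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # ...so the noise scale sigma * clip_norm can be calibrated to a
    # desired privacy level before the update leaves the worker.
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

u = np.array([3.0, 4.0])        # raw local update with L2 norm 5
noisy = privatize_update(u)     # clipped to norm 1, then noised
print(noisy)
```

In an ADMM setting, a step like this would be applied to each party's shared variable at every iteration, trading some convergence speed for a formal privacy guarantee.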

