no code implementations • 18 Mar 2024 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Bassem Ouni, Muhammad Shafique
In this paper, we introduce SSAP (Shape-Sensitive Adversarial Patch), a novel approach designed to comprehensively disrupt monocular depth estimation (MDE) in autonomous navigation applications.
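To make the setting concrete, below is a minimal sketch of the generic patch-optimization loop that attacks of this kind build on, assuming a stand-in differentiable depth network; SSAP's actual shape-sensitive loss and patch parameterization are not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained monocular depth estimator (e.g., MiDaS): any
# differentiable network mapping (B, 3, H, W) -> (B, 1, H, W) works for the sketch.
depth_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
).eval()
for p in depth_model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 64, 64, requires_grad=True)   # adversarial patch (assumed size)
opt = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, y=100, x=100):
    """Paste the patch at a fixed location (real attacks randomize placement/scale)."""
    patched = images.clone()
    patched[:, :, y:y+64, x:x+64] = patch.clamp(0, 1)
    return patched

for step in range(200):
    images = torch.rand(4, 3, 256, 256)        # stand-in for driving frames
    with torch.no_grad():
        clean_depth = depth_model(images)      # reference prediction
    adv_depth = depth_model(apply_patch(images, patch))
    loss = -(adv_depth - clean_depth).abs().mean()   # maximize depth disruption
    opt.zero_grad()
    loss.backward()
    opt.step()
```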
no code implementations • 9 Feb 2024 • Nandish Chattopadhyay, Amira Guesmi, Muhammad Shafique
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
no code implementations • 20 Nov 2023 • Nandish Chattopadhyay, Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
ODDR employs a three-stage pipeline: Fragmentation, Segregation, and Neutralization, providing a model-agnostic solution applicable to both image classification and object detection tasks.
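A hedged sketch of what such a three-stage pipeline could look like; the per-tile outlier statistic and the neutralization operator below are illustrative placeholders, not ODDR's actual operators.

```python
import numpy as np

def oddr_like_defense(image, tile=32, z_thresh=2.5):
    """Illustrative pipeline: Fragmentation -> Segregation -> Neutralization.
    The outlier score (per-tile gradient energy) and the neutralizer (mean fill)
    are placeholders; the abstract does not specify ODDR's concrete operators."""
    h, w, _ = image.shape
    out = image.copy()

    # 1) Fragmentation: split the image into non-overlapping tiles.
    scores, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[y:y+tile, x:x+tile]
            # High-frequency energy as a crude "unnatural texture" statistic.
            gy, gx = np.gradient(patch.mean(axis=2))
            scores.append(np.hypot(gx, gy).mean())
            coords.append((y, x))

    # 2) Segregation: flag tiles whose score is a statistical outlier.
    scores = np.array(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-8)

    # 3) Neutralization: suppress flagged tiles (here: replace with the tile mean).
    for (y, x), zi in zip(coords, z):
        if zi > z_thresh:
            out[y:y+tile, x:x+tile] = out[y:y+tile, x:x+tile].mean(axis=(0, 1))
    return out
```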
no code implementations • 11 Aug 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.
no code implementations • 6 Aug 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, Muhammad Shafique
In this paper, we investigate the vulnerability of MDE to adversarial patches.
no code implementations • 19 May 2023 • Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
Patch-based adversarial attacks have been shown to compromise the robustness and reliability of computer vision systems.
no code implementations • 3 Mar 2023 • Ayoub Arous, Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
Towards investigating new ground for a better privacy-utility trade-off, this work asks: (i) whether models' hyperparameters have any inherent impact on ML models' privacy-preserving properties, and (ii) whether they affect the privacy/utility trade-off of differentially private models.
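For context, the sketch below shows where such hyperparameters enter a manually implemented DP-SGD step (clipping norm, noise multiplier, learning rate, and batch size); the values are arbitrary illustrations, not the paper's experimental settings.

```python
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip=1.0, noise_mult=1.1):
    """One manually implemented DP-SGD step: per-example gradient clipping
    plus Gaussian noise. The knobs shown (clip norm, noise multiplier,
    learning rate, and batch size via xb) are exactly the hyperparameters
    whose privacy/utility interaction is in question."""
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xb, yb):                           # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = (clip / (norm + 1e-8)).clamp(max=1.0)  # clip this example's gradient
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            noise = torch.randn_like(g) * noise_mult * clip  # calibrated Gaussian noise
            p -= lr * (g + noise) / len(xb)
```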
no code implementations • 3 Mar 2023 • Amira Guesmi, Ioan Marius Bilasco, Muhammad Shafique, Ihsen Alouani
Physical adversarial attacks pose a significant practical threat, as they deceive deep learning systems operating in the real world through prominent, maliciously designed physical perturbations.
no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Ihsen Alouani, Muhammad Shafique
APARATE results in a mean depth estimation error surpassing $0.5$, significantly impacting as much as $99\%$ of the targeted region when applied to CNN-based MDE models.
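The two reported quantities can be computed as follows; the 0.2 "noticeably affected" cutoff is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def patch_attack_metrics(clean_depth, adv_depth, region_mask, affect_thresh=0.2):
    """Mean depth-estimation error over the targeted region, and the fraction
    of that region whose prediction shifts noticeably. `affect_thresh` is an
    assumed cutoff; the paper reports a mean error above 0.5 and up to 99%
    of the targeted region affected."""
    err = np.abs(adv_depth - clean_depth)[region_mask]
    mean_error = err.mean()
    affected_ratio = (err > affect_thresh).mean()
    return mean_error, affected_ratio
```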
no code implementations • 2 Mar 2023 • Amira Guesmi, Muhammad Abdullah Hanif, Muhammad Shafique
Unlike mask-based fake-weather attacks that require access to the underlying computing hardware or image memory, our attack is based on emulating the effects of a natural weather condition (i.e., raindrops) that can be printed on a translucent sticker placed externally over the camera lens.
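A rough software emulation of the raindrop effect is sketched below; the physical attack optimizes a pattern printed on a sticker, whereas the blob count, size, and opacity here are arbitrary.

```python
import numpy as np

def add_fake_raindrops(image, n_drops=30, alpha=0.5, seed=0):
    """Emulate a translucent raindrop sticker by alpha-blending soft circular
    blobs over the frame. Drop count, radius, and opacity are illustrative."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    out = image.astype(float)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(4, 12)
        # Soft-edged disk: 1 at the center, fading to 0 at radius r.
        d = np.hypot(yy - cy, xx - cx)
        blob = np.clip(1 - d / r, 0, 1)[..., None]
        out = out * (1 - alpha * blob) + 255 * alpha * blob   # whitish droplet
    return out.clip(0, 255).astype(np.uint8)
```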
no code implementations • 18 Apr 2022 • Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi, Aviral Shrivastava, Ihsen Alouani, Muhammad Shafique
The real-world use cases of Machine Learning (ML) have exploded over the past few years.
no code implementations • 5 Jan 2022 • Amira Guesmi, Khaled N. Khasawneh, Nael Abu-Ghazaleh, Ihsen Alouani
Thus, we propose ROOM, a novel Real-time Online-Offline attack construction Model where an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
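A minimal sketch of the online-offline split, assuming an offline universal-perturbation-style warm start refined by a few fast PGD steps online; the exact ROOM construction is not shown in the abstract.

```python
import torch

def offline_warmup(model, loss_fn, data_loader, eps=0.03, epochs=1):
    """Offline phase: precompute a generic perturbation over a dataset
    (a universal-perturbation-style warm start; assumed, not ROOM's exact method)."""
    delta = torch.zeros(3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)
    for _ in range(epochs):
        for xb, yb in data_loader:
            loss = -loss_fn(model(xb + delta), yb)   # maximize model loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            delta.data.clamp_(-eps, eps)
    return delta.detach()

def online_attack(model, loss_fn, x, y, delta0, eps=0.03, steps=3, lr=0.01):
    """Online phase: a few fast PGD steps from the warm start; `steps`
    encodes the real-time budget."""
    delta = delta0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + lr * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()
```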
1 code implementation • 13 Jun 2020 • Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha, Mohamed Abid, Nael Abu-Ghazaleh
We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios.
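The defense relies on approximate hardware multipliers; the sketch below only emulates approximation error in software (coarse quantization plus multiplicative noise) to convey the idea, and is not the paper's implementation.

```python
import torch
import torch.nn as nn

class ApproxActivation(nn.Module):
    """Software emulation of approximate-computing error: coarsely quantize
    activations and add small multiplicative noise. The actual defense uses
    an approximate multiplier in hardware; this only mimics the induced error."""
    def __init__(self, levels=32, noise=0.02):
        super().__init__()
        self.levels, self.noise = levels, noise

    def forward(self, x):
        x = torch.round(x * self.levels) / self.levels        # coarse quantization
        return x * (1 + self.noise * torch.randn_like(x))     # multiplier-like error

# Insert the emulated approximation after each ReLU of an otherwise exact model
# (input assumed to be 3x32x32 images):
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), ApproxActivation(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
```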