1 code implementation • ICCV 2023 • Md Farhamdur Reza, Ali Rahmati, Tianfu Wu, Huaiyu Dai
While the proposed CGBA attack works effectively for arbitrary decision boundaries, it is particularly efficient at exploiting low boundary curvature to craft high-quality adversarial examples, a property that is widely observed and experimentally verified in commonly used classifiers under non-targeted attacks.
no code implementations • 3 Nov 2021 • Ehsan Tavan, Ali Rahmati, Maryam Najafi, Saeed Bibak, Zahed Rahmati
Three Bi-LSTM layers with residual connections are used to build a recursive encoder, and an attention module is applied on top of this encoder.
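The encoder design above (stacked layers with residual connections, attention pooling on top) can be sketched in NumPy. This is a minimal illustration only: the layer stub stands in for a real Bi-LSTM, and the shapes and attention scoring are assumptions, not the paper's implementation.

```python
import numpy as np

def seq_layer_stub(x, rng):
    # Stand-in for one Bi-LSTM layer: any sequence-to-sequence map
    # that preserves the feature width (T, d) -> (T, d).
    W = rng.standard_normal((x.shape[-1], x.shape[-1])) * 0.1
    return np.tanh(x @ W)

def recursive_encoder(x, n_layers=3, seed=0):
    rng = np.random.default_rng(seed)
    h = x
    for _ in range(n_layers):
        h = h + seq_layer_stub(h, rng)      # residual connection
    # Attention module on top of the encoder: softmax over toy
    # per-timestep scores, then a weighted sum over time.
    scores = h.mean(axis=-1)                # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return (weights[:, None] * h).sum(axis=0)   # (d,) pooled encoding
```

The residual connections keep each layer learning a correction to its input, and the attention weights decide which timesteps dominate the pooled representation.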
no code implementations • 29 Sep 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai
Adversarial attacks are intentionally designed perturbations added to inputs in order to fool deep neural network classifiers.
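As a concrete toy illustration of such a perturbation (a hedged sketch with made-up numbers, not any specific attack from these papers), a small sign-gradient step on a linear classifier is enough to flip the predicted label:

```python
import numpy as np

# Toy linear classifier: predicted label is the sign of the score w @ x.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, -0.1, 0.2])       # clean input, score = 0.8 > 0

grad = w                              # gradient of the score w.r.t. x
eps = 0.3                             # perturbation budget (illustrative)
x_adv = x - eps * np.sign(grad)       # step against the score's gradient

clean_score = float(w @ x)            # 0.8 -> classified positive
adv_score = float(w @ x_adv)          # -0.25 -> label flips
```

Even though each coordinate moves by at most `eps`, the aligned perturbation pushes the input across the decision boundary.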
no code implementations • 24 Jul 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai
Adversarial training has been shown as an effective approach to improve the robustness of image classifiers against white-box attacks.
no code implementations • 10 May 2020 • Ali Rahmati, Seyyedali Hosseinalipour, Ismail Guvenc, Huaiyu Dai, Arupjyoti Bhuyan
Deployment of unmanned aerial vehicles (UAVs) has recently attracted significant attention due to a variety of practical use cases, such as surveillance, data gathering, and commodity delivery.
1 code implementation • CVPR 2020 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Huaiyu Dai
We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings, where the adversary can issue only a small number of queries, each returning the top-$1$ label of the classifier.
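The label-only setting above can be sketched with a bisection toward the decision boundary. This is an illustrative assumption-laden toy, not the paper's algorithm: the classifier, the starting points, and the query budget are all made up.

```python
import numpy as np

# The attacker sees only the top-1 label of a toy linear classifier and
# bisects between a clean point and any differently-labeled point to
# locate the decision boundary, counting queries along the way.
w = np.array([1.0, -1.0])
queries = 0

def top1_label(x):
    global queries
    queries += 1                        # each call costs one query
    return int(w @ x > 0)

x_clean = np.array([1.0, 0.2])          # label 1
x_other = np.array([-1.0, 0.0])         # any point with a different label
label_clean = top1_label(x_clean)

lo, hi = 0.0, 1.0                       # boundary crossing lies in (lo, hi)
for _ in range(20):                     # ~20 queries for ~1e-6 precision
    mid = 0.5 * (lo + hi)
    if top1_label(x_clean + mid * (x_other - x_clean)) == label_clean:
        lo = mid
    else:
        hi = mid
x_boundary = x_clean + hi * (x_other - x_clean)  # adversarial, near boundary
```

Each query moves the bracket halfway, so precision improves exponentially in the number of queries, which is why query-limited attacks lean on geometric structure rather than brute-force sampling.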