Search Results for author: Ali Rahmati

Found 6 papers, 2 papers with code

CGBA: Curvature-aware Geometric Black-box Attack

1 code implementation • ICCV 2023 • Md Farhamdur Reza, Ali Rahmati, Tianfu Wu, Huaiyu Dai

While the proposed CGBA attack can handle an arbitrary decision boundary, it is particularly efficient at exploiting low boundary curvature, a property widely observed and experimentally verified in commonly used classifiers under non-targeted attacks, to craft high-quality adversarial examples.
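CGBA operates in the hard-label (decision-based) setting, where the attacker sees only the predicted label. A core primitive in such attacks is locating a point on the decision boundary by binary search along the segment between a clean input and an adversarial one. The sketch below is not the CGBA algorithm itself, only this shared primitive, demonstrated on a hypothetical toy linear classifier:

```python
import numpy as np

def top1_label(x, w=np.array([1.0, -1.0]), b=0.0):
    # Toy linear binary "classifier" that exposes only its top-1 label,
    # mimicking the hard-label black-box setting.
    return int(np.dot(w, x) + b > 0)

def boundary_binary_search(x_clean, x_adv, query, tol=1e-6):
    # Walk the segment between a clean input and an adversarial one,
    # halving the interval until we sit (almost) on the decision boundary.
    lo, hi = 0.0, 1.0  # 0 -> clean endpoint, 1 -> adversarial endpoint
    y_clean = query(x_clean)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x_mid = (1 - mid) * x_clean + mid * x_adv
        if query(x_mid) == y_clean:
            lo = mid   # still on the clean side
        else:
            hi = mid   # crossed the boundary
    return (1 - hi) * x_clean + hi * x_adv

x_clean = np.array([1.0, 0.0])   # classified 1 by the toy model
x_adv = np.array([0.0, 1.0])     # classified 0 by the toy model
x_b = boundary_binary_search(x_clean, x_adv, top1_label)
```

Once on the boundary, curvature-aware attacks like CGBA refine the perturbation by exploiting the local geometry around the boundary point.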

BERT-DRE: BERT with Deep Recursive Encoder for Natural Language Sentence Matching

no code implementations • 3 Nov 2021 • Ehsan Tavan, Ali Rahmati, Maryam Najafi, Saeed Bibak, Zahed Rahmati

Three Bi-LSTM layers with residual connections are used to design a recursive encoder, and an attention module is applied on top of this encoder.
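The stacking pattern described above (residual connections wrapped around stacked recurrent layers, followed by attention pooling) can be sketched at the shape level. This is a hypothetical illustration, not the paper's implementation: each placeholder linear map merely stands in for a Bi-LSTM layer:

```python
import numpy as np

def attention_pool(H, v):
    # Additive-style attention over time steps: score each hidden state
    # with a vector v, softmax the scores, return the weighted sum.
    scores = H @ v
    a = np.exp(scores - scores.max())
    a /= a.sum()
    return a @ H

def residual_encoder(H, layers):
    # Stack of encoder layers with residual connections, mirroring the
    # three residual Bi-LSTM layers described above (each "layer" here
    # is a placeholder linear map standing in for a Bi-LSTM).
    for W in layers:
        H = H + np.tanh(H @ W)   # residual connection around the layer
    return H

rng = np.random.default_rng(0)
T, d = 5, 8                        # sequence length, hidden size
H0 = rng.normal(size=(T, d))       # token representations (e.g. from BERT)
layers = [rng.normal(scale=0.1, size=(d, d)) for _ in range(3)]
H = residual_encoder(H0, layers)
sent_vec = attention_pool(H, rng.normal(size=d))  # pooled sentence vector
```

The residual connections let each layer learn a refinement of its input rather than a full re-encoding, which eases optimization in deep recurrent stacks.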

Natural Language Inference · Sentence

On the exploitative behavior of adversarial training against adversarial attacks

no code implementations • 29 Sep 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai

Adversarial attacks have been developed as intentionally designed perturbations added to the inputs in order to fool deep neural network classifiers.
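A classic instance of such an intentionally designed perturbation is the Fast Gradient Sign Method (FGSM) of Goodfellow et al.; this is a standard textbook example, not the method of this paper. Below is a minimal sketch on a hypothetical linear logistic model:

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    # Fast Gradient Sign Method on a linear logistic model: perturb the
    # input by eps in the direction of the sign of the loss gradient.
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1     # correctly classified as class 1
x_adv = fgsm_linear(x, y, w, b, eps=1.0)
# the perturbed input pushes the logit toward the wrong class
```

For linear models the sign perturbation is provably the worst case under an L-infinity budget, which is why even small eps can flip the prediction.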

Adversarial training may be a double-edged sword

no code implementations • 24 Jul 2021 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Huaiyu Dai

Adversarial training has been shown as an effective approach to improve the robustness of image classifiers against white-box attacks.
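Adversarial training, in its common min-max form, replaces each training input with a worst-case perturbed version before taking a gradient step. The sketch below is a generic, hypothetical illustration on a toy logistic model with an FGSM inner step, not the experimental setup of this paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=300):
    # Adversarial training for a logistic model: at each step, perturb
    # every input against the current weights (FGSM inner step), then
    # take a gradient step on the perturbed batch (outer minimization).
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign((p - y)[:, None] * w)  # worst-case inputs
        p_adv = sigmoid(X_adv @ w + b)
        g = p_adv - y
        w -= lr * X_adv.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1, 0, 1, 0])           # label equals the first feature
w, b = adversarial_train(X, y)
```

Training on perturbed inputs enlarges the margin against white-box attacks, but, as the title suggests, this can trade off against other properties of the classifier.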

Lifetime Maximization for UAV-assisted Data Gathering Networks in the Presence of Jamming

no code implementations • 10 May 2020 • Ali Rahmati, Seyyedali Hosseinalipour, Ismail Guvenc, Huaiyu Dai, Arupjyoti Bhuyan

Deployment of unmanned aerial vehicles (UAVs) has recently been receiving significant attention due to a variety of practical use cases, such as surveillance, data gathering, and commodity delivery.

GeoDA: a geometric framework for black-box adversarial attacks

1 code implementation • CVPR 2020 • Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Huaiyu Dai

We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings, where the adversary can only issue a small number of queries, each returning the top-1 label of the classifier.
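A key idea in geometric hard-label attacks of this kind is estimating the normal of the decision boundary from top-1 labels alone, by probing with small random perturbations around a boundary point. The sketch below illustrates this estimation step on a hypothetical toy linear classifier; it is a simplified illustration, not GeoDA's full algorithm (which, e.g., uses structured low-frequency perturbations):

```python
import numpy as np

def top1(x, w=np.array([3.0, 4.0]), b=0.0):
    # Toy hard-label classifier: only the top-1 label is observable.
    return int(np.dot(w, x) + b > 0)

def estimate_normal(x_boundary, query, n_queries=2000, sigma=1e-3, seed=0):
    # Estimate the boundary normal at x_boundary from top-1 queries only:
    # probe with small random perturbations and average them, signed by
    # which side of the boundary each probe lands on.
    rng = np.random.default_rng(seed)
    n_hat = np.zeros_like(x_boundary)
    for _ in range(n_queries):
        u = rng.normal(size=x_boundary.shape)
        s = 1.0 if query(x_boundary + sigma * u) == 1 else -1.0
        n_hat += s * u
    return n_hat / np.linalg.norm(n_hat)

# The toy boundary passes through the origin with normal w/||w|| = [0.6, 0.8].
n_est = estimate_normal(np.zeros(2), top1)
```

Because each probe costs one query, the attack's efficiency hinges on how few probes suffice for a usable normal estimate, which is exactly where the low-curvature geometry helps.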
