Search Results for author: Sahar Sadrizadeh

Found 5 papers, 3 papers with code

A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation

no code implementations • 29 Aug 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard

To evaluate the robustness of NMT models to our attack, we propose enhancements to existing black-box word-replacement-based attacks by incorporating output translations of the target NMT model and the output logits of a classifier within the attack process.

Tasks: Adversarial Attack, Machine Translation, +2
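The snippet above describes scoring candidate word replacements using both the target NMT model's output translation and a guiding classifier's logits. A minimal toy sketch of that idea follows; `translate`, `logit_fn`, the overlap metric, and the candidate lists are illustrative stand-ins, not the paper's actual models or scoring functions.

```python
# Hedged sketch: greedy word replacement guided by (a) degradation of
# the target model's translation and (b) a drop in a classifier's
# logit for the original class. All components here are toy stand-ins.

def translation_overlap(ref, hyp):
    """Crude token-overlap proxy for translation quality (not BLEU)."""
    ref_tokens, hyp_tokens = set(ref.split()), set(hyp.split())
    return len(ref_tokens & hyp_tokens) / max(len(ref_tokens), 1)

def attack_score(orig_translation, adv_translation, orig_logit, adv_logit):
    """Higher = stronger attack: the translation degrades AND the
    guiding classifier's logit for the original class drops."""
    degradation = 1.0 - translation_overlap(orig_translation, adv_translation)
    logit_drop = orig_logit - adv_logit
    return degradation + logit_drop

def best_replacement(sentence, position, candidates, translate, logit_fn):
    """Try each candidate word at `position`; keep the best-scoring one."""
    tokens = sentence.split()
    orig_translation = translate(sentence)
    orig_logit = logit_fn(sentence)
    best = (sentence, 0.0)  # fall back to the original sentence
    for cand in candidates:
        adv_tokens = tokens[:]
        adv_tokens[position] = cand
        adv = " ".join(adv_tokens)
        score = attack_score(orig_translation, translate(adv),
                             orig_logit, logit_fn(adv))
        if score > best[1]:
            best = (adv, score)
    return best
```

In a real black-box attack, `translate` would query the target NMT model and `logit_fn` the auxiliary classifier; the loop would also iterate over positions and enforce semantic-similarity constraints on the candidates.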

A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models

no code implementations • 14 Jun 2023 • Sahar Sadrizadeh, Clément Barbier, Ljiljana Dolamic, Pascal Frossard

First, we propose an optimization problem to generate adversarial examples that are semantically similar to the original sentences but destroy the translation generated by the target NMT model.

Tasks: Adversarial Attack, Machine Translation, +4
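The snippet above mentions a relaxed optimization formulation: searching over a continuous representation subject to a semantic-similarity constraint rather than over discrete tokens. The following toy sketch illustrates that general pattern; the embedding table, the quadratic attack/similarity objectives, and the weight `lam` are all illustrative assumptions, not the paper's actual losses.

```python
import numpy as np

# Toy sketch of a relaxed (continuous) adversarial formulation:
# optimize a word embedding z to increase an attack term while a
# penalty keeps z close to the original embedding, then project the
# result back onto the discrete vocabulary. All values are stand-ins.

rng = np.random.default_rng(0)
vocab = rng.normal(size=(50, 8))   # toy embedding table
orig = vocab[3].copy()             # embedding of the original word

lam = 2.0                          # weight of the similarity penalty
lr = 0.1
z = orig.copy()
for _ in range(200):
    # objective (minimized): -||z||^2 + lam * ||z - orig||^2
    grad = -2.0 * z + 2.0 * lam * (z - orig)
    z -= lr * grad                 # converges to z* = 2 * orig here

# project the relaxed solution back onto the discrete vocabulary
adv_idx = int(np.argmin(np.linalg.norm(vocab - z, axis=1)))
adv_embedding = vocab[adv_idx]
```

The projection step is what turns the continuous optimum back into an actual token; in practice the attack term would be the target NMT model's translation loss and the similarity term a sentence-level semantic distance.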

TransFool: An Adversarial Attack against Neural Machine Translation Models

1 code implementation • 2 Feb 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard

Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks.

Tasks: Adversarial Attack, Language Modelling, +5

Block-Sparse Adversarial Attack to Fool Transformer-Based Text Classifiers

1 code implementation • 11 Mar 2022 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard

Recently, it has been shown that, despite their strong performance across many fields, deep neural networks are vulnerable to adversarial examples.

Tasks: Adversarial Attack, Sentence
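A block-sparse attack on text typically treats each token's embedding as one block and perturbs only a few blocks, so that only a few words change. The sketch below illustrates that selection idea with a random stand-in gradient; the shapes, `eps`, and the top-norm selection rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Toy sketch of a block-sparse perturbation: each token's embedding is
# one block; only the k blocks with the largest gradient norm are
# perturbed, so few words are affected. The gradient is a random
# stand-in for the classifier's input gradient.

rng = np.random.default_rng(1)
n_tokens, dim = 6, 4
embeddings = rng.normal(size=(n_tokens, dim))
grad = rng.normal(size=(n_tokens, dim))     # stand-in input gradient

block_norms = np.linalg.norm(grad, axis=1)  # one norm per token block
k = 1                                       # number of blocks to perturb
top_blocks = np.argsort(block_norms)[-k:]

eps = 0.5                                   # per-block perturbation size
perturbation = np.zeros_like(embeddings)
perturbation[top_blocks] = eps * grad[top_blocks] / block_norms[top_blocks, None]

adv_embeddings = embeddings + perturbation
changed = np.where(np.any(adv_embeddings != embeddings, axis=1))[0]
```

In the actual attack the sparsity pattern would be learned jointly with the perturbation (e.g. via a group-sparsity penalty) rather than chosen greedily, and the perturbed embeddings would be mapped back to real tokens.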
