no code implementations • 29 Aug 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
To evaluate the robustness of NMT models against our attack, we enhance existing black-box word-replacement attacks by incorporating the target NMT model's output translations and a classifier's output logits into the attack process.
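A minimal sketch of such a black-box word-replacement attack, assuming query access to the target NMT model and to a classifier's logits. The names `translate`, `candidate_substitutes`, and `classifier_logits` are hypothetical stand-ins, not the authors' implementation; the candidate score here simply combines translation degradation (BLEU drop) with the classifier's "adversarial" logit.

```python
# Hypothetical sketch of a black-box word-replacement attack on an NMT model.
# Requires: pip install sacrebleu
import sacrebleu

def score(words, reference, translate, classifier_logits, adv_class):
    """Higher score = more successful candidate: low BLEU against the
    reference translation, high 'adversarial' logit from the classifier."""
    sentence = " ".join(words)
    hyp = translate(sentence)  # query the black-box NMT model
    bleu = sacrebleu.sentence_bleu(hyp, [reference]).score
    logit = classifier_logits(sentence)[adv_class]
    return -bleu + logit

def attack(sentence, reference, translate, candidate_substitutes,
           classifier_logits, adv_class=1):
    """Greedily replace one word at a time, keeping replacements that
    degrade the translation while raising the classifier's logit."""
    best = sentence.split()
    best_score = score(best, reference, translate, classifier_logits, adv_class)
    for i in range(len(best)):
        for sub in candidate_substitutes(best, i):  # e.g. synonyms / masked-LM fills
            trial = best[:i] + [sub] + best[i + 1:]
            s = score(trial, reference, translate, classifier_logits, adv_class)
            if s > best_score:
                best, best_score = trial, s
    return " ".join(best)
```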
no code implementations • 14 Jun 2023 • Sahar Sadrizadeh, Clément Barbier, Ljiljana Dolamic, Pascal Frossard
First, we formulate an optimization problem to generate adversarial examples that are semantically similar to the original sentences but severely degrade the translation produced by the target NMT model.
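A minimal PyTorch sketch of this kind of objective, assuming a HuggingFace-style seq2seq model that accepts `inputs_embeds` and `labels`. The loss trades off two terms: increasing the NMT loss on the reference translation (degrading the output) against keeping the perturbed input embeddings close to the originals (preserving semantics). The names and the specific similarity term are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(embed_orig, embed_adv, nmt_model, tgt_tokens, alpha=1.0):
    # Translation term: negate the NMT loss so that minimizing this
    # objective *increases* the loss on the reference translation.
    out = nmt_model(inputs_embeds=embed_adv, labels=tgt_tokens)
    l_translation = -out.loss
    # Similarity term: keep adversarial embeddings close to the originals.
    target = torch.ones(embed_adv.size(0), device=embed_adv.device)
    l_similarity = F.cosine_embedding_loss(
        embed_adv.flatten(1), embed_orig.flatten(1), target)
    return l_translation + alpha * l_similarity

# Usage sketch: gradient descent over the continuous input embeddings,
# followed by projection back to the nearest valid tokens (not shown).
# embed_adv = embed_orig.clone().requires_grad_(True)
# opt = torch.optim.Adam([embed_adv], lr=1e-3)
# for _ in range(num_steps):
#     opt.zero_grad()
#     adversarial_loss(embed_orig, embed_adv, nmt_model, tgt_tokens).backward()
#     opt.step()
```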
1 code implementation • 2 Mar 2023 • Sahar Sadrizadeh, AmirHossein Dabiri Aghdam, Ljiljana Dolamic, Pascal Frossard
In this paper, we propose a new targeted adversarial attack against NMT models.
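To make the targeted/untargeted distinction concrete, here is an illustrative pair of losses over a HuggingFace-style seq2seq model; the function names are hypothetical and this is not the paper's method. An untargeted attack only needs the translation to become wrong, while a targeted attack steers the model toward an attacker-chosen output.

```python
import torch

def untargeted_loss(model, input_ids, reference_ids):
    # Untargeted: maximize the loss w.r.t. the correct translation,
    # so any sufficiently wrong output counts as success.
    return -model(input_ids=input_ids, labels=reference_ids).loss

def targeted_loss(model, input_ids, target_ids):
    # Targeted: minimize the loss w.r.t. an attacker-chosen translation,
    # pushing the model to emit specific (e.g. misleading) content.
    return model(input_ids=input_ids, labels=target_ids).loss
```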
1 code implementation • 2 Feb 2023 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial perturbations.
1 code implementation • 11 Mar 2022 • Sahar Sadrizadeh, Ljiljana Dolamic, Pascal Frossard
Recently, it has been shown that, despite the strong performance of deep neural networks across many fields, they remain vulnerable to adversarial examples.