no code implementations • ICLR 2019 • Paul Michel, Graham Neubig, Xian Li, Juan Miguel Pino
Adversarial examples have been shown to be an effective way of assessing the robustness of neural sequence-to-sequence (seq2seq) models: applying perturbations to a model's input can lead to large degradations in performance.
1 code implementation • NAACL 2019 • Paul Michel, Xian Li, Graham Neubig, Juan Miguel Pino
Adversarial examples, i.e. perturbations to the input of a model that elicit large changes in the output, have been shown to be an effective way of assessing the robustness of sequence-to-sequence (seq2seq) models.
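Both abstracts describe the same core evaluation loop: perturb a model's input and measure how much its output degrades. Below is a minimal sketch of that loop; the `swap_perturb` character-swap attack, the `model` callable, and the difflib-based similarity score are illustrative stand-ins, not the attack or metrics from either paper.

```python
import random
import difflib


def swap_perturb(sentence: str, rng: random.Random) -> str:
    """Toy perturbation: swap two adjacent characters in one randomly
    chosen word. Illustrative only; not the papers' actual attack."""
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return sentence
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)


def output_similarity(ref: str, hyp: str) -> float:
    """Crude stand-in for a translation metric such as BLEU or chrF."""
    return difflib.SequenceMatcher(None, ref, hyp).ratio()


def degradation(model, source: str, rng=None) -> float:
    """Relative change in output caused by one input perturbation.
    `model` is any callable str -> str (a hypothetical seq2seq system)."""
    rng = rng or random.Random(0)
    clean_out = model(source)
    adv_out = model(swap_perturb(source, rng))
    return 1.0 - output_similarity(clean_out, adv_out)


if __name__ == "__main__":
    # Toy "model": identity mapping, so any output change comes
    # directly from the input perturbation.
    model = lambda s: s
    src = "adversarial examples probe the robustness of seq2seq models"
    print(f"relative output change: {degradation(model, src):.2f}")
```

A real evaluation would replace the identity `model` with a trained translation system and `output_similarity` with a proper reference-based metric, then average the degradation over a test set.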