1 code implementation • 12 Jun 2023 • Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot
While machine learning models that operate on visual renderings of text have become robust against a wide range of existing text-based attacks, we show that they remain vulnerable to visual adversarial examples encoded as text.
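A minimal sketch of the kind of perturbation involved, not the paper's code: replacing Latin letters with visually confusable Cyrillic homoglyphs yields a string whose rendering looks essentially unchanged to a human while every substituted code point differs. The homoglyph map below is an illustrative assumption.

```python
# Hypothetical homoglyph map: Latin letters -> visually similar Cyrillic ones.
HOMOGLYPHS = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "p": "\u0440",  # CYRILLIC SMALL LETTER ER
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
}

def perturb(text: str) -> str:
    """Replace each mappable character with a near-identical-looking homoglyph."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "accept the proposal"
adversarial = perturb(original)

print(adversarial)              # renders almost identically to the original
print(original == adversarial)  # False: the underlying code points differ
```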
1 code implementation • 27 Apr 2023 • Nicholas Boucher, Luca Pajola, Ilia Shumailov, Ross Anderson, Mauro Conti
Search engines are vulnerable to attacks on both indexing and searching via text-encoding manipulation.
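A toy illustration of the failure mode, under assumptions and not drawn from the paper's code: an inverted index keyed on raw code points misses a document whose visible text matches the query, because invisible characters have been injected into the indexed text.

```python
ZWSP = "\u200b"  # ZERO WIDTH SPACE: renders as nothing in most fonts

def poison(text: str) -> str:
    """Insert an invisible character inside every word of the document."""
    return " ".join(w[:1] + ZWSP + w[1:] for w in text.split())

documents = {1: poison("climate change report")}

# Naive inverted index: token -> set of document ids, keyed on raw code points.
index: dict[str, set[int]] = {}
for doc_id, text in documents.items():
    for token in text.split():
        index.setdefault(token, set()).add(doc_id)

# The visible word "climate" never appears as a raw token, so search fails.
print(index.get("climate", set()))  # set()
```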
1 code implementation • 18 Jun 2021 • Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot
In this paper, we explore a large class of adversarial examples that can be used to attack text-based models in a black-box setting without making any human-perceptible visual modification to inputs.
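A minimal sketch of the black-box setting, with loud assumptions: `query_model` below is a hypothetical stand-in for any confidence-returning API, and the toy greedy loop is not the paper's optimizer. The actual work covers several perturbation classes (invisible characters, homoglyphs, reorderings, deletions); this sketch uses only invisible-character insertion.

```python
ZWJ = "\u200d"  # ZERO WIDTH JOINER: no visible rendering in most fonts

def query_model(text: str) -> float:
    # Hypothetical black-box target: returns the model's confidence in the
    # correct class. Placeholder logic only; swap in a real API call.
    return 1.0 / len(text)

def greedy_invisible_attack(text: str, budget: int) -> str:
    """Greedily insert invisible characters where they most reduce the score."""
    best = text
    for _ in range(budget):
        candidates = [best[:i] + ZWJ + best[i:] for i in range(len(best) + 1)]
        best = min(candidates, key=query_model)
    return best

adv = greedy_invisible_attack("this film was wonderful", budget=3)
print(adv == "this film was wonderful")  # False, yet it renders identically
```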