Search Results for author: Lukas Huber

Found 2 papers, 1 paper with code

Detecting Word-Level Adversarial Text Attacks via SHapley Additive exPlanations

no code implementations RepL4NLP (ACL) 2022 Edoardo Mosca, Lukas Huber, Marc Alexander Kühn, Georg Groh

State-of-the-art machine learning models are prone to adversarial attacks: maliciously crafted inputs that fool the model into making a wrong prediction, often with high confidence.
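To make the idea concrete, here is a minimal toy sketch (not the paper's method or its detection approach): a word-level adversarial attack greedily substitutes near-synonyms until a simple bag-of-words sentiment classifier flips its prediction. The classifier, synonym table, and inputs are all hypothetical illustrations.

```python
# Keyword sets for a toy bag-of-words sentiment classifier.
POSITIVE = {"great", "excellent", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful"}

def classify(text: str) -> str:
    """Label text by counting sentiment keywords; ties go negative."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return "positive" if pos > neg else "negative"

# Hypothetical synonym table the attacker searches over.
SYNONYMS = {"great": ["fine", "solid"], "excellent": ["fine"]}

def word_level_attack(text: str) -> str:
    """Greedily swap one word at a time until the model's label flips."""
    original = classify(text)
    words = text.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w.lower(), []):
            candidate = " ".join(words[:i] + [syn] + words[i + 1:])
            if classify(candidate) != original:
                return candidate  # meaning-preserving swap that flips the label
    return text  # no successful attack found

print(word_level_attack("a great movie"))  # → "a fine movie"
```

Here the attack succeeds because the classifier relies on surface keywords, while a human reader would still judge "a fine movie" as positive; real word-level attacks use much larger synonym spaces and stronger target models.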

Adversarial Text
