no code implementations • 8 Apr 2024 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
We study the unique, less well understood problem of generating sparse adversarial examples simply by observing the score-based replies to model queries.
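As a rough illustration of this score-based threat model, the sketch below runs a greedy random search that perturbs a few input coordinates at a time and keeps any change that lowers the true-class score. The `query_scores` model, the sign-flip perturbation, and all parameter values are hypothetical stand-ins, not the attack from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: the attacker observes only the returned
# score vector, never the weights or gradients.
W = rng.normal(size=(10, 64))
def query_scores(x):
    return W @ x  # class scores are the sole feedback channel

def sparse_score_attack(x, true_label, k=1, steps=200):
    """Greedy random search (illustrative only): flip the sign of k
    coordinates per step and keep changes that reduce the true-class
    score, yielding a sparse perturbation."""
    x_adv = x.copy()
    best = query_scores(x_adv)[true_label]
    for _ in range(steps):
        idx = rng.choice(x.size, size=k, replace=False)
        cand = x_adv.copy()
        cand[idx] = -cand[idx]  # perturb a small, sparse set of coordinates
        score = query_scores(cand)[true_label]
        if score < best:  # accept only score-decreasing candidates
            best, x_adv = score, cand
    return x_adv

x = rng.normal(size=64)
label = int(np.argmax(query_scores(x)))
x_adv = sparse_score_attack(x, label)
```

The attacker never touches model internals: every decision is made from the scores returned by `query_scores`, which is what makes the setting practical against deployed systems.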
1 code implementation • 31 Jan 2022 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
The ability to extract information solely from the output of a machine learning model in order to craft adversarial perturbations against black-box models is a practical threat to real-world systems, such as autonomous cars or machine learning models exposed as a service (MLaaS).
1 code implementation • 10 Dec 2021 • Viet Quoc Vo, Ehsan Abbasnejad, Damith C. Ranasinghe
In our study, we first take a deep dive into recent state-of-the-art decision-based attacks published at ICLR and S&P to highlight the costly nature of discovering low-distortion adversarial examples using gradient estimation methods.
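To make the cost of such gradient estimation concrete, the sketch below shows a Monte-Carlo estimator of the decision-boundary normal from hard labels alone, in the spirit of HopSkipJumpAttack-style estimators. The linear `hard_label` classifier, the query budget, and the sampling radius are illustrative assumptions; note that even this toy setting burns thousands of queries for one gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hard-label classifier: only the predicted label is observable,
# mimicking the decision-based (label-only) threat model.
w = rng.normal(size=32)
def hard_label(x):
    return int(w @ x > 0)

def estimate_gradient(x, n_queries=2000, delta=0.05):
    """Monte-Carlo estimate of the boundary normal at x: average random
    unit directions, signed by whether the perturbed query crosses the
    decision boundary. Each sample costs one model query."""
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.size)
        u /= np.linalg.norm(u)
        phi = 1.0 if hard_label(x + delta * u) == 1 else -1.0
        g += phi * u
    return g / n_queries

# Evaluate at a point projected onto the decision boundary (w @ x ~ 0),
# where the label flips and the estimator is informative.
x0 = rng.normal(size=32)
x0 -= (w @ x0) / (w @ w) * w
g = estimate_gradient(x0)
cos_sim = (g @ w) / (np.linalg.norm(g) * np.linalg.norm(w))
```

With 2,000 label-only queries the estimate aligns well with the true normal `w` in this linear toy case; against a real network, each refinement step of the attack pays a comparable per-step query cost, which is the expense the study highlights.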