1 code implementation • 21 Jul 2022 • Xiruo Liu, Shibani Singh, Cory Cornelius, Colin Busho, Mike Tan, Anindya Paul, Jason Martin
Existing adversarial example research focuses on perturbations digitally inserted on top of existing natural-image datasets.
no code implementations • 8 Jan 2021 • Marissa Dotter, Sherry Xie, Keith Manville, Josh Harguess, Colin Busho, Mikel Rodriguez
In other words, is there a way to find a signal in these attacks that exposes the attack algorithm, model architecture, or hyperparameters used in the attack?