Search Results for author: Jeffrey Bickford

Found 2 papers, 1 paper with code

Adversarial Robustness with Non-uniform Perturbations

1 code implementation • NeurIPS 2021 • Ecenaz Erdemir, Jeffrey Bickford, Luca Melis, Sergul Aydore

Robustness of machine learning models is critical for security related applications, where real-world adversaries are uniquely focused on evading neural network based detectors.

Tasks: Adversarial Robustness, Malware Classification, +1
