no code implementations • 28 Aug 2023 • Nathan Inkawhich, Gwendolyn McDonald, Ryan Luley
We show our attacks to be potent in whitebox and blackbox settings, as well as when transferred across foundational model types (e.g., attack DINOv2 with CLIP)!
1 code implementation • 25 Mar 2023 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li
Building reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.
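As background for the OOD-detection setting (this is a standard training-free baseline, not the method proposed in the paper above), the maximum softmax probability (MSP) score flags low-confidence inputs as OOD without ever seeing OOD data during training. A minimal NumPy sketch:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means more in-distribution."""
    return softmax(logits).max(axis=-1)

# A confident (peaked) prediction scores higher than a near-uniform one.
in_dist = np.array([8.0, 0.5, 0.2])   # peaked logits -> likely in-distribution
ood = np.array([1.1, 1.0, 0.9])       # flat logits -> likely OOD
print(msp_score(in_dist) > msp_score(ood))  # True
```

Thresholding this score then turns any pretrained classifier into a rudimentary OOD detector.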
no code implementations • 1 Jan 2021 • Jing Lin, Ryan Luley, Kaiqi Xiong
To check the performance of the proposed method under an adversarial setting, i.e., malicious mislabeling and data poisoning attacks, we perform an extensive evaluation on the reduced CIFAR-10 dataset, which contains only two classes: airplane and frog.
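A two-class subset like the one described can be built by filtering on the CIFAR-10 label indices (airplane = 0, frog = 6). A minimal sketch on toy arrays, assuming torchvision-style integer labels; the helper name `reduce_cifar10` is illustrative, not from the paper:

```python
import numpy as np

AIRPLANE, FROG = 0, 6  # CIFAR-10 class indices

def reduce_cifar10(images, labels, keep=(AIRPLANE, FROG)):
    """Keep only the requested classes and remap labels to 0..len(keep)-1."""
    mask = np.isin(labels, keep)
    remap = {c: i for i, c in enumerate(keep)}
    new_labels = np.array([remap[c] for c in labels[mask]])
    return images[mask], new_labels

# Toy stand-in for the real dataset: six "images" with mixed labels.
images = np.arange(6).reshape(6, 1)
labels = np.array([0, 3, 6, 6, 1, 0])
x, y = reduce_cifar10(images, labels)
print(y)  # [0 1 1 0]
```

The same filter applied to the full CIFAR-10 arrays yields the 12,000-image airplane/frog subset used in such binary evaluations.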