Search Results for author: Ryan Luley

Found 3 papers, 1 paper with code

Adversarial Attacks on Foundational Vision Models

no code implementations · 28 Aug 2023 · Nathan Inkawhich, Gwendolyn McDonald, Ryan Luley

We show our attacks to be potent in both whitebox and blackbox settings, as well as when transferred across foundational model types (e.g., attacking DINOv2 with CLIP).

SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection

1 code implementation · 25 Mar 2023 · Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li

Building up reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.

Task: Out-of-Distribution Detection

Active Learning Under Malicious Mislabeling and Poisoning Attacks

no code implementations · 1 Jan 2021 · Jing Lin, Ryan Luley, Kaiqi Xiong

To evaluate the proposed method under an adversarial setting, i.e., malicious mislabeling and data poisoning attacks, we perform an extensive evaluation on a reduced CIFAR-10 dataset containing only two classes: airplane and frog.
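A reduced two-class subset like the one described can be built by filtering on class labels. The sketch below is illustrative, not the authors' code: it uses the standard CIFAR-10 class ordering (airplane = 0, frog = 6) and synthetic stand-in arrays in place of the real dataset; the function name `reduce_to_two_classes` is hypothetical.

```python
import numpy as np

# Standard CIFAR-10 class indices: airplane = 0, frog = 6.
# Remap the kept classes to binary labels 0/1 for the reduced task.
KEEP = {0: 0, 6: 1}

def reduce_to_two_classes(images, labels, keep=KEEP):
    """Return only samples whose label is in `keep`, with labels remapped."""
    labels = np.asarray(labels)
    mask = np.isin(labels, list(keep))
    new_labels = np.array([keep[int(y)] for y in labels[mask]])
    return images[mask], new_labels

# Demo with synthetic stand-in data shaped like CIFAR-10 (32x32 RGB).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
labels = rng.integers(0, 10, size=100)

sub_images, sub_labels = reduce_to_two_classes(images, labels)
```

In real use, `images` and `labels` would come from the actual CIFAR-10 training and test splits, applied to each split separately.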

Tasks: Active Learning · Data Poisoning · +1
