Search Results for author: Ryan Goldhahn

Found 6 papers, 0 papers with code

Real-Time Fully Unsupervised Domain Adaptation for Lane Detection in Autonomous Driving

no code implementations • 29 Jun 2023 • Kshitij Bhardwaj, Zishen Wan, Arijit Raychowdhury, Ryan Goldhahn

While deep neural networks are used heavily in autonomous driving, they must be adapted to new, unseen environmental conditions on which they were not trained.

Autonomous Driving
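Fully unsupervised, test-time adaptation of this kind is often framed as minimizing the entropy of the model's own predictions on unlabeled target data. The sketch below is only a generic illustration of that idea in numpy, not the paper's actual method; restricting the update to a shared per-class bias is an assumption made for brevity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def adapt_bias(logits, steps=50, lr=0.1):
    """Learn a shared per-class bias b that minimizes mean prediction
    entropy on an unlabeled batch (a toy stand-in for test-time adaptation)."""
    b = np.zeros(logits.shape[1])
    for _ in range(steps):
        p = softmax(logits + b)
        h = entropy(p)
        # analytic gradient of mean entropy w.r.t. the shared bias:
        # dH/dz_k = -p_k * (log p_k + H)
        grad = (-p * (np.log(p + 1e-12) + h[:, None])).mean(axis=0)
        b -= lr * grad
    return b

rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 4))       # unlabeled target-domain logits
p0 = softmax(logits)
b = adapt_bias(logits)
p1 = softmax(logits + b)
print(entropy(p0).mean(), entropy(p1).mean())  # entropy drops after adaptation
```

No target labels are used anywhere; the model's confidence on the new domain is the only training signal.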

Less is More: Data Pruning for Faster Adversarial Training

no code implementations • 23 Feb 2023 • Yize Li, Pu Zhao, Xue Lin, Bhavya Kailkhura, Ryan Goldhahn

Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the real world.
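The general data-pruning recipe implied by the title is to score training examples and keep only an informative fraction, so the expensive adversarial-training loop runs over fewer points. The snippet below sketches that generic recipe; the scoring criterion (per-example loss) is an assumption, as the paper's exact selection rule is not shown here.

```python
import numpy as np

def prune_by_score(scores, keep_frac=0.5):
    """Keep the highest-scoring fraction of training examples.

    `scores` could be per-example loss, margin, or forgetting counts;
    which criterion the paper actually uses is not assumed here.
    """
    n_keep = max(1, int(len(scores) * keep_frac))
    keep_idx = np.argsort(scores)[-n_keep:]   # indices of the "hardest" examples
    return np.sort(keep_idx)

rng = np.random.default_rng(1)
losses = rng.exponential(size=1000)           # stand-in per-example losses
idx = prune_by_score(losses, keep_frac=0.3)
print(len(idx))  # 300 examples retained for adversarial training
```

Training on the retained subset cuts the cost of each epoch roughly in proportion to `keep_frac`, which is where the speedup comes from.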

Efficient Multi-Prize Lottery Tickets: Enhanced Accuracy, Training, and Inference Speed

no code implementations • 26 Sep 2022 • Hao Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, Ryan Goldhahn, Bhavya Kailkhura

Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted, full-precision neural networks.
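In its simplest form, the prune-and-quantize step takes a random weight tensor, keeps the largest-magnitude fraction, and replaces the survivors with their signs. This is a deliberately simplified numpy illustration; the magnitude-based score and the layer-wise keep fraction are assumptions, not the method proposed in the paper.

```python
import numpy as np

def binary_subnetwork(W, keep_frac=0.5):
    """Prune + binarize one random weight matrix: keep the largest-magnitude
    fraction of entries and replace each survivor with its sign, so the
    resulting layer stores only a mask and one bit per surviving weight."""
    thresh = np.quantile(np.abs(W), 1.0 - keep_frac)
    mask = np.abs(W) >= thresh
    return np.sign(W) * mask

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 8))                  # randomly initialized weights
Wb = binary_subnetwork(W, keep_frac=0.25)
print(sorted(np.unique(Wb)))                 # weights are now in {-1, 0, +1}
```

The appeal is inference speed and memory: a {-1, 0, +1} layer needs no full-precision multiplies, yet the random initialization is never trained in the usual sense, only pruned and quantized.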

Mixture of Robust Experts (MoRE): A Robust Denoising Method Towards Multiple Perturbations

no code implementations • 21 Apr 2021 • Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn

To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem, generating first-order adversarial examples, embedded within the outer minimization of the training loss.

Adversarial Robustness Denoising
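The inner-max/outer-min structure described in the abstract can be made concrete on a toy model. Below, a one-step FGSM attack plays the role of the first-order inner maximization and a gradient step on the perturbed batch is the outer minimization; logistic regression, the step sizes, and FGSM itself are all illustrative assumptions, not the MoRE method.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def adv_train_step(w, X, y, eps=0.1, lr=0.1):
    """One outer-minimization step on inner-max (one-step FGSM) examples,
    for logistic loss log(1 + exp(-y * w.x))."""
    s = sigmoid(-y * (X @ w))
    grad_x = -(y * s)[:, None] * w[None, :]     # gradient of loss w.r.t. inputs
    X_adv = X + eps * np.sign(grad_x)           # inner maximization (FGSM step)
    s_adv = sigmoid(-y * (X_adv @ w))
    grad_w = -((y * s_adv)[:, None] * X_adv).mean(axis=0)
    return w - lr * grad_w                      # outer minimization

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.ones(5))                    # linearly separable toy labels
w = np.zeros(5)
for _ in range(100):
    w = adv_train_step(w, X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)  # clean accuracy after adversarial training
```

Each step trains the model on the worst-case first-order perturbation of the current batch rather than on the clean inputs, which is exactly the min-max structure the abstract refers to.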

Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing

no code implementations • 30 Mar 2021 • Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou

Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks.

Federated Learning
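Randomized smoothing, the certification tool named in the title, classifies by majority vote over Gaussian-perturbed copies of the input and certifies an L2 radius of sigma * Phi^-1(p) when the top-class vote share p exceeds 1/2. The sketch below shows only this standard smoothing step in isolation; how the paper combines it with federated adversarial learning is not reproduced here, and the toy classifier is an assumption.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Monte-Carlo randomized smoothing: majority vote of base classifier `f`
    under Gaussian noise, plus the standard certified L2 radius
    sigma * Phi^-1(p_hat) for the winning class."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=(n, x.size))
    votes = np.array([f(x + d) for d in noise])
    top = np.bincount(votes).argmax()
    p_hat = min((votes == top).mean(), 1 - 1e-6)   # clip away from 1 for inv_cdf
    radius = sigma * NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius

# toy base classifier: predicts 1 iff the first coordinate is positive
f = lambda z: int(z[0] > 0)
cls, r = smoothed_predict(f, np.array([1.0, 0.0]))
print(cls, r)  # confident class with a positive certified radius
```

The certificate holds for the smoothed classifier regardless of how the base model was trained, which is what makes the guarantee compatible with a distributed, adversarially trained setting.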
