1 code implementation • 12 Apr 2024 • Dipkamal Bhusal, Md Tanvirul Alam, Monish K. Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
However, we observe that both model predictions and feature attributions for input samples are sensitive to noise.
no code implementations • 7 Jan 2024 • Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford, Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi
We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras.
no code implementations • 31 Oct 2022 • Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
Interpretability, trustworthiness, and usability are key considerations in high-stakes security applications, especially when utilizing deep learning models.
no code implementations • 1 Mar 2022 • Nidhi Rastogi, Sara Rampazzi, Michael Clifford, Miriam Heller, Matthew Bishop, Karl Levitt
We present a model that explains "certainty" and "uncertainty" in sensor input -- a missing characteristic in data collection.