16 Jul 2019 • Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao
In contrast to prior work that concentrates on camera-based perception, we perform the first security study of LiDAR-based perception in AV settings, an area that is highly important yet unexplored.
1 Mar 2022 • Nidhi Rastogi, Sara Rampazzi, Michael Clifford, Miriam Heller, Matthew Bishop, Karl Levitt
We present a model that explains certainty and uncertainty in sensor input, a characteristic missing from data collection.
31 Oct 2022 • Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
Interpretability, trustworthiness, and usability are key considerations in high-stakes security applications, especially when deep learning models are used.
24 Jan 2023 • Yan Long, Pirouz Naghavi, Blas Kojusner, Kevin Butler, Sara Rampazzi, Kevin Fu
Our paper characterizes the limits of acoustic information leakage caused by structure-borne sound that perturbs the point of view (POV) of smartphone cameras.
7 Jan 2024 • Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford, Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi
We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras.