In particular, model-level LiDAR spoofing attacks aim to inject fake depth measurements to elicit ghost objects that are erroneously detected by 3D Object Detectors, resulting in hazardous driving decisions.
In this paper, we introduce theoretically motivated measures to quantify information leakage in both attack-dependent and attack-independent manners.
LiDARs play a critical role in the perception of Autonomous Vehicles (AVs) and their safe operation.
Training deep neural networks via federated learning allows clients to share only models trained on their data, rather than the original data itself.
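The scheme described above can be sketched in a few lines: each client runs local gradient steps on its own data, and the server only ever sees the resulting weights, which it averages into a new global model. This is a minimal, hypothetical federated-averaging sketch (all names and the logistic-regression task are illustrative assumptions, not the paper's actual setup).

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain logistic-regression gradient steps.
    The raw (X, y) never leaves the client; only the weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server-side step: average the clients' locally trained weights."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Toy setup: three clients, each holding a private dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.integers(0, 2, 32).astype(float))
           for _ in range(3)]

w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, clients)
```

Note that even though only weights are exchanged, they are derived from private data, which is exactly why the leakage-quantification question above arises.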
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
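Model partitioning as described above can be illustrated with a toy split of a sequential network: the early layers run in the normal (untrusted) world, while the later layers, whose outputs are most useful to membership-inference-style attacks, would execute inside the TEE. This is a hedged structural sketch only; the class names, layer sizes, and the plain-Python "enclave" stand-in are assumptions, not DarkneTZ's actual implementation.

```python
import numpy as np

class Layer:
    """A single dense layer with ReLU activation."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(size=(n_in, n_out)) * 0.1
    def forward(self, x):
        return np.maximum(0.0, x @ self.W)

class PartitionedModel:
    """Splits a sequential model into a public part and a protected part."""
    def __init__(self, sizes, split, rng):
        layers = [Layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
        self.public = layers[:split]      # runs in the untrusted OS
        self.protected = layers[split:]   # would run inside the TEE

    def forward(self, x):
        for layer in self.public:
            x = layer.forward(x)
        # In a real deployment, x would cross into the secure world here,
        # and intermediate outputs of the protected layers would never
        # leave the enclave in the clear.
        for layer in self.protected:
            x = layer.forward(x)
        return x

rng = np.random.default_rng(1)
model = PartitionedModel([8, 16, 16, 4], split=2, rng=rng)
out = model.forward(rng.normal(size=(2, 8)))
```

The design trade-off is where to place the split: pushing more layers into the TEE shrinks the attack surface but costs more of the device's limited secure memory.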
no code implementations • 28 Mar 2017 • Nan Zhang, Soteris Demetriou, Xianghang Mi, Wenrui Diao, Kan Yuan, Peiyuan Zong, Feng Qian, Xiao-Feng Wang, Kai Chen, Yuan Tian, Carl A. Gunter, Kehuan Zhang, Patrick Tague, Yue-Hsun Lin
We systematize this process by proposing a taxonomy for the IoT ecosystem and organizing IoT security into five problem areas.
Cryptography and Security