In this work, we investigate data augmentation techniques for the task of AD detection and perform an empirical evaluation of the different approaches on two kinds of models for both the text and audio domains.
We achieve this by searching for void regions and locating the obstacles that cause these shadows.
In particular, model-level LiDAR spoofing attacks aim to inject fake depth measurements to elicit ghost objects that are erroneously detected by 3D Object Detectors, resulting in hazardous driving decisions.
Our proposed framework enables clients to localize and quantify private information leakage layer by layer, offering a better understanding of the sources of information leakage in collaborative learning; future studies can use it to benchmark new attacks and defense mechanisms.
LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and safe operation.
Training deep neural networks via federated learning allows clients to share only the model trained on their data, rather than the original data itself.
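A minimal sketch of this idea, in the style of federated averaging (FedAvg): each client computes an update on its private data and ships only the resulting weights to the server, which averages them. The model (one-parameter least squares), the helper names, and the toy data are all illustrative assumptions, not part of any framework described above.

```python
# FedAvg-style sketch: clients share weights, never raw data.
# Model, data, and function names are illustrative only.

def local_update(weights, client_data, lr=0.1):
    """One gradient step of a least-squares fit y ~ w*x on client-private data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in client_data) / len(client_data)
    return [w - lr * grad]

def federated_round(global_weights, clients):
    """Server aggregates client updates by simple averaging (FedAvg)."""
    updates = [local_update(global_weights, data) for data in clients]
    return [sum(ws) / len(ws) for ws in zip(*updates)]

# Two clients with private (x, y) samples drawn near the line y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client A
    [(1.0, 2.2), (3.0, 6.0)],   # client B (slightly noisy)
]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, clients)
# weights[0] converges near 2.0 although the server never saw the samples.
```

Note that only `weights` crosses the client/server boundary; the leakage analyses above ask how much of the private samples can still be inferred from those shared weights.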
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
no code implementations • 28 Mar 2017 • Nan Zhang, Soteris Demetriou, Xianghang Mi, Wenrui Diao, Kan Yuan, Peiyuan Zong, Feng Qian, Xiao-Feng Wang, Kai Chen, Yuan Tian, Carl A. Gunter, Kehuan Zhang, Patrick Tague, Yue-Hsun Lin
We systematize this process by proposing a taxonomy for the IoT ecosystem and organizing IoT security into five problem areas.