no code implementations • 25 Aug 2017 • Xinyang Zhang, Yujie Ji, Ting Wang
Many of today's machine learning (ML) systems are not built from scratch, but are compositions of an array of modular learning components (MLCs).
no code implementations • 2 Dec 2017 • Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang
We believe that this work opens a new direction for designing adversarial input detection methods.
no code implementations • 1 Aug 2018 • Yujie Ji, Xinyang Zhang, Ting Wang
Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs: such maliciously crafted samples trigger DNNs to misbehave, leading to detrimental consequences for DNN-powered systems.
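The kind of maliciously crafted sample described above can be illustrated with the fast gradient sign method (FGSM), a standard adversarial-input technique; this is a minimal sketch, not the paper's own attack, and the linear classifier below is a toy stand-in for a DNN:

```python
import numpy as np

# Toy setup: a linear classifier w and a benign input x (hypothetical values,
# for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # classifier weights
x = rng.normal(size=8)   # benign input
y = 1.0                  # true label in {+1, -1}

def margin(x):
    # Positive margin => correct classification.
    return y * np.dot(w, x)

# Gradient of the hinge-style loss max(0, 1 - margin) with respect to x
# is -y * w; FGSM perturbs the input along the sign of that gradient.
grad = -y * w
eps = 0.5
x_adv = x + eps * np.sign(grad)

print(margin(x), margin(x_adv))  # the margin drops after the perturbation
```

For this linear model the margin is guaranteed to decrease by `eps * sum(|w_i|)`, which is why even a small, sign-bounded perturbation can flip the prediction; against real DNNs the same gradient-sign idea is applied to the network's loss.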
no code implementations • 2 Dec 2018 • Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang
By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective: the host systems misbehave on the targeted inputs as desired by the adversary with high probability; (ii) evasive: the malicious models function indistinguishably from their benign counterparts on non-targeted inputs; (iii) elastic: the malicious models remain effective regardless of various system design choices and tuning strategies; and (iv) easy: the adversary needs little prior knowledge about the data used for system tuning or inference.
Cryptography and Security