Search Results for author: Yujie Ji

Found 4 papers, 0 papers with code

Model-Reuse Attacks on Deep Learning Systems

no code implementations · 2 Dec 2018 · Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang

By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective - the host systems misbehave on the targeted inputs as desired by the adversary with high probability, (ii) evasive - the malicious models function indistinguishably from their benign counterparts on non-targeted inputs, (iii) elastic - the malicious models remain effective regardless of various system design choices and tuning strategies, and (iv) easy - the adversary needs little prior knowledge about the data used for system tuning or inference.
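The abstract describes what a model-reuse attack achieves rather than how it works. As a toy illustration of the threat model only (not the paper's algorithm), the sketch below perturbs a stand-in model's weights so that one targeted input is misclassified while penalizing output drift on held-out benign inputs; the model, data, and hyperparameters are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a reused pretrained model (hypothetical, not the paper's setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

x_target = torch.rand(1, 1, 28, 28)   # the adversary's targeted input
y_target = torch.tensor([7])          # the label the adversary wants for it
x_benign = torch.rand(32, 1, 28, 28)  # held-out inputs whose behavior must not drift
with torch.no_grad():
    benign_logits = model(x_benign).clone()  # reference outputs before tampering

opt = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    # Make the targeted input misclassified as y_target ("effective")...
    attack_loss = F.cross_entropy(model(x_target), y_target)
    # ...while keeping outputs on benign inputs close to the originals ("evasive").
    fidelity_loss = F.mse_loss(model(x_benign), benign_logits)
    (attack_loss + 10.0 * fidelity_loss).backward()
    opt.step()

print(model(x_target).argmax(dim=1))  # should now be the adversary's label, 7
```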

Cryptography and Security

EagleEye: Attack-Agnostic Defense against Adversarial Inputs (Technical Report)

no code implementations · 1 Aug 2018 · Yujie Ji, Xinyang Zhang, Ting Wang

Deep neural networks (DNNs) are inherently vulnerable to adversarial inputs: such maliciously crafted samples trigger DNNs to misbehave, leading to detrimental consequences for DNN-powered systems.
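As a minimal sketch of what such "maliciously crafted samples" can look like, the following implements the well-known fast gradient sign method (FGSM) in PyTorch. This is a generic example, not EagleEye's defense or the paper's specific attack model, and the tiny classifier is a hypothetical stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in classifier; any differentiable DNN works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()

def fgsm(model, x, label, eps=0.1):
    """Fast gradient sign method: one signed-gradient step that increases
    the model's loss on x, producing an adversarial input."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), label).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)  # stand-in "image" with pixels in [0, 1]
y = torch.tensor([3])         # its (assumed) true label
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```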

Where Classification Fails, Interpretation Rises

no code implementations · 2 Dec 2017 · Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang

We believe that this work opens a new direction for designing adversarial input detection methods.

Classification · General Classification

Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge

no code implementations · 25 Aug 2017 · Xinyang Zhang, Yujie Ji, Ting Wang

Many of today's machine learning (ML) systems are not built from scratch, but are compositions of an array of modular learning components (MLCs).
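To make the MLC idea concrete, here is a minimal sketch (assuming a PyTorch setting; both components are hypothetical stand-ins) of a system composed from a reused, frozen feature extractor and a small task-specific head trained by the system builder.

```python
import torch
import torch.nn as nn

# Hypothetical composition: a reused, frozen feature extractor (the MLC)
# plus a small task-specific head trained by the system builder.
feature_extractor = nn.Sequential(   # stand-in for a pretrained component
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in feature_extractor.parameters():
    p.requires_grad = False  # the reused component is taken as-is, not retrained

head = nn.Linear(16, 5)  # only this part is trained for the downstream task

system = nn.Sequential(feature_extractor, head)
print(system(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 5])
```

Because the reused component is typically adopted as-is rather than retrained, a maliciously crafted MLC enters the composed system unchanged, which is the attack surface this paper's title points to.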
