Search Results for author: Dong Su

Found 5 papers, 3 papers with code

Reaching Data Confidentiality and Model Accountability on the CalTrain

no code implementations • 7 Dec 2018 • Zhongshu Gu, Hani Jamjoom, Dong Su, Heqing Huang, Jialong Zhang, Tengfei Ma, Dimitrios Pendarakis, Ian Molloy

We also demonstrate that, when malicious training participants implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned and mislabeled training data that lead to runtime mispredictions.

Data Poisoning

Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

2 code implementations • ECCV 2018 • Dong Su, Huan Zhang, Hongge Chen, Jin-Feng Yi, Pin-Yu Chen, Yupeng Gao

Prediction accuracy has long been the sole standard for comparing the performance of different image classification models, including in the ImageNet competition.

General Classification Image Classification

Defending Against Machine Learning Model Stealing Attacks Using Deceptive Perturbations

no code implementations • 31 May 2018 • Taesung Lee, Benjamin Edwards, Ian Molloy, Dong Su

Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs.

BIG-bench Machine Learning
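The vulnerability described above (an adversary training a surrogate from nothing but queried output labels) can be illustrated with a minimal sketch. The victim model, query budget, and surrogate trainer below are all hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "victim": a linear classifier the attacker
# can only query for output labels on chosen inputs.
true_w = np.array([2.0, -1.0])

def victim_predict(x):
    """Oracle access: returns only the predicted label."""
    return (x @ true_w > 0).astype(int)

# Attacker: query the oracle on chosen inputs, collect the labels.
queries = rng.normal(size=(500, 2))
labels = victim_predict(queries)

# Fit a logistic-regression surrogate on the stolen labels
# (plain gradient descent, no external dependencies).
w = np.zeros(2)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(queries @ w)))
    w -= 0.1 * queries.T @ (p - labels) / len(labels)

# Measure how often the surrogate agrees with the victim on fresh inputs.
test = rng.normal(size=(1000, 2))
agreement = np.mean((test @ w > 0).astype(int) == victim_predict(test))
```

With only label access the surrogate closely mimics the victim's decision boundary; the paper's deceptive-perturbation defense targets exactly this kind of extraction.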

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation • ICLR 2018 • Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
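The extreme-value idea behind a CLEVER-style score can be sketched as follows: sample points in a ball around an input, take per-batch maxima of the margin-gradient norm as an estimate of the local Lipschitz constant, and divide the margin by it. This toy uses a linear margin and raw batch maxima in place of the paper's reverse-Weibull fit; all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable margin g(x) = f_c(x) - f_j(x) between two classes;
# its gradient stands in for the cross-Lipschitz quantity.
w_c, w_j = np.array([1.0, 2.0]), np.array([-1.0, 0.5])

def margin_grad(x):
    return w_c - w_j  # gradient of a linear margin is constant

x0 = np.array([0.3, -0.2])   # input being certified
radius = 0.5                 # sampling ball radius

# Sample batches in the ball, keep the max gradient norm per batch,
# and take the extreme value as the local Lipschitz estimate
# (the paper fits a reverse Weibull to these maxima instead).
batch_maxima = []
for _ in range(20):
    pts = x0 + radius * rng.uniform(-1, 1, size=(50, 2))
    batch_maxima.append(max(np.linalg.norm(margin_grad(p)) for p in pts))
lipschitz_est = max(batch_maxima)

# CLEVER-style score: margin at x0 divided by the Lipschitz estimate,
# a lower-bound-style estimate of the perturbation needed to flip the class.
g_x0 = (w_c - w_j) @ x0
clever_score = abs(g_x0) / lipschitz_est
```

For this linear toy the gradient is constant, so the estimate recovers the exact Lipschitz constant; for real networks the sampled maxima vary and the extreme-value fit is what makes the estimate tractable.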
