Search Results for author: Yinzhi Cao

Found 6 papers, 3 papers with code

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods

no code implementations · 4 Mar 2021 · William Paul, Yinzhi Cao, Miaomiao Zhang, Phil Burlina

Machine learning (ML) models used in medical imaging diagnostics can be vulnerable to a variety of privacy attacks, including membership inference attacks, that lead to violations of regulations governing the use of medical data and threaten to compromise their effective deployment in the clinic.

Practical Blind Membership Inference Attack via Differential Comparisons

1 code implementation · 5 Jan 2021 · Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

The success of the former heavily depends on the quality of the shadow model, i.e., the transferability between the shadow and the target model. The latter, given only black-box probing access to the target model, cannot make an effective inference of unknowns compared with MI attacks using shadow models, due to an insufficient number of qualified samples labeled with ground-truth membership information.
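To illustrate the black-box setting the abstract describes, here is a minimal confidence-thresholding membership inference baseline: it predicts "member" when the model's top softmax probability is high, exploiting the tendency of models to be more confident on their training data. This is a simple illustrative baseline, not the differential-comparison attack proposed in the paper; the threshold and probe data are hypothetical.

```python
import numpy as np

def confidence_mi_attack(probs, threshold=0.9):
    """Black-box membership inference via confidence thresholding.

    probs: (num_samples, num_classes) softmax outputs from probing the
    target model. Returns a boolean array: True = predicted member.
    (Illustrative baseline only; not the paper's BlindMI method.)
    """
    top_conf = probs.max(axis=1)  # model's confidence in its top prediction
    return top_conf > threshold

# Hypothetical softmax outputs for four probed samples
probs = np.array([
    [0.98, 0.01, 0.01],  # very confident -> predicted member
    [0.50, 0.30, 0.20],  # uncertain -> predicted non-member
    [0.95, 0.03, 0.02],
    [0.40, 0.35, 0.25],
])
print(confidence_mi_attack(probs).tolist())  # [True, False, True, False]
```

Shadow-model attacks instead train a classifier on such confidence vectors from a model the attacker controls, which is why their success hinges on the transferability between shadow and target.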

Inference Attack, Membership Inference Attack

Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems

no code implementations · 5 Dec 2017 · Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana

Finally, we show that retraining using the safety violations detected by VeriVis can reduce the average number of violations by up to 60.2%.

BIG-bench Machine Learning, Medical Diagnosis

DeepXplore: Automated Whitebox Testing of Deep Learning Systems

3 code implementations · 18 May 2017 · Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana

First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
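The coverage metric described above can be sketched as follows: a neuron counts as covered if any test input drives its activation above a threshold, and coverage is the fraction of covered neurons. This is a minimal NumPy sketch of the idea, not the DeepXplore implementation; the layer shapes and threshold are illustrative assumptions.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by any test input.

    activations: list of (num_inputs, num_neurons) arrays, one per layer.
    (Illustrative sketch of the neuron-coverage metric; not DeepXplore code.)
    """
    covered = 0
    total = 0
    for layer_acts in activations:
        active = (layer_acts > threshold).any(axis=0)  # per-neuron: any input fired it
        covered += int(active.sum())
        total += active.size
    return covered / total

# Toy example: two layers, three test inputs each
layer1 = np.array([[0.2, -0.1], [0.0, 0.5], [-0.3, -0.2]])   # both neurons covered
layer2 = np.array([[-1.0, -0.5], [-0.2, -0.1], [-0.4, -0.6]])  # neither covered
cov = neuron_coverage([layer1, layer2], threshold=0.0)
print(cov)  # 2 of 4 neurons covered -> 0.5
```

Maximizing this quantity over generated test inputs is what lets the testing framework systematically exercise different parts of the network's logic.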

Malware Detection, Self-Driving Cars
