Search Results for author: Zecheng He

Found 12 papers, 5 papers with code

Adversarial Medical Image with Hierarchical Feature Hiding

1 code implementation • 4 Dec 2023 • Qingsong Yao, Zecheng He, Yuexiang Li, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou

Interestingly, this vulnerability is a double-edged sword, which can be exploited to hide AEs.

Decision Making

Trainable Projected Gradient Method for Robust Fine-tuning

2 code implementations • CVPR 2023 • Junjiao Tian, Xiaoliang Dai, Chih-Yao Ma, Zecheng He, Yen-Cheng Liu, Zsolt Kira

To solve this problem, we propose the Trainable Projected Gradient Method (TPGM) to automatically learn the constraint imposed on each layer for fine-grained fine-tuning regularization.

Transfer Learning
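The per-layer projection underlying TPGM can be sketched as follows. This is a minimal NumPy illustration with a fixed radius and an assumed L2-ball constraint; in the paper each layer's constraint is itself learned, and `project_layer` is a hypothetical name, not the authors' API:

```python
import numpy as np

def project_layer(w_finetuned, w_pretrained, radius):
    """Project fine-tuned weights back onto an L2 ball of the given
    radius centered at the pre-trained weights, so the layer cannot
    drift arbitrarily far from its initialization."""
    delta = w_finetuned - w_pretrained
    norm = np.linalg.norm(delta)
    if norm <= radius:
        return w_finetuned  # already inside the constraint set
    return w_pretrained + delta * (radius / norm)
```

Calling such a projection after each optimizer step keeps every layer inside its trust region; per the abstract, TPGM goes further by learning each layer's constraint automatically rather than fixing `radius` by hand.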

Medical Aegis: Robust adversarial protectors for medical images

no code implementations • 22 Nov 2021 • Qingsong Yao, Zecheng He, S. Kevin Zhou

To the best of our knowledge, Medical Aegis is the first defense in the literature that successfully addresses strong adaptive adversarial example attacks on medical images.

CloudShield: Real-time Anomaly Detection in the Cloud

1 code implementation • 20 Aug 2021 • Zecheng He, Ruby B. Lee

Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks, by examining the prediction error distributions.

Anomaly Detection • Cloud Computing
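The triage step described above can be sketched as matching the observed prediction-error distribution against stored profiles. The KS-style distance, the `0.25` threshold, and all function names here are illustrative assumptions, not CloudShield's actual mechanism:

```python
import numpy as np

def classify_anomaly(errors, benign_profile, attack_profiles, threshold=0.25):
    """Compare the observed prediction-error distribution against a
    benign profile and known-attack profiles; errors matching neither
    are flagged as a zero-day (hypothetical triage logic)."""
    def dist(a, b):
        # max gap between empirical CDFs evaluated on a shared grid
        grid = np.linspace(0.0, 1.0, 101)
        ca = np.searchsorted(np.sort(a), grid) / len(a)
        cb = np.searchsorted(np.sort(b), grid) / len(b)
        return np.max(np.abs(ca - cb))
    if dist(errors, benign_profile) < threshold:
        return "benign"
    for name, profile in attack_profiles.items():
        if dist(errors, profile) < threshold:
            return name
    return "zero-day"
```

The point of the sketch is the three-way decision, benign vs. known attack vs. zero-day, driven purely by how the error distribution compares to previously seen distributions.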

Smartphone Impostor Detection with Behavioral Data Privacy and Minimalist Hardware Support

no code implementations • 11 Mar 2021 • Guangyuan Hu, Zecheng He, Ruby B. Lee

Impostors are attackers who take over a smartphone and gain access to the legitimate user's confidential and private information.

ActionBert: Leveraging User Actions for Semantic Understanding of User Interfaces

no code implementations • 22 Dec 2020 • Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, Jindong Chen, Blaise Agüera y Arcas

Our methodology is designed to leverage visual, linguistic and domain-specific features in user interaction traces to pre-train generic feature representations of UIs and their components.

Retrieval

A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks

1 code implementation • 17 Dec 2020 • Qingsong Yao, Zecheng He, Yi Lin, Kai Ma, Yefeng Zheng, S. Kevin Zhou

Deep neural networks (DNNs) for medical images are extremely vulnerable to adversarial examples (AEs), which raises security concerns about clinical decision making.

Adversarial Attack • Decision Making

Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection

1 code implementation • 10 Jul 2020 • Qingsong Yao, Zecheng He, Hu Han, S. Kevin Zhou

A comprehensive evaluation on a public dataset for cephalometric landmark detection demonstrates that the adversarial examples generated by ATI-FGSM break the CNN-based network more effectively and efficiently than the original Iterative FGSM attack.

Adversarial Attack
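For reference, the targeted Iterative FGSM baseline that ATI-FGSM is compared against looks roughly like this on a generic differentiable model. The adaptive, landmark-specific weighting that distinguishes ATI-FGSM is omitted, and `grad_fn` is a placeholder for an autograd-computed gradient:

```python
import numpy as np

def targeted_ifgsm(x, grad_fn, alpha, eps, steps):
    """Targeted iterative FGSM: repeated small signed-gradient steps
    that DEcrease the loss toward the attacker's chosen target, with
    the total perturbation clipped to an L-infinity ball of radius
    eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                 # gradient of the target loss w.r.t. input
        x_adv = x_adv - alpha * np.sign(g)  # descend toward the target
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The sign of the step is the only difference from the untargeted attack: here the loss with respect to the attacker's target is minimized rather than the true-label loss maximized.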

Sensitive-Sample Fingerprinting of Deep Neural Networks

no code implementations • CVPR 2019 • Zecheng He, Tianwei Zhang, Ruby Lee

Numerous cloud-based services are provided to help customers develop and deploy deep learning applications.

VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting

no code implementations • 9 Aug 2018 • Zecheng He, Tianwei Zhang, Ruby B. Lee

Even small weight changes can be clearly reflected in the model outputs, and observed by the customer.
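The idea that a carefully chosen probe input makes tiny weight changes visible in the output can be illustrated on a toy linear model. The real method searches for sensitive samples on a DNN by gradient-based optimization, so everything below, including both function names, is a deliberately simplified stand-in:

```python
import numpy as np

def make_sensitive_sample(w, bound=1.0):
    """For a toy linear model y = w . x, the output's sensitivity to
    each weight is dy/dw_i = x_i, so a maximally sensitive probe pushes
    every input coordinate to its bound (an illustrative stand-in for
    the gradient-based search in the paper)."""
    return bound * np.sign(w + (w == 0))  # avoid zero entries in the probe

def verify(model_w, reference_output, sample, tol=1e-6):
    """Flag the deployed model as tampered if its output on the
    fingerprint sample drifts from the recorded reference."""
    return abs(model_w @ sample - reference_output) <= tol
```

Because every coordinate of the probe is nonzero, a change to any single weight shifts the fingerprint output, which is the property the abstract's claim relies on.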

Privacy-preserving Machine Learning through Data Obfuscation

no code implementations • 5 Jul 2018 • Tianwei Zhang, Zecheng He, Ruby B. Lee

While outsourcing model training and serving tasks to the cloud is now prevalent, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties.

BIG-bench Machine Learning • Privacy Preserving

Power-Grid Controller Anomaly Detection with Enhanced Temporal Deep Learning

no code implementations • 18 Jun 2018 • Zecheng He, Aswin Raghavan, Guangyuan Hu, Sek Chai, Ruby Lee

Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller.

Anomaly Detection
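The train-on-normal-only, flag-large-prediction-error recipe can be sketched with an AR(1) predictor standing in for the paper's temporal deep learning model. The `k`-sigma threshold and the function names are assumptions for illustration:

```python
import numpy as np

def fit_ar1(normal_series):
    """Least-squares AR(1) fit on normal-only readings (a stand-in
    for the paper's temporal deep model trained on legitimate HPC
    readings). Returns the coefficient and residual std."""
    x, y = normal_series[:-1], normal_series[1:]
    a = (x @ y) / (x @ x)
    resid = y - a * x
    return a, resid.std()

def is_anomalous(series, a, sigma, k=5.0):
    """Flag a window whose one-step prediction error exceeds k sigma."""
    err = np.abs(series[1:] - a * series[:-1])
    return bool(np.any(err > k * sigma))
```

The key design choice mirrored here is that only normal behavior is modeled; anything the predictor cannot anticipate, whether a known attack or a zero-day, shows up as excess prediction error.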
