Search Results for author: Mingyi Zhou

Found 8 papers, 6 papers with code

Investigating White-Box Attacks for On-Device Models

1 code implementation • 8 Feb 2024 • Mingyi Zhou, Xiang Gao, Jing Wu, Kui Liu, Hailong Sun, Li Li

Our findings emphasize the need for developers to carefully consider their model deployment strategies and to use white-box methods to evaluate the vulnerability of on-device models.

Concealing Sensitive Samples against Gradient Leakage in Federated Learning

1 code implementation • 13 Sep 2022 • Jing Wu, Munawar Hayat, Mingyi Zhou, Mehrtash Harandi

Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server (one such round is sketched after this entry).

Federated Learning • Stochastic Optimization
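Context for the entry above: the privacy benefit comes from clients sending model updates rather than raw data, and those shared updates are exactly what gradient-leakage attacks target. Below is a minimal FedAvg-style round as a generic sketch, not this paper's concealment defense; `client_loaders` and the model are illustrative placeholders.

```python
import copy
import torch
import torch.nn as nn

def federated_round(global_model, client_loaders, lr=0.01):
    """One FedAvg-style round: clients train locally and share
    only model parameters; raw data never leaves the client."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for x, y in loader:  # local, private data
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # The server averages the received parameters, the only thing shared.
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

Gradient-leakage attacks try to reconstruct the private (x, y) pairs from these shared updates; concealing sensitive samples against that reconstruction is the threat model the paper addresses.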

Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions

1 code implementation • 22 Apr 2021 • Jing Wu, Mingyi Zhou, Ce Zhu, Yipeng Liu, Mehrtash Harandi, Li Li

Recently, adversarial attack methods have been developed to challenge the robustness of machine learning models (a baseline attack of this kind is sketched below).

Adversarial Attack
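For readers skimming the list, the kind of attack being benchmarked looks like the following PGD sketch (Madry et al.), a standard baseline rather than this paper's evaluation protocol; `model`, `eps`, and `alpha` are illustrative values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected Gradient Descent: iterative gradient-sign steps,
    projected back into an L-infinity ball of radius eps around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep pixels in valid range
    return x_adv.detach()
```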

Local Label Point Correction for Edge Detection of Overlapping Cervical Cells

1 code implementation • 5 Oct 2020 • Jiawei Liu, Huijie Fan, Qiang Wang, Wentao Li, Yandong Tang, Danbo Wang, Mingyi Zhou, Li Chen

The qualitative and quantitative experimental results show that our LLPC can improve the quality of manual labels and the accuracy of overlapping cell edge detection.

Cell Segmentation • Edge Detection • +2

Decision-based Universal Adversarial Attack

1 code implementation • 15 Sep 2020 • Jing Wu, Mingyi Zhou, Shuaicheng Liu, Yipeng Liu, Ce Zhu

A single universal perturbation can cause most natural images to be misclassified by classifiers (the fooling-rate idea is sketched below).

Adversarial Attack
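The universal-attack setting above is easy to state in code: one fixed perturbation `delta` is added to every image. The decision-based search that finds `delta` is the paper's contribution and is omitted here; this sketch only measures how well a given `delta` fools a model.

```python
import torch

@torch.no_grad()
def fooling_rate(model, images, delta, eps=10 / 255):
    """Fraction of images whose prediction flips when the same
    universal perturbation delta is added to every input."""
    delta = delta.clamp(-eps, eps)                       # bounded perturbation
    clean = model(images).argmax(dim=1)
    adv = model((images + delta).clamp(0, 1)).argmax(dim=1)
    return (clean != adv).float().mean().item()
```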

ProbaNet: Proposal-balanced Network for Object Detection

no code implementations • 6 May 2020 • Jing Wu, Xiang Zhang, Mingyi Zhou, Ce Zhu

Candidate object proposals generated by object detectors based on convolutional neural networks (CNNs) suffer from an easy-hard sample imbalance problem, which can hurt overall detection performance (a common remedy is sketched below).

Object • object-detection • +1
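The easy-hard imbalance named above is often illustrated with focal loss (Lin et al.), which down-weights easy samples; note this is a common remedy for the same problem, not ProbaNet's proposal-balancing scheme.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Down-weights well-classified (easy) samples by (1 - p)^gamma
    so training focuses on hard proposals."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p = torch.exp(-ce)                 # probability of the true class
    return ((1 - p) ** gamma * ce).mean()
```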

Adversarial Imitation Attack

no code implementations • 28 Mar 2020 • Mingyi Zhou, Jing Wu, Yipeng Liu, Xiaolin Huang, Shuaicheng Liu, Xiang Zhang, Ce Zhu

Then, the adversarial examples generated by the imitation model are used to fool the attacked model (the transfer step is sketched below).

Adversarial Attack
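The transfer step described above, attacking the white-box imitation model and checking whether the examples carry over, can be sketched as follows; the imitation training itself is the paper's method and is not reproduced, and `substitute`/`target` are placeholder models.

```python
import torch
import torch.nn.functional as F

def transfer_attack(substitute, target, x, y, eps=8 / 255):
    """Craft adversarial examples on the white-box substitute, then
    test whether they also fool the black-box target model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    # One gradient-sign step on the substitute (FGSM-style).
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        success = (target(x_adv).argmax(dim=1) != y).float().mean()
    return x_adv, success.item()  # transfer success rate on the target
```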

DaST: Data-free Substitute Training for Adversarial Attacks

2 code implementations • CVPR 2020 • Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu

In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without requiring any real data (a simplified training step is sketched below).

BIG-bench Machine Learning
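A rough caricature of the data-free idea: a generator synthesizes queries, the black-box target labels them, and the substitute learns to imitate those labels while the generator hunts for disagreements. This is a simplified, assumption-laden sketch, not DaST's actual multi-branch generator and label-controlled loss; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def data_free_substitute_step(generator, substitute, target,
                              opt_g, opt_s, batch=64, z_dim=100):
    """One simplified data-free substitute-training step: the
    substitute imitates the black-box target on synthetic inputs,
    while the generator seeks inputs where they still disagree."""
    z = torch.randn(batch, z_dim)
    x = generator(z)
    with torch.no_grad():
        t_labels = target(x).argmax(dim=1)   # black-box hard labels
    # Substitute: match the target's labels on synthetic data.
    s_loss = F.cross_entropy(substitute(x.detach()), t_labels)
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    # Generator: produce samples the substitute still gets wrong.
    g_loss = -F.cross_entropy(substitute(generator(z)), t_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return s_loss.item(), g_loss.item()
```

Once trained this way, the substitute serves as the white-box proxy for transfer attacks such as the one sketched under the Adversarial Imitation Attack entry above.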
