Search Results for author: Chenyi Zhang

Found 4 papers, 1 paper with code

A Mask-Based Adversarial Defense Scheme

no code implementations 21 Apr 2022 Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang

Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks.

Adversarial Attack Adversarial Defense +1
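To make the threat model concrete, here is a minimal sketch of the kind of subtle perturbation such attacks rely on: a one-step gradient-sign attack on a tiny logistic model. This is purely illustrative of the attack side, not the paper's MAD defense, and the weights and inputs are made up for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Probability of class 1 under a logistic model.
    return sigmoid(w @ x)

def fgsm_perturb(w, x, y, eps):
    """One-step sign attack: nudge x in the direction that increases
    the logistic loss. For this model, dL/dx = (p - y) * w."""
    p = predict(w, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # hypothetical trained weights
x = np.array([0.3, -0.2, 0.1])   # clean input
y = 1.0                          # true label

p_clean = predict(w, x)
x_adv = fgsm_perturb(w, x, y, eps=0.4)
p_adv = predict(w, x_adv)
print(p_clean > 0.5, p_adv > 0.5)  # prediction flips: True False
```

A defense scheme such as MAD would aim to keep the prediction stable under perturbations of this kind.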

Escape saddle points by a simple gradient-descent based algorithm

no code implementations NeurIPS 2021 Chenyi Zhang, Tongyang Li

Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$.
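The core idea behind this line of work can be sketched as perturbed gradient descent: run plain gradient descent, and when the gradient becomes small (a candidate saddle point), inject a small random perturbation so the iterate can slide off along a negative-curvature direction. The sketch below uses illustrative step sizes and a toy function with a saddle at the origin; it is not the paper's algorithm or its stated rates.

```python
import numpy as np

def grad(x):
    # f(x) = x[0]^2 - x[1]^2 + x[1]^4: saddle at the origin,
    # minima at (0, +-1/sqrt(2)) with f = -1/4.
    return np.array([2 * x[0], -2 * x[1] + 4 * x[1] ** 3])

def perturbed_gd(x0, eta=0.05, g_thresh=1e-3, radius=1e-2,
                 steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:
            # Near a first-order stationary point: perturb in a
            # small ball instead of stopping at the saddle.
            x = x + rng.uniform(-radius, radius, size=x.shape)
        else:
            x = x - eta * g
    return x

x = perturbed_gd([0.5, 0.0])   # starts exactly on the saddle axis
f = x[0] ** 2 - x[1] ** 2 + x[1] ** 4
print(x, f)
```

Without the perturbation step, the iterate started at `[0.5, 0.0]` would converge to the saddle at the origin, since the gradient in the second coordinate is identically zero along that axis.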

A Uniform Framework for Anomaly Detection in Deep Neural Networks

1 code implementation 6 Oct 2021 Fangzhen Zhao, Chenyi Zhang, Naipeng Dong, Zefeng You, Zhenxin Wu

Deep neural networks (DNN) can achieve high performance when applied to In-Distribution (ID) data which come from the same distribution as the training set.

Adversarial Attack Anomaly Detection

Quantum algorithms for escaping from saddle points

no code implementations20 Jul 2020 Chenyi Zhang, Jiaqi Leng, Tongyang Li

Compared to the classical state-of-the-art algorithm by Jin et al. with $\tilde{O}(\log^{6} (n)/\epsilon^{1.75})$ queries to the gradient oracle (i.e., the first-order oracle), our quantum algorithm is polynomially better in terms of $\log n$ and matches its complexity in terms of $1/\epsilon$.
