Search Results for author: Xiangyuan Yang

Found 6 papers, 1 paper with code

Improving the Transferability of Adversarial Examples via Direction Tuning

2 code implementations • 27 Mar 2023 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Although considerable effort has gone into improving the transferability of adversarial examples generated by transfer-based adversarial attacks, our investigation found that the large deviation between the actual and the steepest update directions of current transfer-based attacks is caused by the large update step length, which prevents the generated adversarial examples from converging well.

Network Pruning
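
The abstract above attributes poor convergence to the gap between the actual and steepest update directions under a large step length. A minimal PyTorch sketch of that general idea, not the authors' exact direction-tuning algorithm: split each large outer step into several small inner steps and average their gradients, so the outer update direction deviates less from the steepest direction. The function name and hyperparameters (`K`, `eps_step`) are illustrative.

```python
import torch

def direction_tuned_step(model, loss_fn, x_adv, y, eps_step, K=8):
    """One outer attack step: average the gradients of K small inner
    steps so the update direction tracks the steepest direction more
    closely than a single large step would (illustrative sketch)."""
    x_inner = x_adv.clone().detach()
    grad_sum = torch.zeros_like(x_adv)
    inner_step = eps_step / K  # small inner step length
    for _ in range(K):
        x_inner.requires_grad_(True)
        loss = loss_fn(model(x_inner), y)
        grad, = torch.autograd.grad(loss, x_inner)
        grad_sum += grad
        # take a small inner step before sampling the next gradient
        x_inner = (x_inner + inner_step * grad.sign()).detach()
    # the outer update uses the averaged (tuned) direction
    return (x_adv + eps_step * (grad_sum / K).sign()).detach()
```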

Fuzziness-tuned: Improving the Transferability of Adversarial Examples

no code implementations • 17 Mar 2023 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

In this paper, we first systematically investigate this issue and find that the large gap in attack success rates between the surrogate model and the victim model is caused by a special region (termed the fuzzy domain in our paper) in which adversarial examples are misclassified by the surrogate model but classified correctly by the victim model.
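
As a concrete reading of the fuzzy-domain definition above, the sketch below simply flags adversarial examples that fool the surrogate while the victim still classifies them correctly. This is a diagnostic check of the definition, not the paper's fuzziness-tuning method; the model handles are illustrative.

```python
import torch

@torch.no_grad()
def fuzzy_domain_mask(surrogate, victim, x_adv, y):
    """Flag adversarial examples in the 'fuzzy domain': misclassified
    by the surrogate model but correctly classified by the victim."""
    pred_s = surrogate(x_adv).argmax(dim=1)
    pred_v = victim(x_adv).argmax(dim=1)
    return (pred_s != y) & (pred_v == y)  # True = in the fuzzy domain
```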

A Multi-Stage Triple-Path Method for Speech Separation in Noisy and Reverberant Environments

no code implementations • 7 Mar 2023 • Zhaoxi Mu, Xinyu Yang, Xiangyuan Yang, Wenjing Zhu

In noisy and reverberant environments, the performance of deep learning-based speech separation methods drops dramatically because previous methods are neither designed nor optimized for such conditions.

Denoising • Speech Denoising +1

FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

To enhance the robustness of the classifier, our paper proposes a Feature Analysis and Conditional Matching prediction distribution (FACM) model that utilizes the features of intermediate layers to correct the classification.
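
One generic way to realize "using intermediate-layer features to correct the classification" is sketched below: tap intermediate features with forward hooks and average small auxiliary classifiers on them with the backbone's own prediction. The auxiliary-head design, layer names, and pooling are assumptions for illustration, not the authors' exact FACM architecture.

```python
import torch
import torch.nn as nn

class IntermediateFeatureCorrector(nn.Module):
    """Illustrative sketch: tap intermediate-layer features via forward
    hooks and combine small auxiliary heads on them with the backbone's
    own prediction (hypothetical design, not the paper's FACM model)."""
    def __init__(self, backbone, layer_names, feat_dims, num_classes):
        super().__init__()
        self.backbone = backbone
        self.layer_names = layer_names
        self.feats = {}
        modules = dict(backbone.named_modules())
        for name in layer_names:
            modules[name].register_forward_hook(self._save(name))
        # one small auxiliary classifier per tapped layer (assumed)
        self.heads = nn.ModuleList(
            nn.Linear(dim, num_classes) for dim in feat_dims
        )

    def _save(self, name):
        def hook(module, inputs, output):
            # global-average-pool any spatial dimensions
            self.feats[name] = (
                output.flatten(2).mean(-1) if output.dim() > 2 else output
            )
        return hook

    def forward(self, x):
        logits = self.backbone(x)
        aux = [
            head(self.feats[name])
            for name, head in zip(self.layer_names, self.heads)
        ]
        # average backbone and auxiliary predictions to correct the output
        return torch.stack([logits, *aux]).mean(dim=0)
```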

Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

The empirical and theoretical analysis demonstrates that the MDL loss simultaneously improves the robustness and generalization of the model under natural training.

Gradient Aligned Attacks via a Few Queries

no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Specifically, we propose a gradient aligned mechanism to ensure that the derivatives of the loss function with respect to the logit vector have the same weight coefficients between the surrogate and victim models.
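
The constraint above can be realized directly: if a weight vector `w` approximating the victim model's loss derivatives with respect to the logits is available (e.g., estimated with a few queries; assumed given here), then a surrogate loss of the form `w · z` has exactly those logit derivatives by construction. A minimal PyTorch sketch, with all names illustrative rather than the paper's exact attack:

```python
import torch

def gradient_aligned_grad(surrogate, x, w):
    """Compute an input gradient on the surrogate whose loss has logit
    derivatives equal to `w` (e.g., weight coefficients estimated from
    the victim model with a few queries). Illustrative sketch."""
    x = x.clone().detach().requires_grad_(True)
    z = surrogate(x)                 # surrogate logits; dL/dz == w below
    loss = (w.detach() * z).sum()    # linear in z, so its logit-gradient is w
    loss.backward()
    return x.grad.detach()
```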
