no code implementations • 24 Aug 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Binyuan Hui, Xiaodan Li, Rong Zhang, Hui Xue
Model Inversion (MI) attacks aim to recover the private training data from the target model, which has raised security concerns about the deployment of DNNs in practice.
no code implementations • 24 Jul 2023 • Gege Qi, Yuefeng Chen, Xiaofeng Mao, Xiaojun Jia, Ranjie Duan, Rong Zhang, Hui Xue
Developing a practically robust automatic speech recognition (ASR) model is challenging, since the model should not only maintain its original performance on clean samples, but also achieve consistent efficacy under small volume perturbations and large domain shifts.
Automatic Speech Recognition (ASR) +1
1 code implementation • 16 Sep 2022 • Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue
To borrow the advantages of NLP-style adversarial training (AT), we propose Discrete Adversarial Training (DAT).
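For context, adversarial training optimizes a min-max objective: an inner step finds the worst-case perturbation within a budget, and an outer step updates the model on it. The sketch below is a minimal toy illustration of that generic loop on a 1-D linear model; it does not reproduce DAT's discrete (NLP-style) perturbation space, and all function names are hypothetical.

```python
# Toy sketch of the adversarial training min-max loop (not DAT itself):
# inner maximization searches a perturbation budget, outer minimization
# updates the model weight on the worst-case input found.
def loss(w, x, y):
    # squared error of a 1-D linear predictor
    return (w * x - y) ** 2

def worst_case_x(w, x, y, eps, steps=20):
    # inner maximization: grid-search perturbations in [-eps, +eps]
    candidates = [x + eps * (2 * i / steps - 1) for i in range(steps + 1)]
    return max(candidates, key=lambda xc: loss(w, xc, y))

def adversarial_train(data, eps, lr=0.01, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = worst_case_x(w, x, y, eps)
            grad = 2 * (w * x_adv - y) * x_adv  # d(loss)/dw at the adversarial point
            w -= lr * grad
    return w
```

The grid search stands in for the gradient-based inner step used in practice; DAT's contribution is to perform this inner step in a discrete symbol space rather than in continuous pixel space.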
Ranked #1 on Domain Generalization on Stylized-ImageNet
2 code implementations • CVPR 2022 • Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, Hui Xue
By using and combining robust components as building blocks of ViTs, we propose Robust Vision Transformer (RVT), which is a new vision transformer and has superior performance with strong robustness.
Ranked #24 on Domain Generalization on ImageNet-C
1 code implementation • 9 Mar 2021 • Gege Qi, Lijun Gong, Yibing Song, Kai Ma, Yefeng Zheng
However, a threat to these systems arises: adversarial attacks can render CNNs vulnerable.
no code implementations • ICLR 2021 • Gege Qi, Lijun Gong, Yibing Song, Kai Ma, Yefeng Zheng
We further analyze the KL-divergence of the proposed loss function and find that the loss stabilization term drives the perturbation updates toward a fixed objective spot while deviating from the ground truth.