Search Results for author: Yaguan Qian

Found 14 papers, 3 papers with code

F$^2$AT: Feature-Focusing Adversarial Training via Disentanglement of Natural and Perturbed Patterns

no code implementations • 23 Oct 2023 • Yaguan Qian, Chenyu Zhao, Zhaoquan Gu, Bin Wang, Shouling Ji, Wei Wang, Boyang Zhou, Pan Zhou

We propose Feature-Focusing Adversarial Training (F$^2$AT), which differs from previous work in that it forces the model to focus on the core features from natural patterns and reduces the impact of spurious features from perturbed patterns.

Adversarial Robustness Disentanglement +2

Towards the Desirable Decision Boundary by Moderate-Margin Adversarial Training

no code implementations • 16 Jul 2022 • Xiaoyu Liang, Yaguan Qian, Jianchang Huang, Xiang Ling, Bin Wang, Chunming Wu, Wassim Swaileh

Adversarial training, as one of the most effective defense methods against adversarial attacks, tends to learn an inclusive decision boundary to increase the robustness of deep learning models.

Robust Network Architecture Search via Feature Distortion Restraining

1 code implementation • ECCV 2022 • Yaguan Qian, Shenghui Huang, Bin Wang, Xiang Ling, Xiaohui Guan, Zhaoquan Gu, Shaoning Zeng, WuJie Zhou, Haijiang Wang

This process is modeled as a multi-objective bilevel optimization problem and a novel algorithm is proposed to solve this optimization.

Bilevel Optimization

Hessian-Free Second-Order Adversarial Examples for Adversarial Learning

no code implementations • 4 Jul 2022 • Yaguan Qian, Yuqi Wang, Bin Wang, Zhaoquan Gu, Yuhan Guo, Wassim Swaileh

Extensive experiments conducted on the MNIST and CIFAR-10 datasets show that our adversarial learning with second-order adversarial examples outperforms other first-order methods and improves model robustness against a wide range of attacks.
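For context on the "Hessian-free" idea this entry names: such methods avoid forming the Hessian explicitly and instead compute Hessian-vector products. The following is a generic finite-difference sketch of that trick (illustrative only; not the paper's algorithm, and the quadratic test function is an assumption for demonstration):

```python
def grad(f, x, h=1e-5):
    # Central-difference gradient of f at x.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hvp(f, x, v, r=1e-4):
    # Hessian-vector product H(x) @ v without forming H:
    # H v ≈ (∇f(x + r v) − ∇f(x − r v)) / (2 r)
    xp = [xi + r * vi for xi, vi in zip(x, v)]
    xm = [xi - r * vi for xi, vi in zip(x, v)]
    gp, gm = grad(f, xp), grad(f, xm)
    return [(a - b) / (2 * r) for a, b in zip(gp, gm)]

# Quadratic f(x) = x0^2 + 3*x1^2 has Hessian diag(2, 6),
# so H @ [1, 0] should be ≈ [2, 0].
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
print(hvp(f, [1.0, 1.0], [1.0, 0.0]))  # ≈ [2.0, 0.0]
```

Computing `hvp` costs only two gradient evaluations, which is what makes second-order information affordable in high-dimensional settings.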

Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art

1 code implementation • 23 Dec 2021 • Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang Chen, Yaguan Qian, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu

Then, we conduct a comprehensive and systematic review to categorize the state-of-the-art adversarial attacks against PE malware detection, as well as corresponding defenses to increase the robustness of Windows PE malware detection.

Adversarial Attack Malware Detection +2

Edge-aware Guidance Fusion Network for RGB Thermal Scene Parsing

1 code implementation • 9 Dec 2021 • WuJie Zhou, Shaohua Dong, Caie Xu, Yaguan Qian

Considering the importance of high-level semantic information, we propose a global information module and a semantic information module to extract rich semantic information from the high-level features.

Scene Parsing Thermal Image Segmentation

RNAS: Robust Network Architecture Search beyond DARTS

no code implementations • 29 Sep 2021 • Yaguan Qian, Shenghui Huang, Yuqi Wang, Simin Li

The vulnerability of Deep Neural Networks (DNNs) (i.e., susceptibility to adversarial attacks) severely limits the application of DNNs.

Versailles-FP dataset: Wall Detection in Ancient Floor Plans

no code implementations • 14 Mar 2021 • Wassim Swaileh, Dimitrios Kotzinos, Suman Ghosh, Michel Jordan, Son Vu, Yaguan Qian

Since the first step in building a 3D model of a building or monument is wall detection in its floor plan, we introduce in this paper the new and unique Versailles-FP dataset of ground-truthed wall images of the Versailles Palace, dating from the 17th and 18th centuries.

Person Re-identification based on Robust Features in Open-world

no code implementations • 22 Feb 2021 • Yaguan Qian, Anlin Sun

At the same time, to verify the effectiveness of our method, we provide a miniature dataset that is closer to the real world, covering pedestrians changing clothes and the fusion of cross-modality factors.

Dynamic Time Warping feature selection +2

Towards Speeding up Adversarial Training in Latent Spaces

no code implementations • 1 Feb 2021 • Yaguan Qian, Qiqi Shao, Tengteng Yao, Bin Wang, Shouling Ji, Shaoning Zeng, Zhaoquan Gu, Wassim Swaileh

Adversarial training is widely considered one of the most effective ways to defend against adversarial examples.
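As background for this line of work, adversarial training typically builds each training step around an attack such as FGSM. A minimal, self-contained sketch of FGSM-style example generation (plain Python on a toy logistic model; the weights, inputs, and step size are illustrative assumptions, not from the paper):

```python
import math

# Toy logistic model: p(y=1|x) = sigmoid(w·x + b)
w, b = [2.0, -1.0], 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(x, y):
    # Binary cross-entropy for a single example, y in {0, 1}.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(x, y, eps=0.1):
    # For this model the input gradient of the loss is (p - y) * w;
    # FGSM moves each coordinate by eps in the sign of that gradient.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad_x = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad_x)]

x, y = [0.5, 0.3], 1
x_adv = fgsm(x, y)
print(loss(x_adv, y) > loss(x, y))  # → True: the perturbation raises the loss
```

In adversarial training, `x_adv` would then replace (or augment) `x` in the gradient update, which is also why speeding up the inner attack, e.g. by working in latent spaces as this paper's title suggests, matters for overall training cost.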

An Adversarial Attack via Feature Contributive Regions

no code implementations • 1 Jan 2021 • Yaguan Qian, Jiamin Wang, Xiang Ling, Zhaoquan Gu, Bin Wang, Chunming Wu

Recently, many advanced algorithms have been proposed to address the vulnerability of CNNs to adversarial examples.

Adversarial Attack

Visually Imperceptible Adversarial Patch Attacks on Digital Images

no code implementations • 2 Dec 2020 • Yaguan Qian, Jiamin Wang, Bin Wang, Shaoning Zeng, Zhaoquan Gu, Shouling Ji, Wassim Swaileh

With this soft mask, we develop a new loss function with inverse temperature to search for optimal perturbations in CFR.
