Search Results for author: Shunchang Liu

Found 4 papers, 2 papers with code

Boosting Cross-task Transferability of Adversarial Patches with Visual Relations

no code implementations • 11 Apr 2023 • Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu

The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios.
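The black-box setting above can be illustrated with a minimal numpy sketch (hand-picked toy linear classifiers, not the paper's method): an FGSM-style perturbation is crafted against a white-box surrogate model and then tested against a separate target model it was never computed on.

```python
import numpy as np

# Toy binary "models": two similar but independently chosen linear
# classifiers (hand-picked stand-in weights, not trained networks).
w_surrogate = np.array([1.0, -0.5, 0.8])
w_target = np.array([0.9, -0.4, 1.1])

def predict(w, x):
    return 1 if w @ x > 0 else 0

x = np.array([0.2, -0.3, 0.1])  # clean input, class 1 under both models
assert predict(w_surrogate, x) == 1 and predict(w_target, x) == 1

# FGSM-style perturbation crafted ONLY on the surrogate (white-box),
# then transferred to the target (black-box).
eps = 0.5
x_adv = x - eps * np.sign(w_surrogate)  # push against the class-1 direction

print("surrogate fooled:", predict(w_surrogate, x_adv) == 0)
print("transfers to target:", predict(w_target, x_adv) == 0)
```

Because the two decision boundaries are correlated, the perturbation that flips the surrogate's prediction also flips the target's; cross-task transfer (the paper's focus) extends this idea to models trained for different tasks.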

Tasks: Image Captioning, Object Recognition, +3

Benchmarking the Robustness of Quantized Models

no code implementations • 8 Apr 2023 • Yisong Xiao, Tianyuan Zhang, Shunchang Liu, Haotong Qin

To address this gap, we thoroughly evaluated the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
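A rough numpy sketch of this style of evaluation, using a synthetic linear classifier rather than an ImageNet model (all data, the `quantize` helper, and the noise scale are illustrative assumptions): quantize the weights, then compare accuracy on clean inputs against inputs with additive Gaussian corruption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a trained model: a linear classifier whose
# labels are defined by its own full-precision weights.
n, d = 2000, 20
w_full = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_full > 0).astype(int)

def quantize(w, bits=4):
    # Symmetric uniform quantization to 2**bits levels (toy scheme).
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

w_q = quantize(w_full, bits=4)

# Robustness probe: accuracy drop under additive Gaussian corruption,
# mimicking the "natural corruption" axis of the benchmark.
X_noisy = X + rng.normal(scale=0.5, size=X.shape)
print("quantized, clean:", accuracy(w_q, X, y))
print("quantized, noisy:", accuracy(w_q, X_noisy, y))
```

The gap between the two accuracies is the robustness measurement; the paper runs this comparison at scale over adversarial attacks, natural corruptions, and systematic noises.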

Tasks: Benchmarking, Quantization

Harnessing Perceptual Adversarial Patches for Crowd Counting

1 code implementation • 16 Sep 2021 • Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao, Xianglong Liu, Dacheng Tao

Crowd counting, which has been widely adopted for estimating the number of people in safety-critical scenes, is shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches).
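The patch threat model can be sketched in a few lines of numpy (the `apply_patch` helper and random pixel values are illustrative placeholders; in the paper the patch contents are optimized to distort the crowd-count prediction): a small region of the input image is simply overwritten with patch pixels.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    # Overwrite a rectangular region of the image with the patch,
    # as a digital proxy for placing a printed patch in the scene.
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))   # stand-in for a crowd image
patch = rng.random((16, 16, 3))   # stand-in adversarial patch
patched = apply_patch(image, patch, top=8, left=8)

print(patched.shape)                            # image size unchanged
print(bool(np.allclose(patched[8:24, 8:24], patch)))  # region replaced
```

Everything outside the patched rectangle is untouched, which is what makes patch attacks physically realizable: the attacker controls only a small, localized region of the scene.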

Tasks: Crowd Counting
