1 code implementation • 28 Aug 2022 • Yinghua Zhang, Yangqiu Song, Kun Bai, Qiang Yang
To successfully attack fine-tuned models under both settings, we propose to first train an adversarial generator against the source model; the generator adopts an encoder-decoder architecture and maps a clean input to an adversarial example.
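A minimal sketch of such an encoder-decoder generator, assuming illustrative dimensions, random weights, and an L-infinity perturbation budget `eps` (none of these names come from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and budget; purely for illustration.
d_in, d_hid, eps = 8, 4, 0.1

# Encoder-decoder "generator": encode x to a latent code, decode a perturbation.
W_enc = rng.normal(scale=0.1, size=(d_hid, d_in))
W_dec = rng.normal(scale=0.1, size=(d_in, d_hid))

def generate_adversarial(x, eps=eps):
    """Map a clean input x to an adversarial example x + bounded perturbation."""
    z = np.tanh(W_enc @ x)        # encoder: latent code
    delta = np.tanh(W_dec @ z)    # decoder: perturbation direction in (-1, 1)
    return x + eps * delta        # perturbation stays within the eps budget

x_clean = rng.normal(size=d_in)
x_adv = generate_adversarial(x_clean)
```

In a real attack the generator's weights would be trained to maximize the source model's loss on `generate_adversarial(x)`; the sketch only shows the clean-to-adversarial mapping and the budget constraint.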
1 code implementation • 25 Aug 2020 • Yinghua Zhang, Yangqiu Song, Jian Liang, Kun Bai, Qiang Yang
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric that evaluates how transferable the adversarial examples produced by a source model are to a target model.
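One plausible reading of such a metric (an illustrative sketch, not the paper's exact definition): among adversarial examples that fool the source model, count the fraction that also fool the target model.

```python
import numpy as np

def transfer_rate(src_fooled, tgt_fooled):
    """Fraction of source-fooling adversarial examples that also fool the
    target model. Hypothetical definition for illustration only."""
    src_fooled = np.asarray(src_fooled, dtype=bool)
    tgt_fooled = np.asarray(tgt_fooled, dtype=bool)
    if not src_fooled.any():
        return 0.0
    return float((src_fooled & tgt_fooled).sum() / src_fooled.sum())

# Toy success flags for six adversarial examples.
src = [True, True, True, False, True, False]
tgt = [True, False, True, False, True, True]
rate = transfer_rate(src, tgt)  # 3 of the 4 source-fooling examples transfer
```

A rate near 1 would indicate highly transferable adversarial examples (a strong black-box threat); a rate near 0 would suggest attacks that do not carry over to the target model.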
1 code implementation • 12 Mar 2020 • Yinghua Zhang, Yu Zhang, Ying Wei, Kun Bai, Yangqiu Song, Qiang Yang
Although the learned representations are separable in the source domain, they usually have a large variance, and samples with different class labels tend to overlap in the target domain, which yields suboptimal adaptation performance.
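The dispersion described above can be made concrete with a simple diagnostic: average within-class variance of the features. The sketch below is an assumption-laden illustration (toy 2-D features, synthetic clusters), not the paper's actual measure.

```python
import numpy as np

rng = np.random.default_rng(1)

def within_class_variance(features, labels):
    """Mean squared deviation of features from their class means, averaged
    over classes; large values signal dispersed, overlapping clusters."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    total = 0.0
    for c in classes:
        cls = features[labels == c]
        total += np.mean((cls - cls.mean(axis=0)) ** 2)
    return total / len(classes)

# Tight "source-like" clusters vs. dispersed "target-like" clusters
# around the same class means.
means = np.array([[0.0, 0.0], [3.0, 3.0]])
y = np.repeat([0, 1], 50)
src_feats = means[y] + 0.1 * rng.normal(size=(100, 2))
tgt_feats = means[y] + 1.5 * rng.normal(size=(100, 2))
src_var = within_class_variance(src_feats, y)
tgt_var = within_class_variance(tgt_feats, y)
```

Here the target-like features have far larger within-class variance, so samples from different classes overlap; reducing this variance is what the adaptation method aims at.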
no code implementations • 23 Apr 2018 • Yinghua Zhang, Yu Zhang, Qiang Yang
Unfortunately, transferability is usually defined in terms of discrete states, and it varies across domains and network architectures.