Search Results for author: Linxi Jiang

Found 4 papers, 2 papers with code

Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness

no code implementations • 28 Sep 2020 • Linxi Jiang, Xingjun Ma, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness

1 code implementation • 24 Jun 2020 • Xingjun Ma, Linxi Jiang, Hanxun Huang, Zejia Weng, James Bailey, Yu-Gang Jiang

Evaluating the robustness of a defense model is a challenging task in adversarial robustness research.

Adversarial Robustness

Heuristic Black-box Adversarial Attacks on Video Recognition Models

1 code implementation • 21 Nov 2019 • Zhipeng Wei, Jingjing Chen, Xingxing Wei, Linxi Jiang, Tat-Seng Chua, Fengfeng Zhou, Yu-Gang Jiang

To overcome this challenge, we propose a heuristic black-box attack model that generates adversarial perturbations only on the selected frames and regions.

Adversarial Attack • Video Recognition
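A minimal sketch of the idea mentioned in the abstract snippet above: restricting an adversarial perturbation to selected frames and a selected region of a video clip. This is only an illustration of the frame/region masking concept, not the paper's attack algorithm; the tensor layout, function name, and parameters are assumptions for the example.

```python
# Illustrative only: NOT the paper's heuristic black-box attack.
# Shows how a perturbation can be applied only on chosen frames and a
# rectangular region of a (T, H, W, C) video tensor with values in [0, 1].
import numpy as np

def mask_perturbation(video, perturbation, frame_ids, region):
    """Add `perturbation` to `video` only on `frame_ids` inside `region`.

    video:        array of shape (T, H, W, C)
    perturbation: array of the same shape as `video`
    frame_ids:    list of frame indices to perturb
    region:       (top, left, height, width) rectangle to perturb
    """
    mask = np.zeros_like(video)
    top, left, h, w = region
    mask[frame_ids, top:top + h, left:left + w, :] = 1.0  # 1 where perturbation is allowed
    return np.clip(video + mask * perturbation, 0.0, 1.0)

# Example: perturb frames 0 and 5 inside a 32x32 patch of a 16-frame clip.
clip = np.random.rand(16, 112, 112, 3)
noise = 0.03 * np.sign(np.random.randn(*clip.shape))
adv_clip = mask_perturbation(clip, noise, frame_ids=[0, 5], region=(40, 40, 32, 32))
```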

Black-box Adversarial Attacks on Video Recognition Models

no code implementations • 10 Apr 2019 • Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, Yu-Gang Jiang

Using three benchmark video datasets, we demonstrate that V-BAD can craft both untargeted and targeted attacks to fool two state-of-the-art deep video recognition models.

Video Recognition
