Search Results for author: Xixiang Lv

Found 2 papers, 0 papers with code

Challenging the adversarial robustness of DNNs based on error-correcting output codes

no code implementations · 26 Mar 2020 · Bowen Zhang, Benedetta Tondi, Xixiang Lv, Mauro Barni

The existence of adversarial examples and the ease with which they can be generated raise several security concerns with regard to deep learning systems, pushing researchers to develop suitable defense mechanisms.

Adversarial Attack · Adversarial Robustness · +2
