Search Results for author: Meiyan Xie

Found 5 papers, 4 papers with code

Defending against black-box adversarial attacks with gradient-free trained sign activation neural networks

1 code implementation • 1 Jan 2021 • Yunzhe Xue, Meiyan Xie, Zhibo Yang, Usman Roshan

The non-transferability within our ensemble also makes it a powerful defense against substitute-model black-box attacks, which we show require a much greater distortion to bring our model to zero adversarial accuracy than they need against binary and full-precision networks.

Adversarial Defense
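
Since a sign activation outputs only ±1, its gradient is zero almost everywhere, which is what forces the gradient-free training the title refers to. A minimal NumPy sketch of such a forward pass (all names here are illustrative assumptions, not the authors' code):

import numpy as np

def sign_activation_forward(x, W1, b1, w2, b2):
    # Hidden units output -1 or +1, so this layer's gradient is zero
    # almost everywhere -- standard backpropagation cannot train it,
    # hence the gradient-free training described in the paper.
    h = np.sign(W1 @ x + b1)      # sign activations
    return np.sign(w2 @ h + b2)   # binary prediction in {-1, +1}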

Towards adversarial robustness with 01 loss neural networks

1 code implementation • 20 Aug 2020 • Yunzhe Xue, Meiyan Xie, Usman Roshan

To further validate these results, we subject all models to substitute-model black-box attacks under different distortion thresholds and find that the 01 loss network is the hardest to attack across all distortions.

Adversarial Robustness • Binary Classification
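
For context, the 01 loss named in the title is simply the misclassification count, which is non-convex and non-differentiable, unlike the convex surrogates that standard networks minimize. A minimal sketch of the contrast (illustrative only, not taken from the paper's implementation):

import numpy as np

def zero_one_loss(y, scores):
    # y in {-1, +1}; a point counts as a loss iff its score has the wrong sign
    return np.mean(y * scores <= 0)

def hinge_loss(y, scores):
    # convex surrogate: penalizes margin violations proportionally
    return np.mean(np.maximum(0.0, 1.0 - y * scores))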

On the transferability of adversarial examples between convex and 01 loss models

1 code implementation • 14 Jun 2020 • Yunzhe Xue, Meiyan Xie, Usman Roshan

Indeed, we see on MNIST that adversarial examples transfer between 01 loss and convex models more easily than on CIFAR10 and ImageNet, which are likely to contain outliers.
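
A transferability experiment of the kind this abstract describes can be sketched as follows; source_attack and target_predict are hypothetical stand-ins, not functions from the paper's code:

import numpy as np

def transfer_rate(source_attack, target_predict, X, y):
    # source_attack: crafts adversarial inputs against the source model
    # target_predict: the target model's label predictions
    X_adv = source_attack(X, y)
    return np.mean(target_predict(X_adv) != y)  # fraction that transfers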

Robust binary classification with the 01 loss

1 code implementation • 9 Feb 2020 • Yunzhe Xue, Meiyan Xie, Usman Roshan

We show that our algorithms are fast and comparable in accuracy to the linear support vector machine and a logistic-loss single-hidden-layer network for binary classification on several image benchmarks, establishing that our method is on par in test accuracy with convex losses.

Binary Classification • Classification • +1
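
Because the 01 loss has no useful gradient, minimizing it requires search-based methods. The toy random local search below illustrates the general idea only; it is not the paper's algorithm:

import numpy as np

def fit_01_linear(X, y, iters=1000, step=0.1, seed=0):
    # y in {-1, +1}; w is a linear classifier trained by random local search
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    best = np.mean(np.sign(X @ w) != y)              # current 01 loss
    for _ in range(iters):
        cand = w + step * rng.standard_normal(w.shape)  # random perturbation
        loss = np.mean(np.sign(X @ cand) != y)
        if loss <= best:                              # keep non-worsening moves
            w, best = cand, loss
    return w, best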
