Search Results for author: Yifei Fan

Found 4 papers, 2 papers with code

Verifying the Causes of Adversarial Examples

no code implementations • 19 Oct 2020 • Honglin Li, Yifei Fan, Frieder Ganz, Anthony Yezzi, Payam Barnaghi

The robustness of neural networks is challenged by adversarial examples, which contain almost imperceptible perturbations to inputs that mislead a classifier into incorrect outputs with high confidence.

Density Estimation
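As an aside, the "imperceptible perturbations" described in the abstract above are commonly illustrated with a fast-gradient-sign-style attack. The sketch below is purely illustrative (a random toy linear classifier, hypothetical sizes and budget `eps`), not the method studied in the paper: a bounded step along the sign of the loss gradient provably reduces the true class's margin.

```python
import numpy as np

# Illustrative sketch only: a random 2-class linear classifier stands in for a
# trained network; all sizes and the budget eps are assumptions, not the paper's.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 8))   # 2 classes, 8-dimensional inputs
x = rng.standard_normal(8)

def predict(v):
    return int(np.argmax(W @ v))

true_label = predict(x)
other = 1 - true_label

def margin(v):
    # Score gap between the predicted (true) class and the other class.
    s = W @ v
    return s[true_label] - s[other]

# Gradient of the other class's advantage w.r.t. the input; moving along its
# sign is the fast-gradient-sign step.
grad = W[other] - W[true_label]
eps = 0.5                          # L-infinity perturbation budget (assumed)
x_adv = x + eps * np.sign(grad)

# Each coordinate moves by at most eps, yet the true-class margin shrinks.
print(margin(x), margin(x_adv))
```

With a linear model the margin drop is exactly `eps * sum(|grad|)`, which is why even small per-coordinate perturbations can flip a prediction when the input dimension is large.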

An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense

1 code implementation • 26 Nov 2019 • Chao Tang, Yifei Fan, Anthony Yezzi

The safety and robustness of learning-based decision-making systems are under threat from adversarial examples, as imperceptible perturbations can mislead neural networks into completely different outputs.

Adversarial Robustness • Decision Making
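A test-time smoothing defense, in the spirit of the title above, can be sketched as averaging a fixed classifier's predictions over noise-perturbed copies of the input. The details here (Gaussian noise, noise level `sigma`, sample count `n`, and the toy softmax classifier) are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen classifier: softmax over a random linear model, standing
# in for any trained network whose weights are not modified at test time.
W = rng.standard_normal((3, 4))   # 3 classes, 4-dimensional inputs (assumed)

def probs(x):
    z = W @ x
    e = np.exp(z - z.max())       # numerically stable softmax
    return e / e.sum()

def smoothed_predict(x, sigma=0.1, n=200):
    # Test-time smoothing sketch: average predicted probabilities over n
    # Gaussian-perturbed copies of the input, then take the argmax.
    noisy = x + sigma * rng.standard_normal((n, x.shape[0]))
    return int(np.argmax(np.mean([probs(v) for v in noisy], axis=0)))

x = rng.standard_normal(4)
print(smoothed_predict(x))
```

Because the perturbation happens only at inference, such a defense needs no retraining; the averaging tends to wash out small adversarial perturbations near the decision boundary.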

Towards an Understanding of Neural Networks in Natural-Image Spaces

1 code implementation • 27 Jan 2018 • Yifei Fan, Anthony Yezzi

Two major sources of uncertainty, dataset bias and adversarial examples, persist in state-of-the-art AI algorithms built on deep neural networks.

Philosophy
