Robust Deep Neural Networks Inspired by Fuzzy Logic

20 Nov 2019 · Minh Le

Deep neural networks have achieved impressive performance and become the de-facto standard in many tasks. However, troubling phenomena such as adversarial and fooling examples suggest that the way they generalize is flawed. I argue that among the roots of these phenomena are two geometric properties of common deep learning architectures: their distributed nature and the connectedness of their decision regions. As a remedy, I propose new architectures inspired by fuzzy logic that combine several alternative design elements. Through experiments on MNIST and CIFAR-10, the new models are shown to be more local, better at rejecting noise samples, and more robust against adversarial examples. Ablation analyses reveal behaviors on adversarial examples that cannot be explained by the linearity hypothesis but are consistent with the hypothesis that logic-inspired traits create more robust models.
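The abstract does not specify the paper's exact architectures, but the fuzzy-logic building blocks such designs commonly draw on can be sketched. The snippet below is an illustrative example, not the author's method: a product t-norm as a differentiable AND, a probabilistic t-conorm as OR, and a Gaussian membership function whose bounded, localized response (near 1 close to a prototype, decaying to 0 far away) is one way to obtain the "locality" and noise-rejection behavior the abstract describes. All function names and parameters here are hypothetical.

```python
import numpy as np

def fuzzy_and(memberships):
    # Product t-norm: conjunction of fuzzy truth values in [0, 1].
    # Output is high only if ALL inputs are high.
    return np.prod(memberships, axis=-1)

def fuzzy_or(memberships):
    # Probabilistic sum t-conorm: disjunction of fuzzy truth values.
    # Output is high if ANY input is high.
    return 1.0 - np.prod(1.0 - memberships, axis=-1)

def gaussian_membership(x, center, width):
    # Localized membership: near 1 when x is close to `center`,
    # decaying toward 0 far away. Unlike an unbounded linear unit,
    # an input unlike anything seen in training activates nothing,
    # which lets the model reject noise samples.
    sq_dist = np.sum((x - center) ** 2, axis=-1)
    return np.exp(-sq_dist / (2.0 * width ** 2))
```

For instance, `fuzzy_and(np.array([0.9, 0.8]))` gives 0.72, and a point ten widths away from `center` yields a membership that is effectively zero, so a classifier built from such units produces low confidence everywhere outside the training distribution rather than extrapolating a decision linearly.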
