Modeling and Eliminating Adversarial Examples using Function Theory of Several Complex Variables

29 Sep 2021 · Ramin Barati, Reza Safabakhsh, Mohammad Rahmati

The reliability of a learning model is key to the successful deployment of machine learning in various industries. Training a robust model that is unaffected by adversarial attacks requires a comprehensive understanding of the phenomenon of adversarial examples. This paper presents a model of, and a solution to, the existence and transfer of adversarial examples in analytic hypotheses. Grounded in the function theory of several complex variables, we propose the class of complex-valued holomorphic hypotheses as a natural way to represent the submanifold of the samples and the decision boundary simultaneously. To describe the mechanism by which adversarial examples occur and transfer, we specialize the definitions of the optimal Bayes and maximum-margin classifiers to this class of hypotheses. The approach is first validated on both synthetic and real-world classification problems using polynomials. Backed by theoretical and experimental results, we believe the analysis extends to other classes of analytic hypotheses, such as neural networks.
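To make the idea of a complex-valued holomorphic hypothesis concrete, here is a minimal sketch that fits a low-degree holomorphic polynomial p(z) in one complex variable to a toy two-class problem and classifies by the sign of Re(p(z)), so that the zero level set of Re(p) plays the role of the decision boundary. The encoding of two real features as z = x1 + i*x2, the least-squares fit, and the sign-of-real-part decision rule are illustrative assumptions for this sketch, not the construction used in the paper.

```python
# Illustrative sketch (assumptions, not the paper's method): a degree-d
# holomorphic polynomial p(z) fit by least squares so that Re(p(z)) is
# positive on one class and negative on the other; the zero level set of
# Re(p) then acts as the decision boundary.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian blobs, encoded as complex numbers z = x1 + i*x2.
n = 200
x_pos = rng.normal(loc=[1.5, 0.0], scale=0.5, size=(n, 2))
x_neg = rng.normal(loc=[-1.5, 0.0], scale=0.5, size=(n, 2))
X = np.vstack([x_pos, x_neg])
y = np.concatenate([np.ones(n), -np.ones(n)])   # labels in {+1, -1}
z = X[:, 0] + 1j * X[:, 1]

# Design matrix of holomorphic monomials 1, z, z^2, ..., z^degree.
degree = 3
Phi = np.stack([z**k for k in range(degree + 1)], axis=1)

# Solve for complex coefficients c = a + i*b so that Re(Phi @ c) ~ y.
# Re(Phi @ c) = Phi.real @ a - Phi.imag @ b, which is a real least-squares problem.
A = np.hstack([Phi.real, -Phi.imag])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = np.split(coef, 2)
c = a + 1j * b

def predict(points):
    """Classify 2-D points by the sign of Re(p(z)), with z = x1 + i*x2."""
    zz = points[:, 0] + 1j * points[:, 1]
    p = sum(c[k] * zz**k for k in range(degree + 1))
    return np.sign(p.real)

print(f"training accuracy: {np.mean(predict(X) == y):.3f}")
```

The same scheme accepts other analytic families by swapping out the design matrix; the polynomial case here simply mirrors the hypotheses used in the paper's initial experiments.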
