Lipschitz Certificates for Neural Network Structures Driven by Averaged Activation Operators

3 Mar 2019 · Patrick L. Combettes, Jean-Christophe Pesquet

Deriving sharp Lipschitz constants for feed-forward neural networks is essential to assess their robustness in the face of adversarial inputs. To derive such constants, we propose a model in which the activation operators are nonexpansive averaged operators, an assumption which is shown to cover most practical instances. By exploiting the averagedness of each activation operator, our analysis finely captures the interactions between the layers, yielding tighter Lipschitz constants than those resulting from the product of individual bounds for groups of layers. These constants are further improved in the case of separable structures. The proposed framework draws on tools from nonlinear operator theory, convex analysis, and monotone operator theory.
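
For context, an operator T is α-averaged, for α in (0,1), if T = (1 − α)Id + αR with R nonexpansive; ReLU, for instance, is 1/2-averaged, since ReLU = (Id + |·|)/2 and the componentwise absolute value is nonexpansive. The baseline certificate that such an analysis improves upon is the product of the individual layers' Lipschitz constants. Below is a minimal NumPy sketch of that baseline for a network of affine layers with 1-Lipschitz activations; the function name and weights are illustrative and not taken from the paper:

```python
import numpy as np

def product_bound(weights):
    """Baseline Lipschitz certificate: product of layer spectral norms.

    Valid for networks alternating affine maps x -> W x + b with
    nonexpansive (1-Lipschitz) activations such as ReLU.
    """
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(0)

# Numerical sanity check that R = |.| (the nonexpansive part of ReLU,
# via ReLU = (Id + R)/2) satisfies ||R(x) - R(y)|| <= ||x - y||.
x, y = rng.standard_normal(5), rng.standard_normal(5)
assert np.linalg.norm(np.abs(x) - np.abs(y)) <= np.linalg.norm(x - y)

# Hypothetical 3-layer weights, for illustration only.
weights = [rng.standard_normal((16, 10)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((1, 16))]
print(product_bound(weights))
```

The paper's contribution is precisely that this product over layers is loose: by tracking the averagedness constants α of the activations across compositions, one can certify a strictly smaller Lipschitz constant for the same network.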


Categories

Optimization and Control