Deep Tabular Learning

Self-Normalizing Neural Networks

Introduced by Klambauer et al. in Self-Normalizing Neural Networks

Self-normalizing neural networks (SNNs) are a type of neural architecture that aims to enable high-level abstract representations. Whereas batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs is the “scaled exponential linear unit” (SELU), which induces self-normalizing properties. Using the Banach fixed-point theorem, it can be proven that activations close to zero mean and unit variance, when propagated through many network layers, will converge towards zero mean and unit variance, even in the presence of noise and perturbations. This convergence property of SNNs makes it possible to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) make learning highly robust.

Source: Self-Normalizing Neural Networks
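Below is a minimal NumPy sketch of the SELU activation and the self-normalizing behavior described above. The constants are the fixed-point values given in the paper; the batch size, layer width, and depth are arbitrary choices for illustration.

```python
# SELU activation and a depth experiment showing that activations stay
# close to zero mean and unit variance across many layers.
import numpy as np

LAMBDA = 1.0507009873554805  # SELU scale constant from the paper
ALPHA = 1.6732632423543772   # SELU alpha constant from the paper

def selu(x):
    """Scaled exponential linear unit."""
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n_features, depth = 256, 32

# Inputs with zero mean and unit variance.
x = rng.standard_normal((1024, n_features))

for _ in range(depth):
    # LeCun-normal weights (std = 1/sqrt(fan_in)), as the paper prescribes.
    w = rng.standard_normal((n_features, n_features)) / np.sqrt(n_features)
    x = selu(x @ w)

# Mean remains near 0 and standard deviation near 1 even after many layers.
print(f"after {depth} layers: mean={x.mean():.3f}, std={x.std():.3f}")
```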

Components


Component    Type
SELU         Activation Functions
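
SELU is the architectural component that gives an SNN its self-normalizing property. As a usage sketch (assuming PyTorch; the layer sizes and dropout rate are illustrative assumptions, not values from the source), SELU is typically paired with alpha dropout, the regularizer proposed for SNNs, and LeCun-normal weight initialization:

```python
# A hypothetical PyTorch sketch of a small SNN: linear layers with SELU
# activations and alpha dropout. Sizes and dropout rate are illustrative.
import torch
from torch import nn

snn = nn.Sequential(
    nn.Linear(64, 128),
    nn.SELU(),
    nn.AlphaDropout(p=0.05),
    nn.Linear(128, 128),
    nn.SELU(),
    nn.AlphaDropout(p=0.05),
    nn.Linear(128, 1),
)

# LeCun-normal initialization (std = 1/sqrt(fan_in)) preserves the
# self-normalizing fixed point; gain=1 via nonlinearity="linear".
for module in snn.modules():
    if isinstance(module, nn.Linear):
        nn.init.kaiming_normal_(module.weight, mode="fan_in", nonlinearity="linear")
        nn.init.zeros_(module.bias)

out = snn(torch.randn(32, 64))
print(out.shape)  # torch.Size([32, 1])
```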
