Activation Functions

Exponential Linear Unit

Introduced by Clevert et al. in Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

The Exponential Linear Unit (ELU) is an activation function for neural networks. In contrast to ReLUs, ELUs have negative values, which allows them to push mean unit activations closer to zero, similar to batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient, owing to a reduced bias shift effect. While LReLUs and PReLUs also take negative values, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value as inputs become more negative and thereby decrease the forward-propagated variation and information.

The exponential linear unit (ELU) with $\alpha > 0$ is:

$$f\left(x\right) = \begin{cases} x & \text{if } x > 0 \\ \alpha\left(\exp\left(x\right) - 1\right) & \text{if } x \leq 0 \end{cases}$$

Source: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
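As a concrete illustration, here is a minimal NumPy sketch of the piecewise definition above and its derivative. The function names and the default `alpha=1.0` are illustrative choices, not taken from the paper (though common deep learning frameworks also default to alpha = 1).

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (exp(x) - 1) for x <= 0.
    # np.where evaluates both branches, so clamp x at 0 inside exp()
    # to avoid overflow for large positive inputs.
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0)) - 1.0))

def elu_grad(x, alpha=1.0):
    # Derivative: 1 for x > 0, alpha * exp(x) for x <= 0
    # (equivalently f(x) + alpha on the negative side).
    return np.where(x > 0, 1.0, alpha * np.exp(np.minimum(x, 0)))

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 11)
    print(elu(x))       # saturates toward -alpha for very negative inputs
    print(elu_grad(x))  # gradient stays positive, approaching 0 on the left
```

The printed values show the saturation described above: as x decreases, f(x) approaches -alpha rather than growing without bound, which is what bounds the forward-propagated variation on the negative side.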
