The Exponential Linear Unit (ELU) is an activation function for neural networks. In contrast to ReLUs, ELUs can take negative values, which allows them to push mean unit activations closer to zero, like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient, because of a reduced bias-shift effect. While LReLUs and PReLUs also have negative values, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value as the input becomes more negative, and thereby decrease the forward-propagated variation and information.
The exponential linear unit (ELU) with $\alpha > 0$ is:

$$f\left(x\right) = \begin{cases} x & \text{if } x > 0 \\ \alpha\left(\exp\left(x\right) - 1\right) & \text{if } x \leq 0 \end{cases}$$
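The piecewise definition above can be sketched in NumPy as follows (a minimal reference implementation, not the version shipped by any particular framework; the clamp inside `expm1` only avoids overflow warnings, since `np.where` evaluates both branches):

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: x for x > 0, alpha * (exp(x) - 1) for x <= 0."""
    x = np.asarray(x, dtype=float)
    # expm1(x) computes exp(x) - 1 accurately for small |x|;
    # clamping at 0 keeps the unused branch from overflowing for large x.
    return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))

print(elu([2.0, 0.0, -2.0]))  # positive inputs pass through; negatives saturate toward -alpha
```

For very negative inputs the output approaches $-\alpha$, which is the saturation behavior the description refers to.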
Source: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

| Task | Papers | Share |
| --- | --- | --- |
| General Classification | 3 | 12.00% |
| Object Discovery | 2 | 8.00% |
| Sentence Embedding | 1 | 4.00% |
| Sentence Embeddings | 1 | 4.00% |
| Medical Diagnosis | 1 | 4.00% |
| EEG | 1 | 4.00% |
| Multi-Task Learning | 1 | 4.00% |
| Style Transfer | 1 | 4.00% |
| Synthetic Data Generation | 1 | 4.00% |