Swish is an activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, where $\beta$ is a learnable parameter. Nearly all implementations omit the learnable parameter $\beta$ (fixing $\beta = 1$), in which case the activation function reduces to $x\sigma(x)$ ("Swish-1").
The function $x\sigma(x)$ is identical to the SiLU, which was introduced by other authors before Swish. See Gaussian Error Linear Units (GELUs), where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function, where the same activation function was experimented with later.
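A minimal PyTorch sketch of this definition (the `Swish` class name and the `trainable` flag are illustrative, not from the paper; for the fixed $\beta = 1$ case, PyTorch's built-in `torch.nn.SiLU` can be used directly):

```python
import torch
import torch.nn as nn


class Swish(nn.Module):
    """Swish activation: f(x) = x * sigmoid(beta * x).

    With trainable=False and beta=1.0 this reduces to Swish-1 / SiLU,
    which is what most implementations (including torch.nn.SiLU) provide.
    """

    def __init__(self, beta: float = 1.0, trainable: bool = False):
        super().__init__()
        beta_t = torch.tensor(float(beta))
        if trainable:
            # Learnable per-layer beta, optimized with the rest of the network.
            self.beta = nn.Parameter(beta_t)
        else:
            # Fixed beta stored as a buffer (moves with the module, not trained).
            self.register_buffer("beta", beta_t)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)


# Usage: Swish-1 by default; pass trainable=True to learn beta.
act = Swish()
y = act(torch.randn(4))
```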
Source: Searching for Activation Functions
| Task | Papers | Share |
|---|---|---|
| Image Classification | 86 | 11.57% |
| Object Detection | 37 | 4.98% |
| Classification | 31 | 4.17% |
| Semantic Segmentation | 30 | 4.04% |
| General Classification | 26 | 3.50% |
| Deep Learning | 21 | 2.83% |
| Instance Segmentation | 12 | 1.62% |
| Prediction | 10 | 1.35% |
| Decoder | 9 | 1.21% |