Swish is an activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, where $\beta$ is a learnable parameter. In practice, nearly all implementations fix $\beta = 1$, in which case the activation function reduces to $x\sigma(x)$ ("Swish-1").
The function $x\sigma(x)$ is exactly the SiLU, which was introduced by other authors before Swish. The SiLU (Sigmoid Linear Unit) was originally coined in Gaussian Error Linear Units (GELUs); the same activation function was later experimented with in Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and in Swish: a Self-Gated Activation Function. A minimal PyTorch sketch of the definition is shown below.
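The sketch below illustrates the definition above; the module name `Swish` and the `trainable` flag are illustrative choices, not from the paper. With $\beta$ fixed at 1 the function reduces to $x\sigma(x)$, i.e. the SiLU, which PyTorch also ships directly as `torch.nn.SiLU`.

```python
import torch
import torch.nn as nn


class Swish(nn.Module):
    """Swish activation f(x) = x * sigmoid(beta * x), with optionally learnable beta."""

    def __init__(self, beta: float = 1.0, trainable: bool = False):
        super().__init__()
        if trainable:
            # Register beta as a parameter so the optimizer updates it during training.
            self.beta = nn.Parameter(torch.tensor(float(beta)))
        else:
            # Fixed beta: keep it as a buffer so it moves with the module (e.g. .to(device)).
            self.register_buffer("beta", torch.tensor(float(beta)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)


# With the default beta = 1 this is exactly the SiLU ("Swish-1").
x = torch.randn(4)
assert torch.allclose(Swish()(x), torch.nn.functional.silu(x))
```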
Source: Searching for Activation Functions
The tasks in which Swish most commonly appears, by paper count and share:

Task | Papers | Share |
---|---|---|
Image Classification | 72 | 14.06% |
Object Detection | 35 | 6.84% |
General Classification | 27 | 5.27% |
Classification | 25 | 4.88% |
Semantic Segmentation | 24 | 4.69% |
Instance Segmentation | 10 | 1.95% |
Multi-Task Learning | 9 | 1.76% |
Quantization | 6 | 1.17% |
Image Generation | 6 | 1.17% |